title | content | commands | url
---|---|---|---|
Administering Red Hat Satellite | Administering Red Hat Satellite Red Hat Satellite 6.11 A guide to administering Red Hat Satellite. Red Hat Satellite Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/index |
Chapter 13. The Apache HTTP Server | Chapter 13. The Apache HTTP Server The Apache HTTP Server provides an open-source HTTP server that supports the current HTTP standards. [14] In Red Hat Enterprise Linux, the httpd package provides the Apache HTTP Server. Enter the following command to see if the httpd package is installed: If it is not installed and you want to use the Apache HTTP Server, use the yum utility as the root user to install it: 13.1. The Apache HTTP Server and SELinux When SELinux is enabled, the Apache HTTP Server ( httpd ) runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. The following example demonstrates the httpd processes running in their own domain. This example assumes the httpd , setroubleshoot , setroubleshoot-server , and policycoreutils-python packages are installed: Run the getenforce command to confirm SELinux is running in enforcing mode: The command returns Enforcing when SELinux is running in enforcing mode. Enter the following command as root to start httpd : Confirm that the service is running. The output should include the information below (only the time stamp will differ): To view the httpd processes, execute the following command: The SELinux context associated with the httpd processes is system_u:system_r:httpd_t:s0 . The second-to-last part of the context, httpd_t , is the type. A type defines a domain for processes and a type for files. In this case, the httpd processes are running in the httpd_t domain. SELinux policy defines how processes running in confined domains (such as httpd_t ) interact with files, other processes, and the system in general. Files must be labeled correctly to allow httpd access to them. For example, httpd can read files labeled with the httpd_sys_content_t type, but cannot write to them, even if Linux (DAC) permissions allow write access. Booleans must be enabled to allow certain behavior, such as allowing scripts network access, allowing httpd access to NFS and CIFS volumes, and allowing httpd to execute Common Gateway Interface (CGI) scripts. When the /etc/httpd/conf/httpd.conf file is configured so httpd listens on a port other than TCP ports 80, 443, 488, 8008, 8009, or 8443, the semanage port command must be used to add the new port number to SELinux policy configuration. The following example demonstrates configuring httpd to listen on a port that is not already defined in SELinux policy configuration for httpd , and, as a consequence, httpd failing to start. This example also demonstrates how to then configure the SELinux system to allow httpd to successfully listen on a non-standard port that is not already defined in the policy. This example assumes the httpd package is installed. Run each command in the example as the root user: Enter the following command to confirm httpd is not running: If the output differs, stop the process: Use the semanage utility to view the ports SELinux allows httpd to listen on: Edit the /etc/httpd/conf/httpd.conf file as root. Configure the Listen option so it lists a port that is not configured in SELinux policy configuration for httpd . 
In this example, httpd is configured to listen on port 12345: Enter the following command to start httpd : An SELinux denial message similar to the following is logged: For SELinux to allow httpd to listen on port 12345, as used in this example, the following command is required: Start httpd again and have it listen on the new port: Now that SELinux has been configured to allow httpd to listen on a non-standard port (TCP 12345 in this example), httpd starts successfully on this port. To prove that httpd is listening and communicating on TCP port 12345, open a telnet connection to the specified port and issue an HTTP GET command, as follows: [14] For more information, see the section named The Apache HTTP Server in the System Administrator's Guide . | [
"~]USD rpm -q httpd package httpd is not installed",
"~]# yum install httpd",
"~]USD getenforce Enforcing",
"~]# systemctl start httpd.service",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Mon 2013-08-05 14:00:55 CEST; 8s ago",
"~]USD ps -eZ | grep httpd system_u:system_r:httpd_t:s0 19780 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19781 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19782 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19783 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19784 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 19785 ? 00:00:00 httpd",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: inactive (dead)",
"~]# systemctl stop httpd.service",
"~]# semanage port -l | grep -w http_port_t http_port_t tcp 80, 443, 488, 8008, 8009, 8443",
"Change this to Listen on specific IP addresses as shown below to prevent Apache from glomming onto all bound IP addresses (0.0.0.0) # #Listen 12.34.56.78:80 Listen 127.0.0.1:12345",
"~]# systemctl start httpd.service Job for httpd.service failed. See 'systemctl status httpd.service' and 'journalctl -xn' for details.",
"setroubleshoot: SELinux is preventing the httpd (httpd_t) from binding to port 12345. For complete SELinux messages. run sealert -l f18bca99-db64-4c16-9719-1db89f0d8c77",
"~]# semanage port -a -t http_port_t -p tcp 12345",
"~]# systemctl start httpd.service",
"~]# telnet localhost 12345 Trying 127.0.0.1 Connected to localhost. Escape character is '^]'. GET / HTTP/1.0 HTTP/1.1 200 OK Date: Wed, 02 Dec 2009 14:36:34 GMT Server: Apache/2.2.13 (Red Hat) Accept-Ranges: bytes Content-Length: 3985 Content-Type: text/html; charset=UTF-8 [...continues...]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-the_apache_http_server |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/amq_streams_on_openshift_overview/making-open-source-more-inclusive |
Chapter 11. Known issues | Chapter 11. Known issues This part describes known issues in Red Hat Enterprise Linux 8.9. 11.1. Installer and image creation During RHEL installation on IBM Z, udev does not assign predictable interface names to RoCE cards enumerated by FID If you start a RHEL 8.7 or later installation with the net.naming-scheme=rhel-8.7 kernel command-line option, the udev device manager on the RHEL installation media ignores this setting for RoCE cards enumerated by the function identifier (FID). As a consequence, udev assigns unpredictable interface names to these devices. There is no workaround during the installation, but you can configure the feature after the installation. For further details, see Determining a predictable RoCE device name on the IBM Z platform . (JIRA:RHEL-11397) Installation fails on IBM Power 10 systems with LPAR and secure boot enabled The RHEL installer is not integrated with static key secure boot on IBM Power 10 systems. Consequently, when a logical partition (LPAR) is enabled with the secure boot option, the installation fails with the error, Unable to proceed with RHEL-x.x Installation . To work around this problem, install RHEL without enabling secure boot. After booting the system: Copy the signed kernel into the PReP partition using the dd command. Restart the system and enable secure boot. Once the firmware verifies the bootloader and the kernel, the system boots up successfully. For more information, see https://www.ibm.com/support/pages/node/6528884 Bugzilla:2025814 [1] Unexpected SELinux policies on systems where Anaconda is running as an application When Anaconda is running as an application on an already installed system (for example, to perform another installation to an image file using the --image anaconda option), the system is not prevented from modifying the SELinux types and attributes during installation. As a consequence, certain elements of SELinux policy might change on the system where Anaconda is running. To work around this problem, do not run Anaconda on the production system. Instead, run Anaconda in a temporary virtual machine to keep the SELinux policy unchanged on a production system. Running Anaconda as part of the system installation process, such as installing from boot.iso or dvd.iso , is not affected by this issue. Bugzilla:2050140 The auth and authconfig Kickstart commands require the AppStream repository The authselect-compat package is required by the auth and authconfig Kickstart commands during installation. Without this package, the installation fails if auth or authconfig is used. However, by design, the authselect-compat package is only available in the AppStream repository. To work around this problem, verify that the BaseOS and AppStream repositories are available to the installation program or use the authselect Kickstart command during installation. Bugzilla:1640697 [1] The reboot --kexec and inst.kexec commands do not provide a predictable system state Performing a RHEL installation with the reboot --kexec Kickstart command or the inst.kexec kernel boot parameters does not provide the same predictable system state as a full reboot. As a consequence, switching to the installed system without rebooting can produce unpredictable results. Note that the kexec feature is deprecated and will be removed in a future release of Red Hat Enterprise Linux. 
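To illustrate the authselect Kickstart alternative mentioned in the auth and authconfig known issue above, the following is a minimal sketch; the selected profile and feature are examples and are not prescribed by the release note:
# Kickstart excerpt: configure authentication without the authselect-compat layer
authselect select sssd with-mkhomedir --force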
Bugzilla:1697896 [1] The USB CD-ROM drive is not available as an installation source in Anaconda Installation fails when the USB CD-ROM drive is the installation source and the Kickstart ignoredisk --only-use= command is specified. In this case, Anaconda cannot find and use this source disk. To work around this problem, use the harddrive --partition=sdX --dir=/ command to install from the USB CD-ROM drive. As a result, the installation does not fail. Jira:RHEL-4707 Network access is not enabled by default in the installation program Several installation features require network access, for example, registration of a system using the Content Delivery Network (CDN), NTP server support, and network installation sources. However, network access is not enabled by default, and as a result, these features cannot be used until network access is enabled. To work around this problem, add ip=dhcp to boot options to enable network access when the installation starts. Optionally, passing a Kickstart file or a repository located on the network using boot options also resolves the problem. As a result, the network-based installation features can be used. Bugzilla:1757877 [1] Hard drive partitioned installations with the iso9660 filesystem fail You cannot install RHEL on systems where the hard drive is partitioned with the iso9660 filesystem. This is due to the updated installation code that is set to ignore any hard disk containing an iso9660 file system partition. This happens even when RHEL is installed without using a DVD. To work around this problem, add the following script to the Kickstart file to format the disk before the installation starts. Note: Before performing the workaround, back up the data available on the disk. The wipefs command removes all the existing data from the disk. As a result, installations work as expected without any errors. Jira:RHEL-4711 IBM Power systems with HASH MMU mode fail to boot with memory allocation failures IBM Power Systems with HASH memory management unit (MMU) mode support kdump up to a maximum of 192 cores. Consequently, the system fails to boot with memory allocation failures if kdump is enabled on more than 192 cores. This limitation is due to RMA memory allocations during early boot in HASH MMU mode. To work around this problem, use the Radix MMU mode with fadump enabled instead of using kdump . Bugzilla:2028361 [1] RHEL for Edge installer image fails to create mount points when installing an rpm-ostree payload When deploying rpm-ostree payloads, used for example in a RHEL for Edge installer image, the installer does not properly create some mount points for custom partitions. As a consequence, the installation is aborted with the following error: To work around this issue: Use an automatic partitioning scheme and do not add any mount points manually. Manually assign mount points only inside /var directory. For example, /var/my-mount-point , and the following standard directories: / , /boot , /var . As a result, the installation process finishes successfully. Jira:RHEL-4744 Images built with the stig profile remediation fail to boot with FIPS error FIPS mode is not supported by RHEL image builder. When using RHEL image builder customized with the xccdf_org.ssgproject.content_profile_stig profile remediation, the system fails to boot with the following error: Enabling the FIPS policy manually after the system image installation with the fips-mode-setup --enable command does not work, because the /boot directory is on a different partition. 
System boots successfully if FIPS is disabled. Currently, there is no workaround available. Note You can manually enable FIPS after installing the image by using the fips-mode-setup --enable command. Jira:RHEL-4649 11.2. Security sshd -T provides inaccurate information about Ciphers, MACs and KeX algorithms The output of the sshd -T command does not contain the system-wide crypto policy configuration or other options that could come from an environment file in /etc/sysconfig/sshd and that are applied as arguments to the sshd command. This occurs because the upstream OpenSSH project did not support the Include directive to support Red-Hat-provided cryptographic defaults in RHEL 8. Crypto policies are applied as command-line arguments to the sshd executable in the sshd.service unit during the service's start by using an EnvironmentFile . To work around the problem, use the source command with the environment file and pass the crypto policy as an argument to the sshd command, as in sshd -T $CRYPTO_POLICY . For additional information, see Ciphers, MACs or KeX algorithms differ from sshd -T to what is provided by current crypto policy level . As a result, the output from sshd -T matches the currently configured crypto policy. Bugzilla:2044354 [1] RHV hypervisor may not work correctly when hardening the system during installation When installing Red Hat Virtualization Hypervisor (RHV-H) and applying the Red Hat Enterprise Linux 8 STIG profile, OSCAP Anaconda Add-on may harden the system as RHEL instead of RHV-H and remove essential packages for RHV-H. Consequently, the RHV hypervisor may not work. To work around the problem, install the RHV-H system without applying any profile hardening, and after the installation is complete, apply the profile by using OpenSCAP. As a result, the RHV hypervisor works correctly. Jira:RHEL-1826 CVE OVAL feeds are now only in the compressed format, and data streams are not in the SCAP 1.3 standard Red Hat provides CVE OVAL feeds in the bzip2-compressed format, and they are no longer available in the XML file format. Because referencing compressed content is not standardized in the Security Content Automation Protocol (SCAP) 1.3 specification, third-party SCAP scanners can have problems scanning rules that use the feed. Bugzilla:2028428 Certain Rsyslog priority strings do not work correctly Support for the GnuTLS priority string for imtcp that allows fine-grained control over encryption is not complete. Consequently, the following priority strings do not work properly in the Rsyslog remote logging application: To work around this problem, use only correctly working priority strings: As a result, current configurations must be limited to the strings that work correctly. Bugzilla:1679512 Server with GUI and Workstation installations are not possible with CIS Server profiles The CIS Server Level 1 and Level 2 security profiles are not compatible with the Server with GUI and Workstation software selections. As a consequence, a RHEL 8 installation with the Server with GUI software selection and CIS Server profiles is not possible. An attempted installation using the CIS Server Level 1 or Level 2 profiles and either of these software selections will generate the error message: If you need to align systems with the Server with GUI or Workstation software selections according to CIS benchmarks, use the CIS Workstation Level 1 or Level 2 profiles instead. 
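To illustrate the sshd -T workaround described at the start of this Security section, here is a minimal sketch; the path of the crypto-policy environment file is an assumption based on the default RHEL 8 layout:
~]# source /etc/crypto-policies/back-ends/opensshserver.config
~]# sshd -T $CRYPTO_POLICY | grep -i -E 'ciphers|macs|kexalgorithms'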
Bugzilla:1843932 Kickstart uses org_fedora_oscap instead of com_redhat_oscap in RHEL 8 The Kickstart references the Open Security Content Automation Protocol (OSCAP) Anaconda add-on as org_fedora_oscap instead of com_redhat_oscap , which might cause confusion. This is necessary to keep compatibility with Red Hat Enterprise Linux 7. Bugzilla:1665082 [1] libvirt overrides xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding The libvirt virtualization framework enables IPv4 forwarding whenever a virtual network with a forward mode of route or nat is started. This overrides the configuration by the xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding rule, and subsequent compliance scans report the fail result when assessing this rule. Apply one of the following workarounds: Uninstall the libvirt packages if your scenario does not require them. Change the forwarding mode of virtual networks created by libvirt . Remove the xccdf_org.ssgproject.content_rule_sysctl_net_ipv4_conf_all_forwarding rule by tailoring your profile. Bugzilla:2118758 The fapolicyd utility incorrectly allows executing changed files Correctly, the IMA hash of a file should update after any change to the file, and fapolicyd should prevent execution of the changed file. However, this does not happen due to differences in IMA policy setup and in file hashing by the evmctl utility. As a result, the IMA hash is not updated in the extended attribute of a changed file. Consequently, fapolicyd incorrectly allows the execution of the changed file. Jira:RHEL-520 [1] OpenSSL in FIPS mode accepts only specific D-H parameters In FIPS mode, TLS clients that use OpenSSL return a bad dh value error and abort TLS connections to servers that use manually generated parameters. This is because OpenSSL, when configured to work in compliance with FIPS 140-2, works only with Diffie-Hellman parameters compliant to NIST SP 800-56A rev3 Appendix D (groups 14, 15, 16, 17, and 18 defined in RFC 3526 and with groups defined in RFC 7919). Also, servers that use OpenSSL ignore all other parameters and instead select known parameters of similar size. To work around this problem, use only the compliant groups. Bugzilla:1810911 [1] crypto-policies incorrectly allow Camellia ciphers The RHEL 8 system-wide cryptographic policies should disable Camellia ciphers in all policy levels, as stated in the product documentation. However, the Kerberos protocol enables the ciphers by default. To work around the problem, apply the NO-CAMELLIA subpolicy: In the command, replace DEFAULT with the cryptographic level name if you have switched from DEFAULT previously. As a result, Camellia ciphers are correctly disallowed across all applications that use system-wide crypto policies only when you disable them through the workaround. Bugzilla:1919155 OpenSC might not detect CardOS V5.3 card objects correctly The OpenSC toolkit does not correctly read cache from different PKCS #15 file offsets used in some CardOS V5.3 cards. Consequently, OpenSC might not be able to list card objects, which prevents different applications from using them. To work around the problem, turn off file caching by setting the use_file_caching = false option in the /etc/opensc.conf file. Jira:RHEL-4077 Smart-card provisioning process through OpenSC pkcs15-init does not work properly The file_caching option is enabled in the default OpenSC configuration, and the file caching functionality does not handle some commands from the pkcs15-init tool properly. 
Consequently, the smart-card provisioning process through OpenSC fails. To work around the problem, add the following snippet to the /etc/opensc.conf file: The smart-card provisioning through pkcs15-init only works if you apply the previously described workaround. Bugzilla:1947025 Connections to servers with SHA-1 signatures do not work with GnuTLS SHA-1 signatures in certificates are rejected by the GnuTLS secure communications library as insecure. Consequently, applications that use GnuTLS as a TLS backend cannot establish a TLS connection to peers that offer such certificates. This behavior is inconsistent with other system cryptographic libraries. To work around this problem, upgrade the server to use certificates signed with a SHA-256 or stronger hash, or switch to the LEGACY policy. Bugzilla:1628553 [1] libselinux-python is available only through its module The libselinux-python package contains only Python 2 bindings for developing SELinux applications and it is used for backward compatibility. For this reason, libselinux-python is no longer available in the default RHEL 8 repositories through the yum install libselinux-python command. To work around this problem, enable both the libselinux-python and python27 modules, and install the libselinux-python package and its dependencies with the following commands: Alternatively, install libselinux-python using its install profile with a single command: As a result, you can install libselinux-python using the respective module. Bugzilla:1666328 [1] udica processes UBI 8 containers only when started with --env container=podman The Red Hat Universal Base Image 8 (UBI 8) containers set the container environment variable to the oci value instead of the podman value. This prevents the udica tool from analyzing a container JavaScript Object Notation (JSON) file. To work around this problem, start a UBI 8 container using a podman command with the --env container=podman parameter. As a result, udica can generate an SELinux policy for a UBI 8 container only when you use the described workaround. Bugzilla:1763210 Negative effects of the default logging setup on performance The default logging environment setup might consume 4 GB of memory or even more, and adjustments of rate-limit values are complex when systemd-journald is running with rsyslog . See the Negative effects of the RHEL default logging setup on performance and their mitigations Knowledgebase article for more information. Jira:RHELPLAN-10431 [1] SELINUX=disabled in /etc/selinux/config does not work properly Disabling SELinux using the SELINUX=disabled option in the /etc/selinux/config file results in a process in which the kernel boots with SELinux enabled and switches to disabled mode later in the boot process. This might cause memory leaks. To work around this problem, disable SELinux by adding the selinux=0 parameter to the kernel command line as described in the Changing SELinux modes at boot time section of the Using SELinux title if your scenario really requires completely disabling SELinux. Jira:RHELPLAN-34199 [1] IKE over TCP connections do not work on custom TCP ports The tcp-remoteport Libreswan configuration option does not work properly. Consequently, an IKE over TCP connection cannot be established when a scenario requires specifying a non-default TCP port. 
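As a sketch of the kernel command-line approach from the SELINUX=disabled known issue above, assuming a system that boots with GRUB 2 and has the grubby tool installed:
~]# grubby --update-kernel=ALL --args="selinux=0"   # takes effect after the next reboot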
Bugzilla:1989050 scap-security-guide cannot configure termination of idle sessions Even though the sshd_set_idle_timeout rule still exists in the data stream, the former method for idle session timeout of configuring sshd is no longer available. Therefore, the rule is marked as not applicable and cannot harden anything. Other methods for configuring idle session termination, such as systemd (Logind), are also not available. As a consequence, scap-security-guide cannot configure the system to reliably disconnect idle sessions after a certain amount of time. You can work around this problem in one of the following ways, which might fulfill the security requirement: Configuring the accounts_tmout rule. However, this variable could be overridden by using the exec command. Configuring the configure_tmux_lock_after_time and configure_bashrc_exec_tmux rules. This requires installing the tmux package. Upgrading to RHEL 8.7 or later where the systemd feature is already implemented together with the proper SCAP rule. Jira:RHEL-1804 The OSCAP Anaconda add-on does not fetch tailored profiles in the graphical installation The OSCAP Anaconda add-on does not provide an option to select or deselect tailoring of security profiles in the RHEL graphical installation. Starting from RHEL 8.8, the add-on does not take tailoring into account by default when installing from archives or RPM packages. Consequently, the installation displays the following error message instead of fetching an OSCAP tailored profile: To work around this problem, you must specify paths in the %addon org_fedora_oscap section of your Kickstart file, for example: As a result, you can use the graphical installation for OSCAP tailored profiles only with the corresponding Kickstart specifications. Jira:RHEL-1810 OpenSCAP memory-consumption problems On systems with limited memory, the OpenSCAP scanner might stop prematurely or it might not generate the results files. To work around this problem, you can customize the scanning profile to deselect rules that involve recursion over the entire / file system: rpm_verify_hashes rpm_verify_permissions rpm_verify_ownership file_permissions_unauthorized_world_writable no_files_unowned_by_user dir_perms_world_writable_system_owned file_permissions_unauthorized_suid file_permissions_unauthorized_sgid file_permissions_ungroupowned dir_perms_world_writable_sticky_bits For more details and more workarounds, see the related Knowledgebase article . Bugzilla:2161499 Rebuilding the rpm database assigns incorrect SELinux labeling Rebuilding the rpm database with the rpmdb --rebuilddb command assigns incorrect SELinux labels to the rpm database files. As a consequence, some services that use the rpm database might not work correctly. To work around this problem after rebuilding the database, relabel the database by using the restorecon -Rv /var/lib/rpm command. 
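A minimal sketch of the relabeling workaround for the rpm database issue described above:
~]# rpmdb --rebuilddb             # rebuilding can leave the database files mislabeled
~]# restorecon -Rv /var/lib/rpm   # restore the expected SELinux labels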
Bugzilla:2166153 ANSSI BP28 HP SCAP rules for Audit are incorrectly used on the 64-bit ARM architecture The ANSSI BP28 High profile in the SCAP Security Guide (SSG) contains the following security content automation protocol (SCAP) rules that configure the Linux Audit subsystem but are invalid on the 64-bit ARM architecture: audit_rules_unsuccessful_file_modification_creat audit_rules_unsuccessful_file_modification_open audit_rules_file_deletion_events_rename audit_rules_file_deletion_events_rmdir audit_rules_file_deletion_events_unlink audit_rules_dac_modification_chmod audit_rules_dac_modification_chown audit_rules_dac_modification_lchown If you configure your RHEL system running on a 64-bit ARM machine by using this profile, the Audit daemon does not start due to the use of invalid system calls. To work around the problem, either use profile tailoring to remove the previously mentioned rules from the data stream or remove the -S <syscall> snippets by editing files in the /etc/audit/rules.d directory. The files must not contain the following system calls: creat open rename rmdir unlink chmod chown lchown As a result of any of the two described workarounds, the Audit daemon can start even after you use the ANSSI BP28 High profile on a 64-bit ARM system. Jira:RHEL-1897 Remediating service-related rules during kickstart installations might fail During a kickstart installation, the OpenSCAP utility sometimes incorrectly shows that a service enable or disable state remediation is not needed. Consequently, OpenSCAP might set the services on the installed system to a non-compliant state. As a workaround, you can scan and remediate the system after the kickstart installation. This will fix the service-related issues. Bugzilla:1834716 11.3. Subscription management syspurpose addons have no effect on the subscription-manager attach --auto output In Red Hat Enterprise Linux 8, four attributes of the syspurpose command-line tool have been added: role , usage , service_level_agreement and addons . Currently, only role , usage and service_level_agreement affect the output of running the subscription-manager attach --auto command. Users who attempt to set values to the addons argument will not observe any effect on the subscriptions that are auto-attached. Bugzilla:1687900 11.4. Software management cr_compress_file_with_stat() can cause a memory leak The createrepo_c C library has the API cr_compress_file_with_stat() function. This function is declared with char **dst as a second parameter. Depending on its other parameters, cr_compress_file_with_stat() either uses dst as an input parameter, or uses it to return an allocated string. This unpredictable behavior can cause a memory leak, because it does not inform the user when to free dst contents. To work around this problem, a new API cr_compress_file_with_stat_v2 function has been added, which uses the dst parameter only as an input. It is declared as char *dst . This prevents memory leak. Note that the cr_compress_file_with_stat_v2 function is temporary and will be present only in RHEL 8. Later, cr_compress_file_with_stat() will be fixed instead. Bugzilla:1973588 [1] YUM transactions reported as successful when a scriptlet fails Since RPM version 4.6, post-install scriptlets are allowed to fail without being fatal to the transaction. This behavior propagates up to YUM as well. This results in scriptlets which might occasionally fail while the overall package transaction reports as successful. There is no workaround available at the moment. 
Note that this is expected behavior that remains consistent between RPM and YUM. Any issues in scriptlets should be addressed at the package level. Bugzilla:1986657 11.5. Shells and command-line tools ipmitool is incompatible with certain server platforms The ipmitool utility is used for monitoring, configuring, and managing devices that support the Intelligent Platform Management Interface (IPMI). The current version of ipmitool uses Cipher Suite 17 by default instead of Cipher Suite 3. Consequently, ipmitool fails to communicate with certain bare metal nodes that announced support for Cipher Suite 17 during negotiation, but do not actually support this cipher suite. As a result, ipmitool aborts with the no matching cipher suite error message. For more details, see the related Knowledgebase article . To solve this problem, update your baseboard management controller (BMC) firmware to use the Cipher Suite 17. Optionally, if the BMC firmware update is not available, you can work around this problem by forcing ipmitool to use a certain cipher suite. When invoking a managing task with ipmitool , add the -C option to the ipmitool command together with the number of the cipher suite you want to use. See the following example: Jira:RHEL-6846 ReaR fails to recreate a volume group when you do not use clean disks for restoring ReaR fails to perform recovery when you want to restore to disks that contain existing data. To work around this problem, wipe the disks manually before restoring to them if they have been previously used. To wipe the disks in the rescue environment, use one of the following commands before running the rear recover command: The dd command to overwrite the disks. The wipefs command with the -a flag to erase all available metadata. See the following example of wiping metadata from the /dev/sda disk: This command wipes the metadata from the partitions on /dev/sda first, and then the partition table itself. Bugzilla:1925531 coreutils might report misleading EPERM error codes GNU Core Utilities ( coreutils ) started using the statx() system call. If a seccomp filter returns an EPERM error code for unknown system calls, coreutils might consequently report misleading EPERM error codes because EPERM cannot be distinguished from the actual Operation not permitted error returned by a working statx() syscall. To work around this problem, update the seccomp filter to either permit the statx() syscall, or to return an ENOSYS error code for syscalls it does not know. Bugzilla:2030661 The %vmeff metric from the sysstat package displays incorrect values The sysstat package provides the %vmeff metric to measure the page reclaim efficiency. The values of the %vmeff column returned by the sar -B command are incorrect because sysstat does not parse all relevant /proc/vmstat values provided by later kernel versions. To work around this problem, you can calculate the %vmeff value manually from the /proc/vmstat file. For details, see Why the sar(1) tool reports %vmeff values beyond 100 % in RHEL 8 and RHEL 9? Jira:RHEL-12008 11.6. Infrastructure services Postfix TLS fingerprint algorithm in the FIPS mode needs to be changed to SHA-256 By default in RHEL 8, postfix uses MD5 fingerprints with TLS for backward compatibility. But in the FIPS mode, the MD5 hashing function is not available, which may cause TLS to function incorrectly in the default postfix configuration. To work around this problem, the hashing function needs to be changed to SHA-256 in the postfix configuration file. 
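A minimal sketch of that configuration change follows; the two parameter names are standard Postfix settings, but verify them against the Knowledgebase article referenced next:
# /etc/postfix/main.cf
smtp_tls_fingerprint_digest = sha256
smtpd_tls_fingerprint_digest = sha256
~]# systemctl restart postfix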
For more details, see the related Knowledgebase article Fix postfix TLS in the FIPS mode by switching to SHA-256 instead of MD5 . Bugzilla:1711885 The brltty package is not multilib compatible It is not possible to have both 32-bit and 64-bit versions of the brltty package installed. You can either install the 32-bit ( brltty.i686 ) or the 64-bit ( brltty.x86_64 ) version of the package. The 64-bit version is recommended. Bugzilla:2008197 11.7. Networking RoCE interfaces lose their IP settings due to an unexpected change of the network interface name The RDMA over Converged Ethernet (RoCE) interfaces lose their IP settings due to an unexpected change of the network interface name if both conditions are met: User upgrades from a RHEL 8.6 system or earlier. The RoCE card is enumerated by UID. To work around this problem: Create the /etc/systemd/network/98-rhel87-s390x.link file with the following content: Reboot the system for the changes to take effect. Upgrade to RHEL 8.7 or newer. Note that RoCE interfaces that are enumerated by function ID (FID) and are non-unique, will still use unpredictable interface names unless you set the net.naming-scheme=rhel-8.7 kernel parameter. In this case, the RoCE interfaces will switch to predictable names with the ens prefix. Jira:RHEL-11398 [1] Systems with the IPv6_rpfilter option enabled experience low network throughput Systems with the IPv6_rpfilter option enabled in the firewalld.conf file currently experience suboptimal performance and low network throughput in high traffic scenarios, such as 100 Gbps links. To work around the problem, disable the IPv6_rpfilter option. To do so, add the following line in the /etc/firewalld/firewalld.conf file. As a result, the system performs better, but also has reduced security. Bugzilla:1871860 [1] 11.8. Kernel The kernel ACPI driver reports it has no access to a PCIe ECAM memory region The Advanced Configuration and Power Interface (ACPI) table provided by firmware does not define a memory region on the PCI bus in the Current Resource Settings (_CRS) method for the PCI bus device. Consequently, the following warning message occurs during the system boot: However, the kernel is still able to access the 0x30000000-0x31ffffff memory region, and can assign that memory region to the PCI Enhanced Configuration Access Mechanism (ECAM) properly. You can verify that PCI ECAM works correctly by accessing the PCIe configuration space over the 256 byte offset with the following output: As a result, you can ignore the warning message. For more information about the problem, see the "Firmware Bug: ECAM area mem 0x30000000-0x31ffffff not reserved in ACPI namespace" appears during system boot solution. Bugzilla:1868526 [1] The tuned-adm profile powersave command causes the system to become unresponsive Executing the tuned-adm profile powersave command leads to an unresponsive state of the Penguin Valkyrie 2000 2-socket systems with the older Thunderx (CN88xx) processors. Consequently, reboot the system to resume working. To work around this problem, avoid using the powersave profile if your system matches the mentioned specifications. Bugzilla:1609288 [1] The HP NMI watchdog does not always generate a crash dump In certain cases, the hpwdt driver for the HP NMI watchdog is not able to claim a non-maskable interrupt (NMI) generated by the HPE watchdog timer because the NMI was instead consumed by the perfmon driver. 
The missing NMI is initiated by one of two conditions: The Generate NMI button on the Integrated Lights-Out (iLO) server management software. This button is triggered by a user. The hpwdt watchdog. The expiration by default sends an NMI to the server. Both sequences typically occur when the system is unresponsive. Under normal circumstances, the NMI handler for both these situations calls the kernel panic() function and, if configured, the kdump service generates a vmcore file. Because of the missing NMI, however, kernel panic() is not called and vmcore is not collected. In the first case (1.), if the system was unresponsive, it remains so. To work around this scenario, use the virtual Power button to reset or power cycle the server. In the second case (2.), the missing NMI is followed 9 seconds later by a reset from the Automated System Recovery (ASR). The HPE Gen9 Server line experiences this problem in single-digit percentages. The Gen10 line experiences it at an even smaller frequency. Bugzilla:1602962 [1] Reloading an identical crash extension may cause segmentation faults When you load a copy of an already loaded crash extension file, it might trigger a segmentation fault. Currently, the crash utility detects if an original file has been loaded. Consequently, due to two identical files co-existing in the crash utility, a namespace collision occurs, which triggers the crash utility to cause a segmentation fault. You can work around the problem by loading the crash extension file only once. As a result, segmentation faults no longer occur in the described scenario. Bugzilla:1906482 Connections fail when attaching a virtual function to a virtual machine Pensando network cards that use the ionic device driver silently accept VLAN tag configuration requests and attempt configuring network connections while attaching network virtual functions ( VF ) to a virtual machine ( VM ). Such network connections fail as this feature is not yet supported by the card's firmware. Bugzilla:1930576 [1] The OPEN MPI library may trigger run-time failures with default PML In OPEN Message Passing Interface (OPEN MPI) implementation 4.0.x series, Unified Communication X (UCX) is the default point-to-point communicator (PML). The later versions of OPEN MPI 4.0.x series deprecated openib Byte Transfer Layer (BTL). However, when OPEN MPI is run over a homogeneous cluster (same hardware and software configuration), UCX still uses openib BTL for MPI one-sided operations. As a consequence, this may trigger execution errors. To work around this problem: Run the mpirun command using the following parameters: where: The -mca btl openib parameter disables the openib BTL. The -mca pml ucx parameter configures OPEN MPI to use ucx PML. The -x UCX_NET_DEVICES= parameter restricts UCX to use the specified devices. When OPEN MPI is run over a heterogeneous cluster (different hardware and software configuration), it uses UCX as the default PML. As a consequence, this may cause the OPEN MPI jobs to run with erratic performance, unresponsive behavior, or crash failures. To work around this problem, set the UCX priority as: Run the mpirun command using the following parameters: As a result, the OPEN MPI library is able to choose an alternative available transport layer over UCX. Bugzilla:1866402 [1] vmcore capture fails after memory hot-plug or unplug operation After performing the memory hot-plug or hot-unplug operation, the event occurs only after the device tree, which contains memory layout information, is updated. 
As a result, the makedumpfile utility tries to access a non-existent physical address. The problem appears if all of the following conditions are met: A little-endian variant of IBM Power System runs RHEL 8. The kdump or fadump service is enabled on the system. Consequently, the capture kernel fails to save vmcore if a kernel crash is triggered after the memory hot-plug or hot-unplug operation. To work around this problem, restart the kdump service after hot-plug or hot-unplug: As a result, vmcore is successfully saved in the described scenario. Bugzilla:1793389 [1] Using irqpoll causes vmcore generation failure An existing problem with the nvme driver on the 64-bit ARM architecture running on the Amazon Web Services Graviton 1 processor causes vmcore generation to fail when you provide the irqpoll kernel command line parameter to the first kernel. Consequently, no vmcore file is dumped in the /var/crash/ directory upon a kernel crash. To work around this problem: Append irqpoll to the KDUMP_COMMANDLINE_REMOVE variable in the /etc/sysconfig/kdump file. Remove irqpoll from the KDUMP_COMMANDLINE_APPEND variable in the /etc/sysconfig/kdump file. Restart the kdump service: As a result, the first kernel boots correctly and the vmcore file is expected to be captured upon the kernel crash. Note that the Amazon Web Services Graviton 2 and Amazon Web Services Graviton 3 processors do not require you to manually remove the irqpoll parameter in the /etc/sysconfig/kdump file. The kdump service can use a significant amount of crash kernel memory to dump the vmcore file. Ensure that the capture kernel has sufficient memory available for the kdump service. For related information on this Known Issue, see The irqpoll kernel command line parameter might cause vmcore generation failure article. Bugzilla:1654962 [1] Hardware certification of the real-time kernel on systems with large core-counts might require passing the skew_tick=1 boot parameter Large or moderately sized systems with numerous sockets and large core-counts can experience latency spikes due to lock contentions on xtime_lock , which is used in the timekeeping system. As a consequence, latency spikes and delays in hardware certifications might occur on multiprocessing systems. As a workaround, you can offset the timer tick per CPU to start at a different time by adding the skew_tick=1 boot parameter. To avoid lock conflicts, enable skew_tick=1 : Enable the skew_tick=1 parameter with grubby . Reboot for changes to take effect. Verify the new settings by displaying the kernel parameters you pass during boot. Note that enabling skew_tick=1 causes a significant increase in power consumption and, therefore, it must be enabled only if you are running latency-sensitive real-time workloads. Jira:RHEL-9318 [1] Debug kernel fails to boot in crash capture environment on RHEL 8 Due to the memory-intensive nature of the debug kernel, a problem occurs when the debug kernel is in use and a kernel panic is triggered. As a consequence, the debug kernel is not able to boot as the capture kernel and a stack trace is generated instead. To work around this problem, increase the crash kernel memory as required. As a result, the debug kernel boots successfully in the crash capture environment. Bugzilla:1659609 [1] Allocating crash kernel memory fails at boot time On some Ampere Altra systems, allocating the crash kernel memory during boot fails when the 32-bit region is disabled in BIOS settings. Consequently, the kdump service fails to start. 
This is caused by memory fragmentation in the region below 4 GB with no fragment being large enough to contain the crash kernel memory. To work around this problem, enable the 32-bit memory region in BIOS as follows: Open the BIOS settings on your system. Open the Chipset menu. Under Memory Configuration , enable the Slave 32-bit option. As a result, crash kernel memory allocation within the 32-bit region succeeds and the kdump service works as expected. Bugzilla:1940674 [1] The QAT manager leaves no spare device for LKCF The Intel(R) QuickAssist Technology (QAT) manager ( qatmgr ) is a user space process, which by default uses all QAT devices in the system. As a consequence, there are no QAT devices left for the Linux Kernel Cryptographic Framework (LKCF). There is no need to work around this situation, as this behavior is expected and a majority of users will use acceleration from the user space. Bugzilla:1920086 [1] Solarflare NICs fail to create the maximum number of virtual functions (VFs) The Solarflare NICs fail to create the maximum number of VFs due to insufficient resources. You can check the maximum number of VFs that a PCIe device can create in the /sys/bus/pci/devices/PCI_ID/sriov_totalvfs file. To work around this problem, you can lower either the number of VFs or the VF MSI interrupt value, either from the Solarflare Boot Manager on startup, or by using the Solarflare sfboot utility. The default VF MSI interrupt value is 8 . To adjust the VF MSI interrupt value using sfboot : Note Adjusting the VF MSI interrupt value affects the VF performance. For more information about parameters to be adjusted accordingly, see the Solarflare Server Adapter user guide . Bugzilla:1971506 [1] Using page_poison=1 can cause a kernel crash When using page_poison=1 as the kernel parameter on firmware with a faulty EFI implementation, the operating system can cause the kernel to crash. By default, this option is disabled and it is not recommended to enable it, especially in production systems. Bugzilla:2050411 [1] The iwl7260-firmware breaks Wi-Fi on Intel Wi-Fi 6 AX200, AX210, and Lenovo ThinkPad P1 Gen 4 After updating the iwl7260-firmware or iwl7260-wifi driver to the version provided by RHEL 8.7 and later, the hardware gets into an incorrect internal state and reports its state incorrectly. Consequently, Intel Wi-Fi 6 cards may not work and display the error message: An unconfirmed workaround is to power the system off and back on again. Do not reboot. Bugzilla:2106341 [1] Secure boot on IBM Power Systems does not support migration Currently, on IBM Power Systems, logical partition (LPAR) does not boot after successful physical volume (PV) migration. As a result, any type of automated migration with secure boot enabled on a partition fails. Bugzilla:2126777 [1] weak-modules from kmod fails to work with module inter-dependencies The weak-modules script provided by the kmod package determines which modules are kABI-compatible with installed kernels. However, while checking modules' kernel compatibility, weak-modules processes module symbol dependencies from higher to lower release of the kernel for which they were built. As a consequence, modules with inter-dependencies built against different kernel releases might be interpreted as non-compatible, and therefore the weak-modules script fails to work in this scenario. To work around the problem, build or put the extra modules against the latest stock kernel before you install the new kernel. 
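For the Solarflare VF issue above, you can check the ceiling reported by the device before adjusting anything; the PCI address below is only a placeholder:
~]$ cat /sys/bus/pci/devices/0000:5e:00.0/sriov_totalvfs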
Bugzilla:2103605 [1] kdump in Ampere Altra servers enters the OOM state The firmware in Ampere Altra and Altra Max servers currently causes the kernel to allocate too many event, interrupt and command queues, which consumes too much memory. As a consequence, the kdump kernel enters the Out of memory (OOM) state. To work around this problem, reserve extra memory for kdump by increasing the value of the crashkernel= kernel option to 640M . Bugzilla:2111855 [1] 11.9. File systems and storage LVM mirror devices that store a LUKS volume sometimes become unresponsive Mirrored LVM devices with a segment type of mirror that store a LUKS volume might become unresponsive under certain conditions. The unresponsive devices reject all I/O operations. To work around the issue, Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror if you need to stack LUKS volumes on top of resilient software-defined storage. The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a mirrored LVM device to a RAID1 device . Bugzilla:1730502 [1] The /boot file system cannot be placed on LVM You cannot place the /boot file system on an LVM logical volume. This limitation exists for the following reasons: On EFI systems, the EFI System Partition conventionally serves as the /boot file system. The uEFI standard requires a specific GPT partition type and a specific file system type for this partition. RHEL 8 uses the Boot Loader Specification (BLS) for system boot entries. This specification requires that the /boot file system is readable by the platform firmware. On EFI systems, the platform firmware can read only the /boot configuration defined by the uEFI standard. The support for LVM logical volumes in the GRUB 2 boot loader is incomplete. Red Hat does not plan to improve the support because the number of use cases for the feature is decreasing due to standards such as uEFI and BLS. Red Hat does not plan to support /boot on LVM. Instead, Red Hat provides tools for managing system snapshots and rollback that do not need the /boot file system to be placed on an LVM logical volume. Bugzilla:1496229 [1] LVM no longer allows creating volume groups with mixed block sizes LVM utilities such as vgcreate or vgextend no longer allow you to create volume groups (VGs) where the physical volumes (PVs) have different logical block sizes. LVM has adopted this change because file systems fail to mount if you extend the underlying logical volume (LV) with a PV of a different block size. To re-enable creating VGs with mixed block sizes, set the allow_mixed_block_sizes=1 option in the lvm.conf file. Bugzilla:1768536 Limitations of LVM writecache The writecache LVM caching method has the following limitations, which are not present in the cache method: You cannot name a writecache logical volume when using pvmove commands. You cannot use logical volumes with writecache in combination with thin pools or VDO. The following limitation also applies to the cache method: You cannot resize a logical volume while cache or writecache is attached to it. Jira:RHELPLAN-27987 [1] , Bugzilla:1808012, Bugzilla:1798631 Device-mapper multipath is not supported when using NVMe/TCP driver. The use of device-mapper multipath on top of NVMe/TCP devices can cause reduced performance and error handling. To avoid this problem, use native NVMe multipath instead of DM multipath tools. 
For RHEL 8, you can add the option nvme_core.multipath=Y to the kernel command line. Bugzilla:2022359 [1] The blk-availability systemd service deactivates complex device stacks In systemd , the default block deactivation code does not always handle complex stacks of virtual block devices correctly. In some configurations, virtual devices might not be removed during the shutdown, which causes error messages to be logged. To work around this problem, deactivate complex block device stacks by executing the following command: As a result, complex virtual device stacks are correctly deactivated during shutdown and do not produce error messages. Bugzilla:2011699 [1] XFS quota warnings are triggered too often Using the quota timer results in quota warnings triggering too often, which causes soft quotas to be enforced faster than they should. To work around this problem, do not use soft quotas, which will prevent triggering warnings. As a result, the amount of warning messages will not enforce soft quota limit anymore, respecting the configured timeout. Bugzilla:2059262 [1] 11.10. Dynamic programming languages, web and database servers Creating virtual Python 3.11 environments fails when using the virtualenv utility The virtualenv utility in RHEL 8, provided by the python3-virtualenv package, is not compatible with Python 3.11. An attempt to create a virtual environment by using virtualenv will fail with the following error message: To create Python 3.11 virtual environments, use the python3.11 -m venv command instead, which uses the venv module from the standard library. Bugzilla:2165702 python3.11-lxml does not provide the lxml.isoschematron submodule The python3.11-lxml package is distributed without the lxml.isoschematron submodule because it is not under an open source license. The submodule implements ISO Schematron support. As an alternative, pre-ISO-Schematron validation is available in the lxml.etree.Schematron class. The remaining content of the python3.11-lxml package is unaffected. Bugzilla:2157673 PAM plug-in version 1.0 does not work in MariaDB MariaDB 10.3 provides the Pluggable Authentication Modules (PAM) plug-in version 1.0. MariaDB 10.5 provides the plug-in versions 1.0 and 2.0, version 2.0 is the default. The MariaDB PAM plug-in version 1.0 does not work in RHEL 8. To work around this problem, use the PAM plug-in version 2.0 provided by the mariadb:10.5 module stream. Bugzilla:1942330 Symbol conflicts between OpenLDAP libraries might cause crashes in httpd When both the libldap and libldap_r libraries provided by OpenLDAP are loaded and used within a single process, symbol conflicts between these libraries might occur. Consequently, Apache httpd child processes using the PHP ldap extension might terminate unexpectedly if the mod_security or mod_auth_openidc modules are also loaded by the httpd configuration. Since the RHEL 8.3 update to the Apache Portable Runtime (APR) library, you can work around the problem by setting the APR_DEEPBIND environment variable, which enables the use of the RTLD_DEEPBIND dynamic linker option when loading httpd modules. When the APR_DEEPBIND environment variable is enabled, crashes no longer occur in httpd configurations that load conflicting libraries. Bugzilla:1819607 [1] getpwnam() might fail when called by a 32-bit application When a user of NIS uses a 32-bit application that calls the getpwnam() function, the call fails if the nss_nis.i686 package is missing. 
To work around this problem, manually install the missing package by using the yum install nss_nis.i686 command. Bugzilla:1803161 11.11. Identity Management Actions required when running Samba as a print server and updating from RHEL 8.4 and earlier With this update, the samba package no longer creates the /var/spool/samba/ directory. If you use Samba as a print server and use /var/spool/samba/ in the [printers] share to spool print jobs, SELinux prevents Samba users from creating files in this directory. Consequently, print jobs fail and the auditd service logs a denied message in /var/log/audit/audit.log . To avoid this problem after updating your system from 8.4 and earlier: Search the [printers] share in the /etc/samba/smb.conf file. If the share definition contains path = /var/spool/samba/ , update the setting and set the path parameter to /var/tmp/ . Restart the smbd service: If you newly installed Samba on RHEL 8.5 or later, no action is required. The default /etc/samba/smb.conf file provided by the samba-common package in this case already uses the /var/tmp/ directory to spool print jobs. Bugzilla:2009213 [1] Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system. Bugzilla:1729215 FIPS mode does not support using a shared secret to establish a cross-forest trust Establishing a cross-forest trust using a shared secret fails in FIPS mode because NTLMSSP authentication is not FIPS-compliant. To work around this problem, authenticate with an Active Directory (AD) administrative account when establishing a trust between an IdM domain with FIPS mode enabled and an AD domain. Jira:RHEL-4847 Downgrading authselect after the rebase to version 1.2.2 breaks system authentication The authselect package has been rebased to the latest upstream version 1.2.2 . Downgrading authselect is not supported and breaks system authentication for all users, including root . If you downgraded the authselect package to 1.2.1 or earlier, perform the following steps to work around this problem: At the GRUB boot screen, select Red Hat Enterprise Linux with the version of the kernel that you want to boot and press e to edit the entry. Type single as a separate word at the end of the line that starts with linux and press Ctrl+X to start the boot process. Upon booting in single-user mode, enter the root password. Restore authselect configuration using the following command: Bugzilla:1892761 IdM to AD cross-realm TGS requests fail The Privilege Attribute Certificate (PAC) information in IdM Kerberos tickets is now signed with AES SHA-2 HMAC encryption, which is not supported by Active Directory (AD). Consequently, IdM to AD cross-realm TGS requests, that is, two-way trust setups, are failing with the following error: Jira:RHEL-4910 Potential risk when using the default value for ldap_id_use_start_tls option When using ldap:// without TLS for identity lookups, it can pose a risk for an attack vector. Particularly a man-in-the-middle (MITM) attack which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . 
Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. The default behavior is planned to be changed in a future release of RHEL. Jira:RHELPLAN-155168 [1] pki-core-debuginfo update from RHEL 8.6 to RHEL 8.7 or later fails Updating the pki-core-debuginfo package from RHEL 8.6 to RHEL 8.7 or later fails. To work around this problem, run the following commands: yum remove pki-core-debuginfo yum update -y yum install pki-core-debuginfo yum install idm-pki-symkey-debuginfo idm-pki-tools-debuginfo Jira:RHEL-13125 [1] Migrated IdM users might be unable to log in due to mismatching domain SIDs If you have used the ipa migrate-ds script to migrate users from one IdM deployment to another, those users might have problems using IdM services because their previously existing Security Identifiers (SIDs) do not have the domain SID of the current IdM environment. For example, those users can retrieve a Kerberos ticket with the kinit utility, but they cannot log in. To work around this problem, see the following Knowledgebase article: Migrated IdM users unable to log in due to mismatching domain SIDs . Jira:RHELPLAN-109613 [1] IdM in FIPS mode does not support using the NTLMSSP protocol to establish a two-way cross-forest trust Establishing a two-way cross-forest trust between Active Directory (AD) and Identity Management (IdM) with FIPS mode enabled fails because the New Technology LAN Manager Security Support Provider (NTLMSSP) authentication is not FIPS-compliant. IdM in FIPS mode does not accept the RC4 NTLM hash that the AD domain controller uses when attempting to authenticate. Jira:RHEL-4898 IdM Vault encryption and decryption fails in FIPS mode The OpenSSL RSA-PKCS1v15 padding encryption is blocked if FIPS mode is enabled. Consequently, Identity Management (IdM) Vaults fail to work correctly as IdM is currently using the PKCS1v15 padding for wrapping the session key with the transport certificate. Jira:RHEL-12153 [1] Incorrect warning when setting expiration dates for a Kerberos principal If you set a password expiration date for a Kerberos principal, the current timestamp is compared to the expiration timestamp using a 32-bit signed integer variable. If the expiration date is more than 68 years in the future, it causes an integer variable overflow resulting in the following warning message being displayed: You can ignore this message, the password will expire correctly at the configured date and time. Bugzilla:2125318 SSSD retrieves incomplete list of members if the group size exceeds 1500 members During the integration of SSSD with Active Directory, SSSD retrieves incomplete group member lists when the group size exceeds 1500 members. This issue occurs because Active Directory's MaxValRange policy, which restricts the number of members retrievable in a single query, is set to 1500 by default. To work around this problem, change the MaxValRange setting in Active Directory to accommodate larger group sizes. Jira:RHELDOCS-19603 [1] 11.12. 
Desktop Disabling flatpak repositories from Software Repositories is not possible Currently, it is not possible to disable or remove flatpak repositories in the Software Repositories tool in the GNOME Software utility. Bugzilla:1668760 Generation 2 RHEL 8 virtual machines sometimes fail to boot on Hyper-V Server 2016 hosts When using RHEL 8 as the guest operating system on a virtual machine (VM) running on a Microsoft Hyper-V Server 2016 host, the VM in some cases fails to boot and returns to the GRUB boot menu. In addition, the following error is logged in the Hyper-V event log: This error occurs due to a UEFI firmware bug on the Hyper-V host. To work around this problem, use Hyper-V Server 2019 or later as the host. Bugzilla:1583445 [1] Drag-and-drop does not work between desktop and applications Due to a bug in the gnome-shell-extensions package, the drag-and-drop functionality does not currently work between desktop and applications. Support for this feature will be added back in a future release. Bugzilla:1717947 WebKitGTK fails to display web pages on IBM Z The WebKitGTK web browser engine fails when trying to display web pages on the IBM Z architecture. The web page remains blank and the WebKitGTK process terminates unexpectedly. As a consequence, you cannot use certain features of applications that use WebKitGTK to display web pages, such as the following: The Evolution mail client The GNOME Online Accounts settings The GNOME Help application Jira:RHEL-4158 11.13. Graphics infrastructures The radeon driver fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon falls over, which causes the rest of the kdump service to fail. To work around this problem, disable radeon in kdump by adding the following line to the /etc/kdump.conf file: Restart the system and kdump . After starting kdump , the force_rebuild 1 line might be removed from the configuration file. Note that in this scenario, no graphics is available during the dump process, but kdump works correctly. Bugzilla:1694705 [1] Multiple HDR displays on a single MST topology may not power on On systems using NVIDIA Turing GPUs with the nouveau driver, using a DisplayPort hub (such as a laptop dock) with multiple monitors which support HDR plugged into it may result in failure to turn on. This is due to the system erroneously thinking there is not enough bandwidth on the hub to support all of the displays. Bugzilla:1812577 [1] GUI in ESXi might crash due to low video memory The graphical user interface (GUI) on RHEL virtual machines (VMs) in the VMware ESXi 7.0.1 hypervisor with vCenter Server 7.0.1 requires a certain amount of video memory. If you connect multiple consoles or high-resolution monitors to the VM, the GUI requires at least 16 MB of video memory. If you start the GUI with less video memory, the GUI might terminate unexpectedly. To work around the problem, configure the hypervisor to assign at least 16 MB of video memory to the VM. As a result, the GUI on the VM no longer crashes. If you encounter this issue, Red Hat recommends that you report it to VMware. See also the following VMware article: VMs with high resolution VM console may experience a crash on ESXi 7.0.1 (83194) . Bugzilla:1910358 [1] VNC Viewer displays wrong colors with the 16-bit color depth on IBM Z The VNC Viewer application displays wrong colors when you connect to a VNC session on an IBM Z server with the 16-bit color depth. 
To work around the problem, set the 24-bit color depth on the VNC server. With the Xvnc server, replace the -depth 16 option with -depth 24 in the Xvnc configuration. As a result, VNC clients display the correct colors but use more network bandwidth with the server. Bugzilla:1886147 Unable to run graphical applications using sudo command When trying to run graphical applications as a user with elevated privileges, the application fails to open with an error message. The failure happens because Xwayland is restricted by the Xauthority file to use regular user credentials for authentication. To work around this problem, use the sudo -E command to run graphical applications as a root user. Bugzilla:1673073 Hardware acceleration is not supported on ARM Built-in graphics drivers do not support hardware acceleration or the Vulkan API on the 64-bit ARM architecture. To enable hardware acceleration or Vulkan on ARM, install the proprietary Nvidia driver. Jira:RHELPLAN-57914 [1] 11.14. Red Hat Enterprise Linux system roles Using the RHEL system role with Ansible 2.9 can display a warning about using dnf with the command module Since RHEL 8.8, the RHEL system roles no longer use the warn parameter with the dnf module because this parameter was removed in Ansible Core 2.14. However, if you still use the latest rhel-system-roles package with Ansible 2.9 and a role installs a package, one of the following warnings can be displayed: If you want to hide these warnings, add the command_warnings = False setting to the [Defaults] section of the ansible.cfg file. However, note that this setting disables all warnings in Ansible. Jira:RHELDOCS-17954 Unable to manage localhost by using the localhost hostname in the playbook or inventory With the inclusion of the ansible-core 2.13 package in RHEL, if you are running Ansible on the same host that you manage, you cannot manage that host by using the localhost hostname in your playbook or inventory. This happens because ansible-core 2.13 uses the python38 module, and many of the libraries are missing, for example, blivet for the storage role and gobject for the network role. To work around this problem, if you are already using the localhost hostname in your playbook or inventory, you can add a connection by using ansible_connection=local , or by creating an inventory file that lists localhost with the ansible_connection=local option. With that, you are able to manage resources on localhost . For more details, see the article RHEL system roles playbooks fail when run on localhost . Bugzilla:2041997 The rhc system role fails on already registered systems when rhc_auth contains activation keys Executing playbook files on already registered systems fails if activation keys are specified for the rhc_auth parameter. To work around this issue, do not specify activation keys when executing the playbook file on an already registered system. Bugzilla:2186908 11.15. Virtualization Using a large number of queues might cause Windows virtual machines to fail Windows virtual machines (VMs) might fail when the virtual Trusted Platform Module (vTPM) device is enabled and the multi-queue virtio-net feature is configured to use more than 250 queues. This problem is caused by a limitation in the vTPM device. The vTPM device has a hardcoded limit on the maximum number of open file descriptors. Because multiple file descriptors are opened for every new queue, the internal vTPM limit can be exceeded, causing the VM to fail.
To work around this problem, choose one of the following two options: Keep the vTPM device enabled, but use less than 250 queues. Disable the vTPM device to use more than 250 queues. Jira:RHEL-13336 [1] The Milan VM CPU type is sometimes not available on AMD Milan systems On certain AMD Milan systems, the Enhanced REP MOVSB ( erms ) and Fast Short REP MOVSB ( fsrm ) feature flags are disabled in the BIOS by default. Consequently, the Milan CPU type might not be available on these systems. In addition, VM live migration between Milan hosts with different feature flag settings might fail. To work around these problems, manually turn on erms and fsrm in the BIOS of your host. Bugzilla:2077770 [1] SMT CPU topology is not detected by VMs when using host passthrough mode on AMD EPYC When a virtual machine (VM) boots with the CPU host passthrough mode on an AMD EPYC host, the TOPOEXT CPU feature flag is not present. Consequently, the VM is not able to detect a virtual CPU topology with multiple threads per core. To work around this problem, boot the VM with the EPYC CPU model instead of host passthrough. Bugzilla:1740002 Attaching LUN devices to virtual machines using virtio-blk does not work The q35 machine type does not support transitional virtio 1.0 devices, and RHEL 8 therefore lacks support for features that were deprecated in virtio 1.0. In particular, it is not possible on a RHEL 8 host to send SCSI commands from virtio-blk devices. As a consequence, attaching a physical disk as a LUN device to a virtual machine fails when using the virtio-blk controller. Note that physical disks can still be passed through to the guest operating system, but they should be configured with the device='disk' option rather than device='lun' . Bugzilla:1777138 [1] Virtual machines sometimes fail to start when using many virtio-blk disks Adding a large number of virtio-blk devices to a virtual machine (VM) may exhaust the number of interrupt vectors available in the platform. If this occurs, the VM's guest OS fails to boot, and displays a dracut-initqueue[392]: Warning: Could not boot error. Bugzilla:1719687 Virtual machines with iommu_platform=on fail to start on IBM POWER RHEL 8 currently does not support the iommu_platform=on parameter for virtual machines (VMs) on IBM POWER system. As a consequence, starting a VM with this parameter on IBM POWER hardware results in the VM becoming unresponsive during the boot process. Bugzilla:1910848 IBM POWER hosts now work correctly when using the ibmvfc driver When running RHEL 8 on a PowerVM logical partition (LPAR), a variety of errors could previously occur due to problems with the ibmvfc driver. As a consequence, a kernel panic triggered on the host under certain circumstances, such as: Using the Live Partition Mobility (LPM) feature Resetting a host adapter Using SCSI error handling (SCSI EH) functions With this update, the handling of ibmvfc has been fixed, and the described kernel panics no longer occur. Bugzilla:1961722 [1] Using perf kvm record on IBM POWER Systems can cause the VM to crash When using a RHEL 8 host on the little-endian variant of IBM POWER hardware, using the perf kvm record command to collect trace event samples for a KVM virtual machine (VM) in some cases results in the VM becoming unresponsive. This situation occurs when: The perf utility is used by an unprivileged user, and the -p option is used to identify the VM - for example perf kvm record -e trace_cycles -p 12345 . The VM was started using the virsh shell. 
To work around this problem, use the perf kvm utility with the -i option to monitor VMs that were created using the virsh shell. For example: Note that when using the -i option, child tasks do not inherit counters, and threads will therefore not be monitored. Bugzilla:1924016 [1] Windows Server 2016 virtual machines with Hyper-V enabled fail to boot when using certain CPU models Currently, it is not possible to boot a virtual machine (VM) that uses Windows Server 2016 as the guest operating system, has the Hyper-V role enabled, and uses one of the following CPU models: EPYC-IBPB EPYC To work around this problem, use the EPYC-v3 CPU model, or manually enable the xsaves CPU flag for the VM. Bugzilla:1942888 [1] Migrating a POWER9 guest from a RHEL 7-ALT host to RHEL 8 fails Currently, migrating a POWER9 virtual machine from a RHEL 7-ALT host system to RHEL 8 becomes unresponsive with a Migration status: active status. To work around this problem, disable Transparent Huge Pages (THP) on the RHEL 7-ALT host, which enables the migration to complete successfully. Bugzilla:1741436 [1] Using virt-customize sometimes causes guestfs-firstboot to fail After modifying a virtual machine (VM) disk image using the virt-customize utility, the guestfs-firstboot service in some cases fails due to incorrect SELinux permissions. This causes a variety of problems during VM startup, such as failing user creation or system registration. To avoid this problem, use the virt-customize command with the --selinux-relabel option. Bugzilla:1554735 Deleting a forward interface from a macvtap virtual network resets all connection counts of this network Currently, deleting a forward interface from a macvtap virtual network with multiple forward interfaces also resets the connection status of the other forward interfaces of the network. As a consequence, the connection information in the live network XML is incorrect. Note, however, that this does not affect the functionality of the virtual network. To work around the issue, restart the libvirtd service on your host. Bugzilla:1332758 Virtual machines with SLOF fail to boot in netcat interfaces When using a netcat ( nc ) interface to access the console of a virtual machine (VM) that is currently waiting at the Slimline Open Firmware (SLOF) prompt, the user input is ignored and VM stays unresponsive. To work around this problem, use the nc -C option when connecting to the VM, or use a telnet interface instead. Bugzilla:1974622 [1] Attaching mediated devices to virtual machines in virt-manager in some cases fails The virt-manager application is currently able to detect mediated devices, but cannot recognize whether the device is active. As a consequence, attempting to attach an inactive mediated device to a running virtual machine (VM) using virt-manager fails. Similarly, attempting to create a new VM that uses an inactive mediated device fails with a device not found error. To work around this issue, use the virsh nodedev-start or mdevctl start commands to activate the mediated device before using it in virt-manager . Bugzilla:2026985 RHEL 9 virtual machines fail to boot in POWER8 compatibility mode Currently, booting a virtual machine (VM) that runs RHEL 9 as its guest operating system fails if the VM also uses CPU configuration similar to the following: To work around this problem, do not use POWER8 compatibility mode in RHEL 9 VMs. In addition, note that running RHEL 9 VMs is not possible on POWER8 hosts. 
Bugzilla:2035158 SUID and SGID are not cleared automatically on virtiofs When you run the virtiofsd service with the killpriv_v2 feature, your system may not automatically clear the SUID and SGID permissions after performing some file-system operations. Consequently, not clearing the permissions might cause a potential security threat. To work around this issue, disable the killpriv_v2 feature by entering the following command: Bugzilla:1966475 [1] Restarting the OVS service on a host might block network connectivity on its running VMs When the Open vSwitch (OVS) service restarts or crashes on a host, virtual machines (VMs) that are running on this host cannot recover the state of the networking device. As a consequence, VMs might be completely unable to receive packets. This problem only affects systems that use the packed virtqueue format in their virtio networking stack. To work around this problem, use the packed=off parameter in the virtio networking device definition to disable packed virtqueue. With packed virtqueue disabled, the state of the networking device can, in some situations, be recovered from RAM. Bugzilla:1792683 NFS failure during VM migration causes migration failure and source VM coredump Currently, if the NFS service or server is shut down during virtual machine (VM) migration, the source VM's QEMU is unable to reconnect to the NFS server when it starts running again. As a result, the migration fails and a coredump is initiated on the source VM. Currently, there is no workaround available. Bugzilla:2177957 nodedev-dumpxml does not list attributes correctly for certain mediated devices Currently, the nodedev-dumpxml does not list attributes correctly for mediated devices that were created using the nodedev-create command. To work around this problem, use the nodedev-define and nodedev-start commands instead. Bugzilla:2143160 Starting a VM with an NVIDIA A16 GPU sometimes causes the host GPU to stop working Currently, if you start a VM that uses an NVIDIA A16 GPU passthrough device, the NVIDIA A16 GPU physical device on the host system in some cases stops working. To work around the problem, reboot the hypervisor and set the reset_method for the GPU device to bus : For details, see the Red Hat Knowledgebase . Jira:RHEL-2451 [1] 11.16. RHEL in cloud environments Setting static IP in a RHEL virtual machine on a VMware host does not work Currently, when using RHEL as a guest operating system of a virtual machine (VM) on a VMware host, the DatasourceOVF function does not work correctly. As a consequence, if you use the cloud-init utility to set the VM's network to static IP and then reboot the VM, the VM's network will be changed to DHCP. To work around this issue, see the VMware Knowledge Base . Jira:RHEL-12122 kdump sometimes does not start on Azure and Hyper-V On RHEL 8 guest operating systems hosted on the Microsoft Azure or Hyper-V hypervisors, starting the kdump kernel in some cases fails when post-exec notifiers are enabled. To work around this problem, disable crash kexec post notifiers: Bugzilla:1865745 [1] The SCSI host address sometimes changes when booting a Hyper-V VM with multiple guest disks Currently, when booting a RHEL 8 virtual machine (VM) on the Hyper-V hypervisor, the host portion of the Host, Bus, Target, Lun (HBTL) SCSI address in some cases changes. As a consequence, automated tasks set up with the HBTL SCSI identification or device node in the VM do not work consistently. 
This occurs if the VM has more than one disk or if the disks have different sizes. To work around the problem, modify your kickstart files, using one of the following methods: Method 1: Use persistent identifiers for SCSI devices. You can use for example the following powershell script to determine the specific device identifiers: You can use this script on the hyper-v host, for example as follows: Afterwards, the disk values can be used in the kickstart file, for example as follows: As these values are specific for each virtual disk, the configuration needs to be done for each VM instance. It may, therefore, be useful to use the %include syntax to place the disk information into a separate file. Method 2: Set up device selection by size. A kickstart file that configures disk selection based on size must include lines similar to the following: Bugzilla:1906870 [1] RHEL instances on Azure fail to boot if provisioned by cloud-init and configured with an NFSv3 mount entry Currently, booting a RHEL virtual machine (VM) on the Microsoft Azure cloud platform fails if the VM was provisioned by the cloud-init tool and the guest operating system of the VM has an NFSv3 mount entry in the /etc/fstab file. Bugzilla:2081114 [1] 11.17. Supportability The getattachment command fails to download multiple attachments at once The redhat-support-tool command offers the getattachment subcommand for downloading attachments. However, getattachment is currently only able to download a single attachment and fails to download multiple attachments. As a workaround, you can download multiple attachments one by one by passing the case number and UUID for each attachment in the getattachment subcommand. Bugzilla:2064575 redhat-support-tool does not work with the FUTURE crypto policy Because a cryptographic key used by a certificate on the Customer Portal API does not meet the requirements by the FUTURE system-wide cryptographic policy, the redhat-support-tool utility does not work with this policy level at the moment. To work around this problem, use the DEFAULT crypto policy while connecting to the Customer Portal API. Jira:RHEL-2345 Timeout when running sos report on IBM Power Systems, Little Endian When running the sos report command on IBM Power Systems, Little Endian with hundreds or thousands of CPUs, the processor plugin reaches its default timeout of 300 seconds when collecting huge content of the /sys/devices/system/cpu directory. As a workaround, increase the plugin's timeout accordingly: For one-time setting, run: For a permanent change, edit the [plugin_options] section of the /etc/sos/sos.conf file: The example value is set to 1800. The particular timeout value highly depends on a specific system. To set the plugin's timeout appropriately, you can first estimate the time needed to collect the one plugin with no timeout by running the following command: Bugzilla:2011413 [1] 11.18. Containers Running systemd within an older container image does not work Running systemd within an older container image, for example, centos:7 , does not work: To work around this problem, use the following commands: Jira:RHELPLAN-96940 [1] | [
"%pre wipefs -a /dev/sda %end",
"The command 'mount --bind /mnt/sysimage/data /mnt/sysroot/data' exited with the code 32.",
"Warning: /boot//.vmlinuz-<kernel version>.x86_64.hmac does not exist FATAL: FIPS integrity test failed Refusing to continue",
"NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+DHE-RSA:+AES-256-GCM:+SIGN-RSA-SHA384:+COMP-ALL:+GROUP-ALL",
"NONE:+VERS-ALL:-VERS-TLS1.3:+MAC-ALL:+ECDHE-RSA:+AES-128-CBC:+SIGN-RSA-SHA1:+COMP-ALL:+GROUP-ALL",
"package xorg-x11-server-common has been added to the list of excluded packages, but it can't be removed from the current software selection without breaking the installation.",
"update-crypto-policies --set DEFAULT:NO-CAMELLIA",
"app pkcs15-init { framework pkcs15 { use_file_caching = false; } }",
"yum module enable libselinux-python yum install libselinux-python",
"yum module install libselinux-python:2.8/common",
"There was an unexpected problem with the supplied content.",
"xccdf-path = /usr/share/xml/scap/sc_tailoring/ds-combined.xml tailoring-path = /usr/share/xml/scap/sc_tailoring/tailoring-xccdf.xml",
"ipmitool -I lanplus -H myserver.example.com -P mypass -C 3 chassis power status",
"wipefs -a /dev/sda[1-9] /dev/sda",
"[Match] Architecture=s390x KernelCommandLine=!net.naming-scheme=rhel-8.7 [Link] NamePolicy=kernel database slot path AlternativeNamesPolicy=database slot path MACAddressPolicy=persistent",
"IPv6_rpfilter=no",
"[ 2.817152] acpi PNP0A08:00: [Firmware Bug]: ECAM area [mem 0x30000000-0x31ffffff] not reserved in ACPI namespace [ 2.827911] acpi PNP0A08:00: ECAM at [mem 0x30000000-0x31ffffff] for [bus 00-1f]",
"03:00.0 Non-Volatile memory controller: Sandisk Corp WD Black 2018/PC SN720 NVMe SSD (prog-if 02 [NVM Express]) Capabilities: [900 v1] L1 PM Substates L1SubCap: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2+ ASPM_L1.1- L1_PM_Substates+ PortCommonModeRestoreTime=255us PortTPowerOnTime=10us L1SubCtl1: PCI-PM_L1.2- PCI-PM_L1.1- ASPM_L1.2- ASPM_L1.1- T_CommonMode=0us LTR1.2_Threshold=0ns L1SubCtl2: T_PwrOn=10us",
"-mca btl openib -mca pml ucx -x UCX_NET_DEVICES=mlx5_ib0",
"-mca pml_ucx_priority 5",
"systemctl restart kdump.service",
"KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\"",
"KDUMP_COMMANDLINE_APPEND=\"irqpoll nr_cpus=1 reset_devices cgroup_disable=memory udev.children-max=2 panic=10 swiotlb=noforce novmcoredd\"",
"systemctl restart kdump",
"grubby --update-kernel=ALL --args=\"skew_tick=1\"",
"cat /proc/cmdline",
"sfboot vf-msix-limit=2",
"kernel: iwlwifi 0000:09:00.0: Failed to start RT ucode: -110 kernel: iwlwifi 0000:09:00.0: WRT: Collecting data: ini trigger 13 fired (delay=0ms) kernel: iwlwifi 0000:09:00.0: Failed to run INIT ucode: -110",
"systemctl enable --now blk-availability.service",
"virtualenv -p python3.11 venv3.11 Running virtualenv with interpreter /usr/bin/python3.11 ERROR: Virtual environments created by virtualenv < 20 are not compatible with Python 3.11. ERROR: Use `python3.11 -m venv` instead.",
"systemctl restart smbd",
"authselect select sssd --force",
"Generic error (see e-text) while getting credentials for <service principal>",
"Warning: Your password will expire in less than one hour on [expiration date]",
"The guest operating system reported that it failed with the following error code: 0x1E",
"dracut_args --omit-drivers \"radeon\" force_rebuild 1",
"[WARNING]: Consider using the dnf module rather than running 'dnf'. If you need to use command because dnf is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.",
"[WARNING]: Consider using the yum, dnf or zypper module rather than running 'rpm'. If you need to use command because yum, dnf or zypper is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message.",
"perf kvm record -e trace_imc/trace_cycles/ -p <guest pid> -i",
"<cpu mode=\"host-model\"> <model>power8</model> </cpu>",
"virtiofsd -o no_killpriv_v2",
"echo bus > /sys/bus/pci/devices/<DEVICE-PCI-ADDRESS>/reset_method cat /sys/bus/pci/devices/<DEVICE-PCI-ADDRESS>/reset_method bus",
"echo N > /sys/module/kernel/parameters/crash_kexec_post_notifiers",
"Output what the /dev/disk/by-id/<value> for the specified hyper-v virtual disk. Takes a single parameter which is the virtual disk file. Note: kickstart syntax works with and without the /dev/ prefix. param ( [Parameter(Mandatory=USDtrue)][string]USDvirtualdisk ) USDwhat = Get-VHD -Path USDvirtualdisk USDpart = USDwhat.DiskIdentifier.ToLower().split('-') USDp = USDpart[0] USDs0 = USDp[6] + USDp[7] + USDp[4] + USDp[5] + USDp[2] + USDp[3] + USDp[0] + USDp[1] USDp = USDpart[1] USDs1 = USDp[2] + USDp[3] + USDp[0] + USDp[1] [string]::format(\"/dev/disk/by-id/wwn-0x60022480{0}{1}{2}\", USDs0, USDs1, USDpart[4])",
"PS C:\\Users\\Public\\Documents\\Hyper-V\\Virtual hard disks> .\\by-id.ps1 .\\Testing_8\\disk_3_8.vhdx /dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4 PS C:\\Users\\Public\\Documents\\Hyper-V\\Virtual hard disks> .\\by-id.ps1 .\\Testing_8\\disk_3_9.vhdx /dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2",
"part / --fstype=xfs --grow --asprimary --size=8192 --ondisk=/dev/disk/by-id/wwn-0x600224807270e09717645b1890f8a9a2 part /home --fstype=\"xfs\" --grow --ondisk=/dev/disk/by-id/wwn-0x60022480e00bc367d7fd902e8bf0d3b4",
"Disk partitioning information is supplied in a file to kick start %include /tmp/disks Partition information is created during install using the %pre section %pre --interpreter /bin/bash --log /tmp/ks_pre.log # Dump whole SCSI/IDE disks out sorted from smallest to largest ouputting # just the name disks=(`lsblk -n -o NAME -l -b -x SIZE -d -I 8,3`) || exit 1 # We are assuming we have 3 disks which will be used # and we will create some variables to represent d0=USD{disks[0]} d1=USD{disks[1]} d2=USD{disks[2]} echo \"part /home --fstype=\"xfs\" --ondisk=USDd2 --grow\" >> /tmp/disks echo \"part swap --fstype=\"swap\" --ondisk=USDd0 --size=4096\" >> /tmp/disks echo \"part / --fstype=\"xfs\" --ondisk=USDd1 --grow\" >> /tmp/disks echo \"part /boot --fstype=\"xfs\" --ondisk=USDd1 --size=1024\" >> /tmp/disks %end",
"sos report -k processor.timeout=1800",
"Specify any plugin options and their values here. These options take the form plugin_name.option_name = value #rpm.rpmva = off processor.timeout = 1800",
"time sos report -o processor -k processor.timeout=0 --batch --build",
"podman run --rm -ti centos:7 /usr/lib/systemd/systemd Storing signatures Failed to mount cgroup at /sys/fs/cgroup/systemd: Operation not permitted [!!!!!!] Failed to mount API filesystems, freezing.",
"mkdir /sys/fs/cgroup/systemd mount none -t cgroup -o none,name=systemd /sys/fs/cgroup/systemd podman run --runtime /usr/bin/crun --annotation=run.oci.systemd.force_cgroup_v1=/sys/fs/cgroup --rm -ti centos:7 /usr/lib/systemd/systemd"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.9_release_notes/known-issues |
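For the "Unable to manage localhost" issue described in the system roles section above, the documented ansible_connection=local workaround can be expressed as a minimal inventory sketch (the file name inventory.ini and the playbook name site.yml are illustrative):
# inventory.ini
localhost ansible_connection=local
ansible-playbook -i inventory.ini site.yml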
22.2. Enabling and Disabling Write Barriers | 22.2. Enabling and Disabling Write Barriers To mitigate the risk of data corruption during power loss, some storage devices use battery-backed write caches. Generally, high-end arrays and some hardware controllers use battery-backed write caches. However, because the cache's volatility is not visible to the kernel, Red Hat Enterprise Linux 7 enables write barriers by default on all supported journaling file systems. Note Write caches are designed to increase I/O performance. However, enabling write barriers means constantly flushing these caches, which can significantly reduce performance. For devices with non-volatile, battery-backed write caches and those with write-caching disabled, you can safely disable write barriers at mount time using the -o nobarrier option for mount . However, some devices do not support write barriers; such devices log an error message to /var/log/messages . For more information, see Table 22.1, "Write Barrier Error Messages per File System" . Table 22.1. Write Barrier Error Messages per File System File System Error Message ext3/ext4 JBD: barrier-based sync failed on device - disabling barriers XFS Filesystem device - Disabling barriers, trial barrier write failed btrfs btrfs: disabling barriers on dev device | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/writebarrieronoff |
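A brief illustration of the mount-time option described above, assuming a device with a non-volatile, battery-backed write cache (device, mount point, and file system are placeholders):
mount -o nobarrier /dev/mapper/vg_data-lv_data /data
The equivalent persistent setting is an /etc/fstab entry such as:
/dev/mapper/vg_data-lv_data  /data  xfs  defaults,nobarrier  0 0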
function::ustack | function::ustack Name function::ustack - Return address at given depth of user stack backtrace Synopsis Arguments n number of levels to descend in the stack. Description Performs a simple (user space) backtrace and returns the element at the specified position. The results of the backtrace are cached, so the backtrace computation is performed at most once, no matter how many times ustack is called or in what order. | [
"ustack:long(n:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ustack |
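A small usage sketch for ustack as a stap one-liner (the probed binary /bin/ls and the frame depths are illustrative; probing function("main") requires debuginfo for the target binary):
stap -e 'probe process("/bin/ls").function("main") { printf("frame 0: 0x%x, frame 1: 0x%x\n", ustack(0), ustack(1)) }' -c /bin/ls
Because the backtrace is cached, calling ustack twice within the same probe hit, as shown here, does not repeat the backtrace computation.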
17.3.4. Other Partitioning Problems for IBM Power Systems Users | 17.3.4. Other Partitioning Problems for IBM Power Systems Users If you create partitions manually but cannot move to the next screen, you probably have not created all the partitions necessary for installation to proceed. You must have the following partitions as a bare minimum: A / (root) partition. A swap partition of type swap. A PReP Boot partition. A /boot/ partition. Refer to Section 16.17.5, "Recommended Partitioning Scheme" for more information. Note When defining a partition's type as swap, do not assign it a mount point. Anaconda automatically assigns the mount point for you. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-trouble-completeparts-ppc
27.5. Enabling Console Access for Other Applications | 27.5. Enabling Console Access for Other Applications To make other applications accessible to console users, a bit more work is required. First of all, console access only works for applications which reside in /sbin/ or /usr/sbin/ , so the application that you wish to run must be there. After verifying that, do the following steps: Create a link from the name of your application, such as our sample foo program, to the /usr/bin/consolehelper application: Create the file /etc/security/console.apps/ foo : Create a PAM configuration file for the foo service in /etc/pam.d/ . An easy way to do this is to start with a copy of the halt service's PAM configuration file, and then modify the file if you want to change the behavior: Now, when /usr/bin/ foo is executed, consolehelper is called, which authenticates the user with the help of /usr/sbin/userhelper . To authenticate the user, consolehelper asks for the user's password if /etc/pam.d/ foo is a copy of /etc/pam.d/halt (otherwise, it does precisely what is specified in /etc/pam.d/ foo ) and then runs /usr/sbin/ foo with root permissions. In the PAM configuration file, an application can be configured to use the pam_timestamp module to remember (or cache) a successful authentication attempt. When an application is started and proper authentication is provided (the root password), a timestamp file is created. By default, a successful authentication is cached for five minutes. During this time, any other application that is configured to use pam_timestamp and run from the same session is automatically authenticated for the user - the user does not have to enter the root password again. This module is included in the pam package. To enable this feature, the PAM configuration file in etc/pam.d/ must include the following lines: The first line that begins with auth should be after any other auth sufficient lines, and the line that begins with session should be after any other session optional lines. If an application configured to use pam_timestamp is successfully authenticated from the Main Menu Button (on the Panel), the icon is displayed in the notification area of the panel if you are running the GNOME or KDE desktop environment. After the authentication expires (the default is five minutes), the icon disappears. The user can select to forget the cached authentication by clicking on the icon and selecting the option to forget authentication. | [
"cd /usr/bin ln -s consolehelper foo",
"touch /etc/security/console.apps/ foo",
"cp /etc/pam.d/halt /etc/pam.d/foo",
"auth sufficient /lib/security/pam_timestamp.so session optional /lib/security/pam_timestamp.so"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Console_Access-Enabling_Console_Access_for_Other_Applications |
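A short interaction sketch for the hypothetical foo application after completing the steps above (the exact prompt depends on the PAM configuration copied from halt, and the cached second run applies only if the pam_timestamp lines shown above are present):
foo        # consolehelper/userhelper prompts for a password, then runs /usr/sbin/foo with root permissions
foo        # run again within five minutes: the authentication is cached, so no prompt appears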
Chapter 1. Migration Toolkit for Virtualization 2.6 | Chapter 1. Migration Toolkit for Virtualization 2.6 You can use the Migration Toolkit for Virtualization (MTV) to migrate virtual machines from the following source providers to OpenShift Virtualization destination providers: VMware vSphere Red Hat Virtualization (RHV) OpenStack Open Virtual Appliances (OVAs) that were created by VMware vSphere Remote OpenShift Virtualization clusters The release notes describe technical changes, new features and enhancements, known issues, and resolved issues. 1.1. Technical changes This release has the following technical changes: Simplified the creation of vSphere providers In earlier releases of MTV, users had to specify a fingerprint when creating a vSphere provider. This required users to retrieve the fingerprint from the server that vCenter runs on. MTV no longer requires this fingerprint as an input, but rather computes it from the specified certificate in the case of a secure connection, or automatically retrieves it from the server that runs vCenter/ESXi in the case of an insecure connection. Redesigned the migration plan creation dialog The user interface console has improved the process of creating a migration plan. The new migration plan dialog enables faster creation of migration plans. It includes only the minimal settings that are required, while you can configure advanced settings separately. The new dialog also provides defaults for network and storage mappings, where applicable. The new dialog can also be invoked from the Provider > Virtual Machines tab, after selecting the virtual machines to migrate. It also better aligns with the user experience in the OCP console. Virtual machine preferences have replaced OpenShift templates The virtual machine preferences have replaced OpenShift templates. MTV currently falls back to using OpenShift templates when a relevant preference is not available. Custom mappings of guest operating system type to virtual machine preference can be configured using config maps, which makes it possible to use custom virtual machine preferences or to support additional guest operating system types. Full support for migration from OVA Migration from OVA moves from being a Technical Preview to being a fully supported feature. The VM is posted with its desired Running state MTV creates the VM with its desired Running state on the target provider, instead of creating the VM and then running it as an additional operation. (MTV-794) The must-gather logs can now be loaded only by using the CLI The MTV web console can no longer download logs. With this update, you must download must-gather logs by using CLI commands. For more information, see Must Gather Operator . MTV no longer runs pvc-init pods when migrating from vSphere MTV no longer runs pvc-init pods during cold migration from a vSphere provider to the OpenShift cluster that MTV is deployed on. However, in other flows where data volumes are used, they are set with the cdi.kubevirt.io/storage.bind.immediate.requested annotation, and CDI runs first-consume pods for storage classes with volume binding mode WaitForFirstConsumer . 1.2. New features and enhancements This section provides features and enhancements introduced in Migration Toolkit for Virtualization 2.6. 1.2.1. New features and enhancements 2.6.3 Support for migrating LUKS-encrypted devices in migrations from vSphere You can now perform cold migrations from a vSphere provider of VMs whose virtual disks are encrypted by Linux Unified Key Setup (LUKS).
(MTV-831) Specifying the primary disk when migrating from vSphere You can now specify the primary disk when you migrate VMs from vSphere with more than one bootable disk. This avoids MTV automatically attempting to convert the first bootable disk that it detects while it examines all the disks of a virtual machine. This feature is needed because the first bootable disk is not necessarily the disk that the VM is expected to boot from in OpenShift Virtualization. (MTV-1079) Links to remote provider UIs You can now remotely access the UI of a remote cluster when you create a source provider. For example, if the provider is a remote Red Hat Virtualization RHV cluster, MTV adds a link to the remote RHV web console when you define the provider. This feature makes it easier for you to manage and debug a migration from remote clusters. (MTV-1054) 1.2.2. New features and enhancements 2.6.0 Migration from vSphere over a secure connection You can now specify a CA certificate that can be used to authenticate the server that runs vCenter or ESXi, depending on the specified SDK endpoint of the vSphere provider. (MTV-530) Migration to or from a remote OpenShift over a secure connection You can now specify a CA certificate that can be used to authenticate the API server of a remote OpenShift cluster. (MTV-728) Migration from an ESXi server without going through vCenter MTV enables the configuration of vSphere providers with the SDK of ESXi. You need to select ESXi as the Endpoint type of the vSphere provider and specify the URL of the SDK of the ESXi server. (MTV-514) Migration of image-based VMs from OpenStack MTV supports the migration of VMs that were created from images in OpenStack. (MTV-644) Migration of VMs with Fibre Channel LUNs from RHV MTV supports migrations of VMs that are set with Fibre Channel (FC) LUNs from RHV. As with other LUN disks, you need to ensure the OpenShift nodes have access to the FC LUNs. During the migrations, the FC LUNs are detached from the source VMs in RHV and attached to the migrated VMs in OpenShift. (MTV-659) Preserve CPU types of VMs that are migrated from RHV MTV sets the CPU type of migrated VMs in OpenShift with their custom CPU type in RHV. In addition, a new option was added to migration plans that are set with RHV as a source provider to preserve the original CPU types of source VMs. When this option is selected, MTV identifies the CPU type based on the cluster configuration and sets this CPU type for the migrated VMs, for which the source VMs are not set with a custom CPU. (MTV-547) Validation for RHEL 6 guest operating system is now available when migrating VMs with RHEL 6 guest operating system Red Hat Enterprise Linux (RHEL) 9 does not support RHEL 6 as a guest operating system. Therefore, RHEL 6 is not supported in OpenShift Virtualization. With this update, a validation of RHEL 6 guest operating system was added to OpenShift Virtualization. (MTV413) Automatic retrieval of CA certificates for the provider's URL in the console The ability to retrieve CA certificates, which was available in versions, has been restored. The vSphere Verify certificate option is in the add-provider dialog. This option was removed in the transition to the Red Hat OpenShift console and has now been added to the console. This functionality is also available for RHV, OpenStack, and OpenShift providers now. 
(MTV-737) Validation of a specified VDDK image MTV validates the availability of a VDDK image that is specified for a vSphere provider on the target OpenShift name as part of the validation of a migration plan. MTV also checks whether the libvixDiskLib.so symbolic link (symlink) exists within the image. If the validation fails, the migration plan cannot be started. (MTV-618) Add a warning and partial support for TPM MTV presents a warning when attempting to migrate a VM that is set with a TPM device from RHV or vSphere. The migrated VM in OpenShift would be set with a TPM device but without the content of the TPM device on the source environment. (MTV-378) Plans that failed to migrate VMs can now be edited With this update, you can edit plans that have failed to migrate any VMs. Some plans fail or are canceled because of incorrect network and storage mappings. You can now edit these plans until they succeed. (MTV-779) Validation rules are now available for OVA The validation service includes default validation rules for virtual machines from the Open Virtual Appliance (OVA). (MTV-669) 1.3. Resolved issues This release has the following resolved issues: 1.3.1. Resolved issues 2.6.7 Incorrect handling of quotes in ifcfg files In earlier releases of MTV, there was an issue with the incorrect handling of single and double quotes in interface configuration (ifcfg) files, which control the software interfaces for individual network devices. This issue has been resolved in MTV 2.6.7, in order to cover additional IP configurations on Red Hat Enterprise Linux, CentOS, Rocky Linux and similar distributions. (MTV-1439) Failure to preserve netplan based network configuration In earlier releases of MTV, there was an issue with the preservation of netplan-based network configurations. This issue has been resolved in MTV 2.6.7, so that static IP configurations are preserved if netplan (netplan.io) is used by using the netplan configuration files to generate udev rules for known mac-address and ifname tuples. (MTV-1440) Error messages are written into udev .rules files In earlier releases of MTV, there was an issue with the accidental leakage of error messages into udev .rules files. This issue has been resolved in MTV 2.6.7, with a static IP persistence script added to the udev rule file. (MTV-1441) 1.3.2. Resolved issues 2.6.6 Runtime error: invalid memory address or nil pointer dereference In earlier releases of MTV, there was a runtime error of invalid memory address or nil pointer dereference caused by a pointer that was nil, and there was an attempt to access the value that it points to. This issue has been resolved in MTV 2.6.6. (MTV-1353) All Plan and Migration pods scheduled to same node causing the node to crash In earlier releases of MTV, the scheduler could place all migration pods on a single node. When this happened, the node ran out of the resources. This issue has been resolved in MTV 2.6.6. (MTV-1354) Empty bearer token is sufficient for authentication In earlier releases of MTV, a vulnerability was found in the Forklift Controller. There is no verification against the authorization header, except to ensure it uses bearer authentication. Without an authorization header and a bearer token, a 401 error occurs. The presence of a token value provides a 200 response with the requested information. This issue has been resolved in MTV 2.6.6. For more details, see (CVE-2024-8509) . 1.3.3. 
Resolved issues 2.6.5 VMware Linux interface name changes during migration In earlier releases of MTV, during the migration of Rocky Linux 8, CentOS 7.2 and later, and Ubuntu 22 virtual machines (VM) from VMware to Red Hat OpenShift (OCP), the name of the network interfaces is modified, and the static IP configuration for the VM is no longer functional. This issue has been resolved for static IPs in Rocky Linux 8, Centos 7.2 and later, Ubuntu 22 in MTV 2.6.5. (MTV-595) 1.3.4. Resolved issues 2.6.4 Disks and drives are offline after migrating Windows virtual machines from RHV or VMware to OCP Windows (Windows 2022) VMs configured with multiple disks, which are Online before the migration, are Offline after a successful migration from RHV or VMware to Red Hat OpenShift, using MTV. Only the C:\ primary disk is Online . This issue has been resolved for basic disks in MTV 2.6.4. (MTV-1299) For details of the known issue of dynamic disks being Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd , see (MTV-1344) . Preserve IP option for Windows does not preserve all settings In earlier releases of MTV, while migrating a Windows 2022 Server with a static IP address assigned, and selecting the Preserve static IPs option, after a successful Windows migration, while the node started and the IP address was preserved, the subnet mask, gateway, and DNS servers were not preserved. This resulted in an incomplete migration, and the customer was forced to log in locally from the console to fully configure the network. This issue has been resolved in MTV 2.6.4. (MTV-1286) qemu-guest-agent not being installed at first boot in Windows Server 2022 After a successful Windows 2022 server guest migration using MTV 2.6.1, the qemu-guest-agent is not completely installed. The Windows Scheduled task is being created, however it is being set to run 4 hours in the future instead of the intended 2 minutes in the future. (MTV-1325) 1.3.5. Resolved issues 2.6.3 CVE-2024-24788: golang: net malformed DNS message can cause infinite loop In earlier releases of MTV, there was a flaw was discovered in the stdlib package of the Go programming language, which impacts versions of MTV. This vulnerability primarily threatens web-facing applications and services that rely on Go for DNS queries. This issue has been resolved in MTV 2.6.3. For more details, see (CVE-2024-24788) . Migration scheduling does not take into account that virt-v2v copies disks sequentially (vSphere only) In earlier releases of MTV, there was a problem with the way MTV interpreted the controller_max_vm_inflight setting for vSphere to schedule migrations. This issue has been resolved in MTV 2.6.3. (MTV-1191) Cold migrations fail after changing the ESXi network (vSphere only) In earlier versions of MTV, cold migrations from a vSphere provider with an ESXi SDK endpoint failed if any network was used except for the default network for disk transfers. This issue has been resolved in MTV 2.6.3. (MTV-1180) Warm migrations over an ESXi network are stuck in DiskTransfer state (vSphere only) In earlier versions of MTV, warm migrations over an ESXi network from a vSphere provider with a vCenter SDK endpoint were stuck in DiskTransfer state because MTV was unable to locate image snapshots. This issue has been resolved in MTV 2.6.3. 
(MTV-1161) Leftover PVCs are in Lost state after cold migrations In earlier versions of MTV, after cold migrations, there were leftover PVCs that had a status of Lost instead of being deleted, even after the migration plan that created them was archived and deleted. Investigation showed that this was because importer pods were retained after copying, by default, rather than in only specific cases. This issue has been resolved in MTV 2.6.3. (MTV-1095) Guest operating system from vSphere might be missing (vSphere only) In earlier versions of MTV, some VMs that were imported from vSphere were not mapped to a template in OpenShift while other VMs, with the same guest operating system, were mapped to the corresponding template. Investigations indicated that this was because vSphere stopped reporting the operating system after not receiving updates from VMware tools for some time. This issue has been resolved in MTV 2.6.3 by taking the value of the operating system from the output of the investigation that virt-v2v performs on the disks. (MTV-1046) 1.3.6. Resolved issues 2.6.2 CVE-2023-45288: Golang net/http, x/net/http2 : unlimited number of CONTINUATION frames can cause a denial-of-service (DoS) attack A flaw was discovered with the implementation of the HTTP/2 protocol in the Go programming language, which impacts versions of MTV. There were insufficient limitations on the number of CONTINUATION frames sent within a single stream. An attacker could potentially exploit this to cause a denial-of-service (DoS) attack. This flaw has been resolved in MTV 2.6.2. For more details, see (CVE-2023-45288) . CVE-2024-24785: mtv-api-container : Golang html/template: errors returned from MarshalJSON methods may break template escaping A flaw was found in the html/template Golang standard library package, which impacts versions of MTV. If errors returned from MarshalJSON methods contain user-controlled data, they may be used to break the contextual auto-escaping behavior of the HTML/template package, allowing subsequent actions to inject unexpected content into the templates. This flaw has been resolved in MTV 2.6.2. For more details, see (CVE-2024-24785) . CVE-2024-24784: mtv-validation-container : Golang net/mail : comments in display names are incorrectly handled A flaw was found in the net/mail Golang standard library package, which impacts versions of MTV. The ParseAddressList function incorrectly handles comments, text in parentheses, and display names. As this is a misalignment with conforming address parsers, it can result in different trust decisions being made by programs using different parsers. This flaw has been resolved in MTV 2.6.2. For more details, see (CVE-2024-24784) . CVE-2024-24783: mtv-api-container : Golang crypto/x509 : Verify panics on certificates with an unknown public key algorithm A flaw was found in the crypto/x509 Golang standard library package, which impacts versions of MTV. Verifying a certificate chain that contains a certificate with an unknown public key algorithm causes Certificate.Verify to panic. This affects all crypto/tls clients and servers that set Config.ClientAuth to VerifyClientCertIfGiven or RequireAndVerifyClientCert . The default behavior is for TLS servers to not verify client certificates. This flaw has been resolved in MTV 2.6.2. For more details, see (CVE-2024-24783) . 
CVE-2023-45290: mtv-api-container : Golang net/http memory exhaustion in Request.ParseMultipartForm A flaw was found in the net/http Golang standard library package, which impacts versions of MTV. When parsing a multipart form, either explicitly with Request.ParseMultipartForm or implicitly with Request.FormValue , Request.PostFormValue , or Request.FormFile , limits on the total size of the parsed form are not applied to the memory consumed while reading a single form line. This permits a maliciously crafted input containing long lines to cause the allocation of arbitrarily large amounts of memory, potentially leading to memory exhaustion. This flaw has been resolved in MTV 2.6.2. For more details, see (CVE-2023-45290) . ImageConversion does not run when target storage is set with WaitForFirstConsumer (WFFC) In earlier releases of MTV, migration of VMs failed because the migration was stuck in the AllocateDisks phase. As a result of being stuck, the migration did not progress, and PVCs were not bound. The root cause of the issue was that ImageConversion did not run when target storage was set for wait-for-first-consumer . The problem was resolved in MTV 2.6.2. (MTV-1126) forklift-controller panics when importing VMs with direct LUNs In earlier releases of MTV, forklift-controller panicked when a user attempted to import VMs that had direct LUNs. The problem was resolved in MTV 2.6.2. (MTV-1134) 1.3.7. Resolved issues 2.6.1 VMs with multiple disks that are migrated from vSphere and OVA files are not being fully copied In MTV 2.6.0, there was a problem in copying VMs with multiple disks from VMware vSphere and from OVA files. The migrations appeared to succeed but all the disks were transferred to the same PV in the target environment while other disks were empty. In some cases, bootable disks were overridden, so the VM could not boot. In other cases, data from the other disks was missing. The problem was resolved in MTV 2.6.1. (MTV-1067) Migrating VMs from one Red Hat OpenShift cluster to another fails due to a timeout In MTV 2.6.0, migrations from one Red Hat OpenShift cluster to another failed when the time to transfer the disks of a VM exceeded the time to live (TTL) of the Export API in OpenShift, which was set to 2 hours by default. The problem was resolved in MTV 2.6.1 by setting the default TTL of the Export API to 12 hours, which greatly reduces the possibility of an expiration of the Export API. Additionally, you can increase or decrease the TTL setting as needed. (MTV-1052) MTV forklift-controller pod crashes when receiving a disk without a datastore In earlier releases of MTV, if a VM was configured with a disk that was on a datastore that was no longer available in vSphere at the time a migration was attempted, the forklift-controller crashed, rendering MTV unusable. In MTV 2.6.1, MTV presents a critical validation for VMs with such disks, informing users of the problem, and the forklift-controller no longer crashes, although it cannot transfer the disk. (MTV-1029) 1.3.8. Resolved issues 2.6.0 Deleting an OVA provider automatically also deletes the PV In earlier releases of MTV, the PV was not removed when the OVA provider was deleted. This has been resolved in MTV 2.6.0, and the PV is automatically deleted when the OVA provider is deleted. 
(MTV-848) Fix for data being lost when migrating VMware VMs with snapshots In earlier releases of MTV, when migrating a VM that has a snapshot from VMware, the VM that was created in OpenShift Virtualization contained the data in the snapshot but not the latest data of the VM. This has been resolved in MTV 2.6.0. (MTV-447) Canceling and deleting a failed migration plan does not clean up the populate pods and PVC In earlier releases of MTV, when you canceled and deleted a failed migration plan, and after creating a PVC and spawning the populate pods, the populate pods and PVC were not deleted. You had to delete the pods and PVC manually. This issue has been resolved in MTV 2.6.0. (MTV-678) Red Hat OpenShift to Red Hat OpenShift migrations require the cluster version to be 4.13 or later In earlier releases of MTV, when migrating from Red Hat OpenShift to Red Hat OpenShift, the version of the source provider cluster had to be Red Hat OpenShift version 4.13 or later. This issue has been resolved in MTV 2.6.0, with validation being shown when migrating from versions of OpenShift before 4.13. (MTV-734) Multiple storage domains from RHV were always mapped to a single storage class In earlier releases of MTV, multiple disks from different storage domains were always mapped to a single storage class, regardless of the storage mapping that was configured. This issue has been resolved in MTV 2.6.0. (MTV-1008) Firmware detection by virt-v2v In earlier releases of MTV, a VM that was migrated from an OVA that did not include the firmware type in its OVF configuration was set with UEFI. This was incorrect for VMs that were configured with BIOS. This issue has been resolved in MTV 2.6.0, as MTV now consumes the firmware that is detected by virt-v2v during the conversion of the disks. (MTV-759) Creating a host secret requires validation of the secret before creation of the host In earlier releases of MTV, when configuring a transfer network for vSphere hosts, the console plugin created the Host CR before creating its secret. The secret should be specified first in order to validate it before the Host CR is posted. This issue has been resolved in MTV 2.6.0. (MTV-868) When adding OVA provider a ConnectionTestFailed message appears In earlier releases of MTV, when adding an OVA provider, the error message ConnectionTestFailed instantly appeared, although the provider had been created successfully. This issue has been resolved in MTV 2.6.0. (MTV-671) RHV provider ConnectionTestSucceeded True response from the wrong URL In earlier releases of MTV, the ConnectionTestSucceeded condition was set to True even when the URL was different than the API endpoint for the RHV Manager. This issue has been resolved in MTV 2.6.0. (MTV-740) Migration does not fail when a vSphere Data Center is nested inside a folder In earlier releases of MTV, migrating a VM that is placed in a Data Center that is stored directly under the /vcenter in vSphere succeeded. However, it failed when the Data Center was stored inside a folder. This issue was resolved in MTV 2.6.0. (MTV-796) The OVA inventory watcher detects deleted files The OVA inventory watcher detects files changes, including deleted files. Updates from the ova-provider-server pod are now sent every five minutes to the forklift-controller pod that updates the inventory. 
(MTV-733) Unclear error message when Forklift fails to build or create a PVC In earlier releases of MTV, the error logs lacked clear information to identify the reason for a failure to create a PV on a destination storage class that does not have a configured storage profile. This issue was resolved in MTV 2.6.0. (MTV-928) Plans stay indefinitely in the CopyDisks phase when there is an outdated ovirtvolumepopulator In earlier releases of MTV, an earlier failed migration could have left an outdated ovirtvolumepopulator . When starting a new plan for the same VM to the same project, the CreateDataVolumes phase did not create populator PVCs when transitioning to CopyDisks , causing the CopyDisks phase to stay indefinitely. This issue was resolved in MTV 2.6.0. (MTV-929) For a complete list of all resolved issues in this release, see the list of Resolved Issues in Jira. 1.4. Known issues This release has the following known issues: Warm migration and remote migration flows are impacted by multiple bugs Warm migration and remote migration flows are impacted by multiple bugs. It is strongly recommended to fall back to cold migration until this issue is resolved. (MTV-1366) Migrating older Linux distributions from VMware to Red Hat OpenShift, the name of the network interfaces changes When migrating older Linux distributions, such as CentOS 7.0 and 7.1, virtual machines (VMs) from VMware to Red Hat OpenShift, the name of the network interfaces changes, and the static IP configuration for the VM no longer functions. This issue is caused by RHEL 7.0 and 7.1 still requiring virtio-transitional . Workaround: Manually update the guest to RHEL 7.2 or update the VM specification post-migration to use transitional. (MTV-1382) Dynamic disks are offline in Windows Server 2022 after migration from vSphere to CNV with ceph-rbd The dynamic disks are Offline in Windows Server 2022 after cold and warm migrations from vSphere to container-native virtualization (CNV) with Ceph RADOS Block Devices (RBD), using the storage class ocs-storagecluster-ceph-rbd . (MTV-1344) Unclear error status message for VM with no operating system The error status message for a VM with no operating system on the Plans page of the web console does not describe the reason for the failure. (BZ#22008846) Migration of virtual machines with encrypted partitions fails during a conversion (vSphere only) vSphere only: Migrations from RHV and OpenStack do not fail, but the encryption key might be missing on the target Red Hat OpenShift cluster. Migration fails during precopy/cutover while performing a snapshot operation on the source VM Warm migration from RHV fails if a snapshot operation is triggered and running on the source VM at the same time as the migration is scheduled. The migration does not wait for the snapshot operation to finish. (MTV-456) Unable to schedule migrated VM with multiple disks to more than one storage class of type hostPath When migrating a VM with multiple disks to more than one storage class of type hostPath , it might happen that a VM cannot be scheduled. Workaround: Use shared storage on the target Red Hat OpenShift cluster. Non-supported guest operating systems in warm migrations Warm migrations and migrations to remote Red Hat OpenShift clusters from vSphere do not support the same guest operating systems that are supported in cold migrations and migrations to the local Red Hat OpenShift cluster. RHEL 8 and RHEL 9 might cause this limitation. 
See Converting virtual machines from other hypervisors to KVM with virt-v2v in RHEL 7, RHEL 8, and RHEL 9 for the list of supported guest operating systems. VMs from vSphere with RHEL 9 guest operating system can start with network interfaces that are down When migrating VMs that are installed with RHEL 9 as a guest operating system from vSphere, the network interfaces of the VMs could be disabled when they start in OpenShift Virtualization. (MTV-491) Migration of a VM with NVME disks from vSphere fails When migrating a virtual machine (VM) with NVME disks from vSphere, the migration process fails, and the Web Console shows that the Convert image to kubevirt stage is running but did not finish successfully. (MTV-963) Importing image-based VMs can fail Migrating an image-based VM without the virtual_size field can fail on a block mode storage class. (MTV-946) Deleting a migration plan does not remove temporary resources Deleting a migration plan does not remove temporary resources such as importer pods, conversion pods, config maps, secrets, failed VMs, and data volumes. You must archive a migration plan before deleting it to clean up the temporary resources. (BZ#2018974) Migrating VMs with independent persistent disks from VMware to OCP-V fails Migrating VMs with independent persistent disks from VMware to OCP-V fails. (MTV-993) Guest operating system from vSphere might be missing When vSphere does not receive updates about the guest operating system from the VMware tools, it considers the information about the guest operating system to be outdated and ceases to report it. When this occurs, MTV is unaware of the guest operating system of the VM and is unable to associate it with the appropriate virtual machine preference or OpenShift template. (MTV-1046) Failure to migrate an image-based VM from OpenStack to the default project The migration process fails when migrating an image-based VM from OpenStack to the default project. (MTV-964) For a complete list of all known issues in this release, see the list of Known Issues in Jira. | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/release_notes/rn-26_release-notes |
Chapter 12. Configuring routing | Chapter 12. Configuring routing Routing is the process by which messages are delivered to their destinations. To accomplish this, AMQ Interconnect provides two routing mechanisms: message routing and link routing . Message routing Message routing is the default routing mechanism. You can use it to route messages on a per-message basis between clients directly (direct-routed messaging), or to and from broker queues (brokered messaging). Link routing A link route represents a private messaging path between a sender and a receiver in which the router passes the messages between end points. You can use it to connect a client to a service (such as a broker queue). 12.1. Configuring message routing Message routing is the default routing mechanism. You can use it to route messages on a per-message basis between clients directly (direct-routed messaging), or to and from broker queues (brokered messaging). With message routing, you can do the following: Understand message routing concepts Configure address semantics (route messages between clients) Configure addresses for prioritized message delivery Configure brokered messaging Understand address pattern matching 12.1.1. Understanding message routing With message routing, routing is performed on messages as producers send them to a router. When a message arrives on a router, the router routes the message and its settlement based on the message's address and routing pattern . 12.1.1.1. Message routing flow control AMQ Interconnect uses a credit-based flow control mechanism to ensure that producers can only send messages to a router if at least one consumer is available to receive them. Because AMQ Interconnect does not store messages, this credit-based flow control prevents producers from sending messages when there are no consumers present. A client wishing to send a message to the router must wait until the router has provided it with credit. Attempting to publish a message without credit available will cause the client to block. Once credit is made available, the client will unblock, and the message will be sent to the router. Note Most AMQP client libraries enable you to determine the amount of credit available to a producer. For more information, consult your client's documentation. 12.1.1.2. Addresses Addresses determine how messages flow through your router network. An address designates an endpoint in your messaging network, such as: Endpoint processes that consume data or offer a service Topics that match multiple consumers to multiple producers Entities within a messaging broker: Queues Durable Topics Exchanges When a router receives a message, it uses the message's address to determine where to send the message (either its destination or one step closer to its destination). AMQ Interconnect considers addresses to be mobile in that any user of an address may be directly connected to any router in the router network and may even move around the topology. In cases where messages are broadcast to or balanced across multiple consumers, the users of the address may be connected to multiple routers in the network. Mobile addresses may be discovered during normal router operation or configured through management settings. 12.1.1.3. Routing patterns Routing patterns define the paths that a message with a mobile address can take across a network. 
These routing patterns can be used for both direct routing, in which the router distributes messages between clients without a broker, and indirect routing, in which the router enables clients to exchange messages through a broker. Routing patterns fall into two categories: Anycast (Balanced and Closest) and Multicast. There is no concept of "unicast" in which there is only one consumer for an address. Anycast distribution delivers each message to one consumer whereas multicast distribution delivers each message to all consumers. Each address has one of the following routing patterns, which define the path that a message with the address can take across the messaging network: Balanced An anycast method that allows multiple consumers to use the same address. Each message is delivered to a single consumer only, and AMQ Interconnect attempts to balance the traffic load across the router network. If multiple consumers are attached to the same address, each router determines which outbound path should receive a message by considering each path's current number of unsettled deliveries. This means that more messages will be delivered along paths where deliveries are settled at higher rates. Note AMQ Interconnect neither measures nor uses message settlement time to determine which outbound path to use. In this scenario, the messages are spread across both receivers regardless of path length: Figure 12.1. Balanced Message Routing Closest An anycast method in which every message is sent along the shortest path to reach the destination, even if there are other consumers for the same address. AMQ Interconnect determines the shortest path based on the topology cost to reach each of the consumers. If there are multiple consumers with the same lowest cost, messages will be spread evenly among those consumers. In this scenario, all messages sent by Sender will be delivered to Receiver 1 : Figure 12.2. Closest Message Routing Multicast Messages are sent to all consumers attached to the address. Each consumer will receive one copy of the message. In this scenario, all messages are sent to all receivers: Figure 12.3. Multicast Message Routing 12.1.1.4. Message settlement and reliability AMQ Interconnect can deliver messages with the following degrees of reliability: At most once At least once Exactly once The level of reliability is negotiated between the producer and the router when the producer establishes a link to the router. To achieve the negotiated level of reliability, AMQ Interconnect treats all messages as either pre-settled or unsettled . Pre-settled Sometimes called fire and forget , the router settles the incoming and outgoing deliveries and propagates the settlement to the message's destination. However, it does not guarantee delivery. Unsettled AMQ Interconnect propagates the settlement between the producer and consumer. For an anycast address, the router associates the incoming delivery with the resulting outgoing delivery. Based on this association, the router propagates changes in delivery state from the consumer to the producer. For a multicast address, the router associates the incoming delivery with all outbound deliveries. The router waits for each consumer to set their delivery's final state. After all outgoing deliveries have reached their final state, the router sets a final delivery state for the original inbound delivery and passes it to the producer. 
The following table describes the reliability guarantees for unsettled messages sent to an anycast or multicast address, by final disposition:
accepted: Anycast: The consumer accepted the message. Multicast: At least one consumer accepted the message, but no consumers rejected it.
released: Anycast: The message did not reach its destination. Multicast: The message did not reach any of the consumers.
modified: Anycast: The message may or may not have reached its destination. The delivery is considered to be "in-doubt" and should be re-sent if "at least once" delivery is required. Multicast: The message may or may not have reached any of the consumers. However, no consumers rejected or accepted it.
rejected: Anycast: The consumer rejected the message. Multicast: At least one consumer rejected the message.
12.1.2. Configuring address semantics You can route messages between clients without using a broker. In a brokerless scenario (sometimes called direct-routed messaging ), AMQ Interconnect routes messages between clients directly. To route messages between clients, you configure an address with a routing distribution pattern. When a router receives a message with this address, the message is routed to its destination or destinations based on the address's routing distribution pattern. Procedure In the /etc/qpid-dispatch/qdrouterd.conf configuration file, add an address section. prefix | pattern The address or group of addresses to which the address settings should be applied. You can specify a prefix to match an exact address or beginning segment of an address. Alternatively, you can specify a pattern to match an address using wildcards. A prefix matches either an exact address or the beginning segment within an address that is delimited by either a . or / character. For example, the prefix my_address would match the address my_address as well as my_address.1 and my_address/1 . However, it would not match my_address1 . A pattern matches an address that corresponds to a pattern. A pattern is a sequence of words delimited by either a . or / character. You can use wildcard characters to represent a word. The * character matches exactly one word, and the # character matches any sequence of zero or more words. The * and # characters are reserved as wildcards. Therefore, you should not use them in the message address. For more information about creating address patterns, see Section 12.1.5, "Address pattern matching" . Note You can convert a prefix value to a pattern by appending /# to it. For example, the prefix a/b/c is equivalent to the pattern a/b/c/# . distribution The message distribution pattern. The default is balanced , but you can specify any of the following options: balanced - Messages sent to the address will be routed to one of the receivers, and the routing network will attempt to balance the traffic load based on the rate of settlement. closest - Messages sent to the address are sent on the shortest path to reach the destination. This means that if there are multiple receivers for the same address, only the closest one will receive the message. multicast - Messages are sent to all receivers that are attached to the address in a publish/subscribe model. For more information about message distribution patterns, see Section 12.1.1.3, "Routing patterns" . For information about additional attributes, see address in the qdrouterd.conf man page. Add the same address section to any other routers that need to use the address. The address that you added to this router configuration file only controls how this router distributes messages sent to the address.
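For reference, a minimal sketch of the kind of address section this step describes is shown below. It mirrors the address example listed in the configuration excerpts at the end of this chapter; the prefix my_address is illustrative, and you would substitute your own prefix and distribution pattern.

address {
    # Matches my_address, as well as my_address.1 and my_address/1.
    prefix: my_address
    # One of: balanced (the default), closest, multicast.
    distribution: multicast
}

Setting distribution to balanced or closest instead selects one of the other routing patterns described in Section 12.1.1.3, "Routing patterns".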
If you have additional routers in your router network that should distribute messages for this address, then you must add the same address section to each of their configuration files. 12.1.3. Configuring addresses for prioritized message delivery You can set the priority level of an address to control how AMQ Interconnect processes messages sent to that address. Within the scope of a connection, AMQ Interconnect attempts to process messages based on their priority. For a connection with a large volume of messages in flight, this lowers the latency for higher-priority messages. Assigning a high priority level to an address does not guarantee that messages sent to the address will be delivered before messages sent to lower-priority addresses. However, higher-priority messages will travel more quickly through the router network than they otherwise would. Note You can also control the priority level of individual messages by setting the priority level in the message header. However, the address priority takes precedence: if you send a prioritized message to an address with a different priority level, the router will use the address priority level. Procedure In the /etc/qpid-dispatch/qdrouterd.conf configuration file, add or edit an address and assign a priority level. This example adds an address with the highest priority level. The router will attempt to deliver messages sent to this address before messages with lower priority levels. priority The priority level to assign to all messages sent to this address. The range of valid priority levels is 0-9, in which the higher the number, the higher the priority. The default is 4. Additional resources For more information about setting the priority level in a message, see the AMQP 1.0 specification . 12.1.4. Configuring brokered messaging If you require "store and forward" capabilities, you can configure AMQ Interconnect to use brokered messaging. In this scenario, clients connect to a router to send and receive messages, and the router routes the messages to or from queues on a message broker. You can configure the following: Route messages through broker queues You can route messages to a queue hosted on a single broker, or route messages to a sharded queue distributed across multiple brokers. Store and retrieve undeliverable messages on a broker queue 12.1.4.1. How AMQ Interconnect enables brokered messaging Brokered messaging enables AMQ Interconnect to store messages on a broker queue. This requires a connection to the broker, a waypoint address to represent the broker queue, and autolinks to attach to the waypoint address. An autolink is a link that is automatically created by the router to attach to a waypoint address. With autolinks, client traffic is handled on the router, not the broker. Clients attach their links to the router, and then the router uses internal autolinks to connect to the queue on the broker. Therefore, the queue will always have a single producer and a single consumer regardless of how many clients are attached to the router. Using autolinks is a form of message routing , as distinct from link routing . It is recommended to use link routing if you want to use semantics associated with a consumer, for example, the undeliverable-here=true modified delivery state. Figure 12.4. Brokered messaging In this diagram, the sender connects to the router and sends messages to my_queue. The router attaches an outgoing link to the broker, and then sends the messages to my_queue. 
Later, the receiver connects to the router and requests messages from my_queue. The router attaches an incoming link to the broker to receive the messages from my_queue, and then delivers them to the receiver. You can also route messages to a sharded queue , which is a single, logical queue comprised of multiple, underlying physical queues. Using queue sharding, it is possible to distribute a single queue over multiple brokers. Clients can connect to any of the brokers that hold a shard to send and receive messages. Figure 12.5. Brokered messaging with sharded queue In this diagram, a sharded queue (my_queue) is distributed across two brokers. The router is connected to the clients and to both brokers. The sender connects to the router and sends messages to my_queue. The router attaches an outgoing link to each broker, and then sends messages to each shard (by default, the routing distribution is balanced ). Later, the receiver connects to the router and requests all of the messages from my_queue. The router attaches an incoming link to one of the brokers to receive the messages from my_queue, and then delivers them to the receiver. 12.1.4.2. Routing messages through broker queues You can route messages to and from a broker queue to provide clients with access to the queue through a router. In this scenario, clients connect to a router to send and receive messages, and the router routes the messages to or from the broker queue. You can route messages to a queue hosted on a single broker, or route messages to a sharded queue distributed across multiple brokers. Procedure In the /etc/qpid-dispatch/qdrouterd.conf configuration file, add a waypoint address for the broker queue. A waypoint address identifies a queue on a broker to which you want to route messages. This example adds a waypoint address for the my_queue queue: prefix | pattern The address prefix or pattern that matches the broker queue to which you want to send messages. You can specify a prefix to match an exact address or beginning segment of an address. Alternatively, you can specify a pattern to match an address using wildcards. A prefix matches either an exact address or the beginning segment within an address that is delimited by either a . or / character. For example, the prefix my_address would match the address my_address as well as my_address.1 and my_address/1 . However, it would not match my_address1 . A pattern matches an address that corresponds to a pattern. A pattern is a sequence of words delimited by either a . or / character. You can use wildcard characters to represent a word. The * character matches exactly one word, and the # character matches any sequence of zero or more words. The * and # characters are reserved as wildcards. Therefore, you should not use them in the message address. For more information about creating address patterns, see Section 12.1.5, "Address pattern matching" . Note You can convert a prefix value to a pattern by appending /# to it. For example, the prefix a/b/c is equivalent to the pattern a/b/c/# . waypoint Set this attribute to yes so that the router handles messages sent to this address as a waypoint. Connect the router to the broker. Add an outgoing connection to the broker if one does not exist. If the queue is sharded across multiple brokers, you must add a connection for each broker. For more information, see Section 8.3, "Connecting to external AMQP containers" . 
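Taken together, the waypoint address and the broker connection described in the steps above might look like the following minimal sketch. It mirrors the waypoint and connector examples listed in the configuration excerpts at the end of this chapter; the connector name my_broker, the host 192.0.2.1, the port, and the ANONYMOUS SASL setting are illustrative placeholders that you would replace with your broker's actual values.

address {
    # Messages sent to my_queue are handled as a waypoint and pass through the broker queue.
    prefix: my_queue
    waypoint: yes
}

connector {
    # Outgoing connection to the broker; the autolinks reference it by name.
    name: my_broker
    role: route-container
    host: 192.0.2.1
    port: 61617
    saslMechanisms: ANONYMOUS
}

The autolinks described in the next steps then reference this connection by name (connection: my_broker) to move messages between the waypoint address and the broker queue.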
Note If the connection to the broker fails, AMQ Interconnect automatically attempts to reestablish the connection and reroute message deliveries to any available alternate destinations. However, some deliveries could be returned to the sender with a RELEASED or MODIFIED disposition. Therefore, you should ensure that your clients can handle these deliveries appropriately (generally by resending them). If you want to send messages to the broker queue, add an outgoing autolink to the broker queue. If the queue is sharded across multiple brokers, you must add an outgoing autolink for each broker. This example configures an outgoing auto link to send messages to a broker queue: address The address of the broker queue. When the autolink is created, it will be attached to this address. externalAddress An optional alternate address for the broker queue. You use an external address if the broker queue should have a different address than that which the sender uses. In this scenario, senders send messages to the address address, and then the router routes them to the broker queue represented by the externalAddress address. connection | containerID How the router should connect to the broker. You can specify either an outgoing connection ( connection ) or the container ID of the broker ( containerID ). direction Set this attribute to out to specify that this autolink can send messages from the router to the broker. For information about additional attributes, see autoLink in the qdrouterd.conf man page. If you want to receive messages from the broker queue, add an incoming autolink from the broker queue: If the queue is sharded across multiple brokers, you must add an outgoing autolink for each broker. This example configures an incoming auto link to receive messages from a broker queue: address The address of the broker queue. When the autolink is created, it will be attached to this address. externalAddress An optional alternate address for the broker queue. You use an external address if the broker queue should have a different address than that which the receiver uses. In this scenario, receivers receive messages from the address address, and the router retrieves them from the broker queue represented by the externalAddress address. connection | containerID How the router should connect to the broker. You can specify either an outgoing connection ( connection ) or the container ID of the broker ( containerID ). direction Set this attribute to in to specify that this autolink can receive messages from the broker to the router. For information about additional attributes, see autoLink in the qdrouterd.conf man page. 12.1.4.3. Handling undeliverable messages You handle undeliverable messages for an address by configuring autolinks that point to fallback destinations . A fallback destination (such as a queue on a broker) stores messages that are not directly routable to any consumers. During normal message delivery, AMQ Interconnect delivers messages to the consumers that are attached to the router network. However, if no consumers are reachable, the messages are diverted to any fallback destinations that were configured for the address (if the autolinks that point to the fallback destinations are active). When a consumer reconnects and becomes reachable again, it receives the messages stored at the fallback destination. Note AMQ Interconnect preserves the original delivery order for messages stored at a fallback destination. 
However, when a consumer reconnects, any new messages produced while the queue is draining will be interleaved with the messages stored at the fallback destination. Prerequisites The router is connected to a broker. For more information, see Section 8.3, "Connecting to external AMQP containers" . Procedure This procedure enables fallback for an address and configures autolinks to connect to the broker queue that provides the fallback destination for the address. In the /etc/qpid-dispatch/qdrouterd.conf configuration file, enable fallback destinations for the address. Add an outgoing autolink to a queue on the broker. For the address for which you enabled fallback, if messages are not routable to any consumers, the router will use this autolink to send the messages to a queue on the broker. If you want the router to send queued messages to attached consumers as soon as they connect to the router network, add an incoming autolink. As soon as a consumer attaches to the router, it will receive the messages stored in the broker queue, along with any new messages sent by the producer. The original delivery order of the queued messages is preserved; however, the queued messages will be interleaved with the new messages. If you do not add the incoming autolink, the messages will be stored on the broker, but will not be sent to consumers when they attach to the router. 12.1.5. Address pattern matching In some router configuration scenarios, you might need to use pattern matching to match a range of addresses rather than a single, literal address. Address patterns match any address that corresponds to the pattern. An address pattern is a sequence of tokens (typically words) that are delimited by either . or / characters. They also can contain special wildcard characters that represent words: * represents exactly one word # represents zero or more words Example 12.1. Address pattern This address contains two tokens, separated by the / delimiter: my/address Example 12.2. Address pattern with wildcard This address contains three tokens. The * is a wildcard, representing any single word that might be between my and address : my/*/address The following table shows some address patterns and examples of the addresses that would match them:
The pattern news/* matches news/europe and news/usa, but not news or news/usa/sports.
The pattern news/# matches news, news/europe, and news/usa/sports, but not europe or usa.
The pattern news/europe/# matches news/europe, news/europe/sports, and news/europe/politics/fr, but not news/usa or europe.
The pattern news/*/sports matches news/europe/sports and news/usa/sports, but not news or news/europe/fr/sports.
12.2. Creating link routes A link route represents a private messaging path between a sender and a receiver in which the router passes the messages between end points. You can use it to connect a client to a service (such as a broker queue). 12.2.1. Understanding link routing Link routing provides an alternative strategy for brokered messaging. A link route represents a private messaging path between a sender and a receiver in which the router passes the messages between end points. You can think of a link route as a "virtual connection" or "tunnel" that travels from a sender, through the router network, to a receiver. With link routing, routing is performed on link-attach frames, which are chained together to form a virtual messaging path that directly connects a sender and receiver. Once a link route is established, the transfer of message deliveries, flow frames, and dispositions is performed across the link route. 12.2.1.1.
Link routing flow control Unlike message routing, with link routing, the sender and receiver handle flow control directly: the receiver grants link credits, which is the number of messages it is able to receive. The router sends them directly to the sender, and then the sender sends the messages based on the credits that the receiver granted. 12.2.1.2. Link route addresses A link route address represents a broker queue, topic, or other service. When a client attaches a link route address to a router, the router propagates a link attachment to the broker resource identified by the address. Using link route addresses, the router network does not participate in aggregated message distribution. The router simply passes message delivery and settlement between the two end points. 12.2.1.3. Routing patterns for link routing Routing patterns are not used with link routing, because there is a direct link between the sender and receiver. The router only makes a routing decision when it receives the initial link-attach request frame. Once the link is established, the router passes the messages along the link in a balanced distribution. 12.2.2. Creating a link route Link routes establish a link between a sender and a receiver that travels through a router. You can configure inward and outward link routes to enable the router to receive link-attaches from clients and to send them to a particular destination. With link routing, client traffic is handled on the broker, not the router. Clients have a direct link through the router to a broker's queue. Therefore, each client is a separate producer or consumer. Note If the connection to the broker fails, the routed links are detached, and the router will attempt to reconnect to the broker (or its backup). Once the connection is reestablished, the link route to the broker will become reachable again. From the client's perspective, the client will see the detached links (that is, the senders or receivers), but not the failed connection. Therefore, if you want the client to reattach dropped links in the event of a broker connection failure, you must configure this functionality on the client. Alternatively, you can use message routing with autolinks instead of link routing. For more information, see Section 12.1.4.2, "Routing messages through broker queues" . Procedure Add an outgoing connection to the broker if one does not exist. If the queue is sharded across multiple brokers, you must add a connection for each broker. For more information, see Section 8.3, "Connecting to external AMQP containers" . If you want clients to send local transactions to the broker, create a link route for the transaction coordinator: 1 The USDcoordinator prefix designates this link route as a transaction coordinator. When the client opens a transacted session, the requests to start and end the transaction are propagated along this link route to the broker. AMQ Interconnect does not support routing transactions to multiple brokers. If you have multiple brokers in your environment, choose a single broker and route all transactions to it. If you want clients to send messages on this link route, create an incoming link route: prefix | pattern The address prefix or pattern that matches the broker queue that should be the destination for routed link-attaches. All messages that match this prefix or pattern will be distributed along the link route. You can specify a prefix to match an exact address or beginning segment of an address. 
Alternatively, you can specify a pattern to match an address using wildcards. A prefix matches either an exact address or the beginning segment within an address that is delimited by either a . or / character. For example, the prefix my_address would match the address my_address as well as my_address.1 and my_address/1 . However, it would not match my_address1 . A pattern matches an address that corresponds to a pattern. A pattern is a sequence of words delimited by either a . or / character. You can use wildcard characters to represent a word. The * character matches exactly one word, and the # character matches any sequence of zero or more words. The * and # characters are reserved as wildcards. Therefore, you should not use them in the message address. For more information about creating address patterns, see Section 12.1.5, "Address pattern matching" . Note You can convert a prefix value to a pattern by appending /# to it. For example, the prefix a/b/c is equivalent to the pattern a/b/c/# . connection | containerID How the router should connect to the broker. You can specify either an outgoing connection ( connection ) or the container ID of the broker ( containerID ). If multiple brokers are connected to the router through this connection, requests for addresses matching the link route's prefix or pattern are balanced across the brokers. Alternatively, if you want to specify a particular broker, use containerID and add the broker's container ID. direction Set this attribute to in to specify that clients can send messages into the router network on this link route. For information about additional attributes, see linkRoute in the qdrouterd.conf man page. If you want clients to receive messages on this link route, create an outgoing link route: prefix | pattern The address prefix or pattern that matches the broker queue from which you want to receive routed link-attaches. All messages that match this prefix or pattern will be distributed along the link route. You can specify a prefix to match an exact address or beginning segment of an address. Alternatively, you can specify a pattern to match an address using wildcards. A prefix matches either an exact address or the beginning segment within an address that is delimited by either a . or / character. For example, the prefix my_address would match the address my_address as well as my_address.1 and my_address/1 . However, it would not match my_address1 . A pattern matches an address that corresponds to a pattern. A pattern is a sequence of words delimited by either a . or / character. You can use wildcard characters to represent a word. The * character matches exactly one word, and the # character matches any sequence of zero or more words. The * and # characters are reserved as wildcards. Therefore, you should not use them in the message address. For more information about creating address patterns, see Section 12.1.5, "Address pattern matching" . Note You can convert a prefix value to a pattern by appending /# to it. For example, the prefix a/b/c is equivalent to the pattern a/b/c/# . connection | containerID How the router should connect to the broker. You can specify either an outgoing connection ( connection ) or the container ID of the broker ( containerID ). If multiple brokers are connected to the router through this connection, requests for addresses matching the link route's prefix or pattern are balanced across the brokers. 
Alternatively, if you want to specify a particular broker, use containerID and add the broker's container ID. direction Set this attribute to out to specify that this link route is for receivers. For information about additional attributes, see linkRoute in the qdrouterd.conf man page. 12.2.3. Link route example: Connecting clients and brokers on different networks This example shows how a link route can connect a client to a message broker that is on a different private network. Figure 12.6. Router network with isolated clients The client is constrained by firewall policy to connect to the router in its own network ( R3 ). However, it can use a link route to access queues, topics, and any other AMQP services that are provided on message brokers B1 and B2 - even though they are on different networks. In this example, the client needs to receive messages from b2.event-queue , which is hosted on broker B2 in Private Network 1 . A link route connects the client and broker even though neither of them is aware that there is a router network between them. Router configuration To enable the client to receive messages from b2.event-queue on broker B2 , router R2 must be able to do the following: Connect to broker B2 Route links to and from broker B2 Advertise itself to the router network as a valid destination for links that have a b2.event-queue address The relevant part of the configuration file for router R2 shows the following: 1 The outgoing connection from the router to broker B2 . The route-container role enables the router to connect to an external AMQP container (in this case, a broker). 2 The incoming link route for receiving links from client senders. Any sender with a target whose address begins with b2 will be routed to broker B2 using the broker connector. 3 The outgoing link route for sending links to client receivers. Any receivers whose source address begins with b2 will be routed to broker B2 using the broker connector. This configuration enables router R2 to advertise itself as a valid destination for targets and sources starting with b2 . It also enables the router to connect to broker B2 , and to route links to and from queues starting with the b2 prefix. Note While not required, routers R1 and R3 should also have the same configuration. How the client receives messages By using the configured link route, the client can receive messages from broker B2 even though they are on different networks. Router R2 establishes a connection to broker B2 . Once the connection is open, R2 tells the other routers ( R1 and R3 ) that it is a valid destination for link routes to the b2 prefix. This means that sender and receiver links attached to R1 or R3 will be routed along the shortest path to R2 , which then routes them to broker B2 . To receive messages from the b2.event-queue on broker B2 , the client attaches a receiver link with a source address of b2.event-queue to its local router, R3 . Because the address matches the b2 prefix, R3 routes the link to R1 , which is the hop in the route to its destination. R1 routes the link to R2 , which routes it to broker B2 . The client now has a receiver established, and it can begin receiving messages. Note If broker B2 is unavailable for any reason, router R2 will not advertise itself as a destination for b2 addresses. In this case, routers R1 and R3 will reject link attaches that should be routed to broker B2 with an error message indicating that there is no route available to the destination. | [
"address { prefix: my_address distribution: multicast }",
"address { prefix: my-high-priority-address priority: 9 }",
"address { prefix: my_queue waypoint: yes }",
"autoLink { address: my_queue connection: my_broker direction: out }",
"autoLink { address: my_queue connection: my_broker direction: in }",
"address { prefix: my_address enableFallback: yes }",
"autoLink { address: my_address.2 direction: out connection: my_broker fallback: yes }",
"autoLink { address: my_address.2 direction: in connection: my_broker fallback: yes }",
"linkRoute { prefix: USDcoordinator 1 connection: my_broker direction: in }",
"linkRoute { prefix: my_queue connection: my_broker direction: in }",
"linkRoute { prefix: my_queue connection: my_broker direction: out }",
"connector { 1 name: broker role: route-container host: 192.0.2.1 port: 61617 saslMechanisms: ANONYMOUS } linkRoute { 2 prefix: b2 direction: in connection: broker } linkRoute { 3 prefix: b2 direction: out connection: broker }"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/configuring-routing-router-rhel |
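To tie the link route procedure together, the following sketch combines the broker connector with the incoming and outgoing link routes from the configuration excerpts above into a single qdrouterd.conf fragment. It is a minimal illustration only: my_queue, my_broker, and the host and port values are the same illustrative names used throughout this chapter, not values from your environment.

connector {
    # Outgoing connection to the broker that hosts the queue.
    name: my_broker
    role: route-container
    host: 192.0.2.1
    port: 61617
    saslMechanisms: ANONYMOUS
}

linkRoute {
    # Senders with a target starting with my_queue are routed to the broker.
    prefix: my_queue
    connection: my_broker
    direction: in
}

linkRoute {
    # Receivers with a source starting with my_queue are routed to the broker.
    prefix: my_queue
    connection: my_broker
    direction: out
}

With this fragment in place, clients that attach sender or receiver links for addresses beginning with my_queue are passed through the router to the broker, as described in Section 12.2.2, "Creating a link route".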
39.4. Migrating over SSL | 39.4. Migrating over SSL To encrypt the data transmission between LDAP and IdM during a migration: Store the certificate of the CA that issued the remote LDAP server's certificate in a file on the IdM server. For example: /etc/ipa/remote.crt . Follow the steps described in Section 39.3, "Migrating an LDAP Server to Identity Management" . However, for an encrypted LDAP connection during the migration, use the ldaps protocol in the URL and pass the --ca-cert-file option to the command. For example: | [
"ipa migrate-ds --ca-cert-file= /etc/ipa/remote.crt ldaps:// ldap.example.com :636"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/migrationg-ldap-ssl |
Chapter 26. String and data retrieving functions Tapset | Chapter 26. String and data retrieving functions Tapset Functions to retrieve strings and other primitive types from the kernel or a user-space program based on addresses. All strings are of a maximum length given by MAXSTRINGLEN. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/conversions-dot-stp
Chapter 2. Configuring Data Grid Servers | Chapter 2. Configuring Data Grid Servers Apply custom Data Grid Server configuration to your deployments. 2.1. Customizing Data Grid Server configuration Apply custom deploy.infinispan values to Data Grid clusters that configure the Cache Manager and underlying server mechanisms like security realms or Hot Rod and REST endpoints. Important You must always provide a complete Data Grid Server configuration when you modify deploy.infinispan values. Note Do not modify or remove the default "metrics" configuration if you want to use monitoring capabilities for your Data Grid cluster. Procedure Modify Data Grid Server configuration as required: Specify configuration values for the Cache Manager with deploy.infinispan.cacheContainer fields. For example, you can create caches at startup with any Data Grid configuration or add cache templates and use them to create caches on demand. Configure security authorization to control user roles and permissions with the deploy.infinispan.cacheContainer.security.authorization field. Select one of the default JGroups stacks or configure cluster transport with the deploy.infinispan.cacheContainer.transport fields. Configure Data Grid Server endpoints with the deploy.infinispan.server.endpoints fields. Configure Data Grid Server network interfaces and ports with the deploy.infinispan.server.interfaces and deploy.infinispan.server.socketBindings fields. Configure Data Grid Server security mechanisms with the deploy.infinispan.server.security fields. 2.2. Data Grid Server configuration values Data Grid Server configuration values let you customize the Cache Manager and modify server instances that run in OpenShift pods. Data Grid Server configuration deploy: infinispan: cacheContainer: # [USER] Add cache, template, and counter configuration. name: default # [USER] Specify `security: null` to disable security authorization. security: authorization: {} transport: cluster: USD{infinispan.cluster.name:cluster} node-name: USD{infinispan.node.name:} stack: kubernetes server: endpoints: # [USER] Hot Rod and REST endpoints. - securityRealm: default socketBinding: default # [METRICS] Metrics endpoint for cluster monitoring capabilities. - connectors: rest: restConnector: authentication: mechanisms: BASIC securityRealm: metrics socketBinding: metrics interfaces: - inetAddress: value: USD{infinispan.bind.address:127.0.0.1} name: public security: credentialStores: - clearTextCredential: clearText: secret name: credentials path: credentials.pfx securityRealms: # [USER] Security realm for the Hot Rod and REST endpoints. - name: default # [USER] Comment or remove this properties realm to disable authentication. propertiesRealm: groupProperties: path: groups.properties groupsAttribute: Roles userProperties: path: users.properties # [METRICS] Security realm for the metrics endpoint. - name: metrics propertiesRealm: groupProperties: path: metrics-groups.properties relativeTo: infinispan.server.config.path groupsAttribute: Roles userProperties: path: metrics-users.properties plainText: true relativeTo: infinispan.server.config.path socketBindings: defaultInterface: public portOffset: USD{infinispan.socket.binding.port-offset:0} socketBinding: # [USER] Socket binding for the Hot Rod and REST endpoints. - name: default port: 11222 # [METRICS] Socket binding for the metrics endpoint. 
- name: metrics port: 11223 Data Grid cache configuration deploy: infinispan: cacheContainer: distributedCache: name: "mycache" mode: "SYNC" owners: "2" segments: "256" capacityFactor: "1.0" statistics: "true" encoding: mediaType: "application/x-protostream" expiration: lifespan: "5000" maxIdle: "1000" memory: maxCount: "1000000" whenFull: "REMOVE" partitionHandling: whenSplit: "ALLOW_READ_WRITES" mergePolicy: "PREFERRED_NON_NULL" #Provide additional Cache Manager configuration. server: #Provide configuration for server instances. Cache template deploy: infinispan: cacheContainer: distributedCacheConfiguration: name: "my-dist-template" mode: "SYNC" statistics: "true" encoding: mediaType: "application/x-protostream" expiration: lifespan: "5000" maxIdle: "1000" memory: maxCount: "1000000" whenFull: "REMOVE" #Provide additional Cache Manager configuration. server: #Provide configuration for server instances. Cluster transport deploy: infinispan: cacheContainer: transport: #Specifies the name of a default JGroups stack. stack: kubernetes #Provide additional Cache Manager configuration. server: #Provide configuration for server instances. Additional resources Data Grid Server Guide Configuring Data Grid | [
"deploy: infinispan: cacheContainer: # [USER] Add cache, template, and counter configuration. name: default # [USER] Specify `security: null` to disable security authorization. security: authorization: {} transport: cluster: USD{infinispan.cluster.name:cluster} node-name: USD{infinispan.node.name:} stack: kubernetes server: endpoints: # [USER] Hot Rod and REST endpoints. - securityRealm: default socketBinding: default # [METRICS] Metrics endpoint for cluster monitoring capabilities. - connectors: rest: restConnector: authentication: mechanisms: BASIC securityRealm: metrics socketBinding: metrics interfaces: - inetAddress: value: USD{infinispan.bind.address:127.0.0.1} name: public security: credentialStores: - clearTextCredential: clearText: secret name: credentials path: credentials.pfx securityRealms: # [USER] Security realm for the Hot Rod and REST endpoints. - name: default # [USER] Comment or remove this properties realm to disable authentication. propertiesRealm: groupProperties: path: groups.properties groupsAttribute: Roles userProperties: path: users.properties # [METRICS] Security realm for the metrics endpoint. - name: metrics propertiesRealm: groupProperties: path: metrics-groups.properties relativeTo: infinispan.server.config.path groupsAttribute: Roles userProperties: path: metrics-users.properties plainText: true relativeTo: infinispan.server.config.path socketBindings: defaultInterface: public portOffset: USD{infinispan.socket.binding.port-offset:0} socketBinding: # [USER] Socket binding for the Hot Rod and REST endpoints. - name: default port: 11222 # [METRICS] Socket binding for the metrics endpoint. - name: metrics port: 11223",
"deploy: infinispan: cacheContainer: distributedCache: name: \"mycache\" mode: \"SYNC\" owners: \"2\" segments: \"256\" capacityFactor: \"1.0\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"5000\" maxIdle: \"1000\" memory: maxCount: \"1000000\" whenFull: \"REMOVE\" partitionHandling: whenSplit: \"ALLOW_READ_WRITES\" mergePolicy: \"PREFERRED_NON_NULL\" #Provide additional Cache Manager configuration. server: #Provide configuration for server instances.",
"deploy: infinispan: cacheContainer: distributedCacheConfiguration: name: \"my-dist-template\" mode: \"SYNC\" statistics: \"true\" encoding: mediaType: \"application/x-protostream\" expiration: lifespan: \"5000\" maxIdle: \"1000\" memory: maxCount: \"1000000\" whenFull: \"REMOVE\" #Provide additional Cache Manager configuration. server: #Provide configuration for server instances.",
"deploy: infinispan: cacheContainer: transport: #Specifies the name of a default JGroups stack. stack: kubernetes #Provide additional Cache Manager configuration. server: #Provide configuration for server instances."
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/building_and_deploying_data_grid_clusters_with_helm/configuring-servers |
Chapter 1. Release notes for Logging | Chapter 1. Release notes for Logging Logging Compatibility The logging subsystem for Red Hat OpenShift is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. 1.1. Logging 5.5.8 This release includes OpenShift Logging Bug Fix Release 5.5.8 . 1.1.1. Bug fixes Before this update, the priority field was missing from systemd logs due to an error in how the collector set level fields. With this update, these fields are set correctly, resolving the issue. ( LOG-3630 ) 1.1.2. CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-24999 CVE-2022-40897 CVE-2022-41222 CVE-2022-41717 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 1.2. Logging 5.5.7 This release includes OpenShift Logging Bug Fix Release 5.5.7 . 1.2.1. Bug fixes Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expression and resolves the issue. ( LOG-3534 ) Before this update, the ClusterLogForwarder custom resource (CR) did not pass TLS credentials for syslog output to Fluentd, resulting in errors during forwarding. With this update, credentials pass correctly to Fluentd, resolving the issue. ( LOG-3533 ) 1.2.2. CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.3. Logging 5.5.6 This release includes OpenShift Logging Bug Fix Release 5.5.6 . 1.3.1. Known issues 1.3.2. Bug fixes Before this update, the Pod Security admission controller added the label podSecurityLabelSync = true to the openshift-logging namespace. This resulted in our specified security labels being overwritten, and as a result Collector pods would not start. With this update, the label podSecurityLabelSync = false preserves security labels. Collector pods deploy as expected. ( LOG-3340 ) Before this update, the Operator installed the console view plugin, even when it was not enabled on the cluster. This caused the Operator to crash. With this update, if an account for a cluster does not have the console view enabled, the Operator functions normally and does not install the console view. ( LOG-3407 ) Before this update, a prior fix to support a regression where the status of the Elasticsearch deployment was not being updated caused the Operator to crash unless the Red Hat Elasticsearch Operator was deployed. With this update, that fix has been reverted so the Operator is now stable but re-introduces the issue related to the reported status. ( LOG-3428 ) Before this update, the Loki Operator only deployed one replica of the LokiStack gateway regardless of the chosen stack size. With this update, the number of replicas is correctly configured according to the selected size. ( LOG-3478 ) Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. ( LOG-3341 ) Before this update, the logging view plugin contained an incompatible feature for certain versions of OpenShift Container Platform. With this update, the correct release stream of the plugin resolves the issue. 
Before this update, the OpenShift CLI (oc) must-gather script did not complete because the OpenShift CLI (oc) needs a folder with write permission to build its cache. With this update, the OpenShift CLI (oc) has write permissions to a folder, and the must-gather script completes successfully. ( LOG-3472 ) Before this update, the Loki Operator webhook server caused TLS errors. With this update, the Loki Operator webhook PKI is managed by the Operator Lifecycle Manager's dynamic webhook management, resolving the issue. ( LOG-3511 ) 1.3.3. CVEs CVE-2021-46848 CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 CVE-2022-2953 CVE-2022-2964 CVE-2022-4139 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.4. Logging 5.5.5 This release includes OpenShift Logging Bug Fix Release 5.5.5 . 1.4.1. Bug fixes Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3305 ) Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. ( LOG-3284 ) Before this update, the FluentdQueueLengthIncreasing alert could fail to fire when there was a cardinality issue with the set of labels returned from this alert expression. This update reduces labels to only include those required for the alert. ( LOG-3226 ) Before this update, Loki did not support reaching an external storage in a disconnected cluster. With this update, proxy environment variables and proxy trusted CA bundles are included in the container image to support these connections. ( LOG-2860 ) Before this update, OpenShift Container Platform web console users could not choose the ConfigMap object that includes the CA certificate for Loki, causing pods to operate without the CA. With this update, web console users can select the config map, resolving the issue. ( LOG-3310 ) Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA key included non-conforming characters (such as dots). With this update, the volume name is standardized to an internal string, which resolves the issue. ( LOG-3332 ) 1.4.2.
CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36516 CVE-2020-36558 CVE-2021-3640 CVE-2021-30002 CVE-2022-0168 CVE-2022-0561 CVE-2022-0562 CVE-2022-0617 CVE-2022-0854 CVE-2022-0865 CVE-2022-0891 CVE-2022-0908 CVE-2022-0909 CVE-2022-0924 CVE-2022-1016 CVE-2022-1048 CVE-2022-1055 CVE-2022-1184 CVE-2022-1292 CVE-2022-1304 CVE-2022-1355 CVE-2022-1586 CVE-2022-1785 CVE-2022-1852 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2078 CVE-2022-2097 CVE-2022-2509 CVE-2022-2586 CVE-2022-2639 CVE-2022-2938 CVE-2022-3515 CVE-2022-20368 CVE-2022-21499 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-22844 CVE-2022-23960 CVE-2022-24448 CVE-2022-25255 CVE-2022-26373 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-27950 CVE-2022-28390 CVE-2022-28893 CVE-2022-29581 CVE-2022-30293 CVE-2022-34903 CVE-2022-36946 CVE-2022-37434 CVE-2022-39399 1.5. Logging 5.5.4 This release includes RHSA-2022:7434-OpenShift Logging Bug Fix Release 5.5.4 . 1.5.1. Bug fixes Before this update, an error in the query parser of the logging view plugin caused parts of the logs query to disappear if the query contained curly brackets {} . This made the queries invalid, leading to errors being returned for valid queries. With this update, the parser correctly handles these queries. ( LOG-3042 ) Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. ( LOG-3049 ) Before this update, no alerts were implemented to support the collector implementation of Vector. This change adds Vector alerts and deploys separate alerts, depending upon the chosen collector implementation. ( LOG-3127 ) Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. ( LOG-3138 ) Before this update, a prior refactoring of the logging must-gather scripts removed the expected location for the artifacts. This update reverts that change to write artifacts to the /must-gather folder. ( LOG-3213 ) Before this update, on certain clusters, the Prometheus exporter would bind on IPv4 instead of IPv6. After this update, Fluentd detects the IP version and binds to 0.0.0.0 for IPv4 or [::] for IPv6. ( LOG-3162 ) 1.5.2. CVEs CVE-2020-35525 CVE-2020-35527 CVE-2022-0494 CVE-2022-1353 CVE-2022-2509 CVE-2022-2588 CVE-2022-3515 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-23816 CVE-2022-23825 CVE-2022-29900 CVE-2022-29901 CVE-2022-32149 CVE-2022-37434 CVE-2022-40674 1.6. Logging 5.5.3 This release includes OpenShift Logging Bug Fix Release 5.5.3 . 1.6.1. Bug fixes Before this update, log entries that had structured messages included the original message field, which made the entry larger. This update removes the message field for structured logs to reduce the increased size. ( LOG-2759 ) Before this update, the collector configuration excluded logs from collector , default-log-store , and visualization pods, but was unable to exclude logs archived in a .gz file. With this update, archived logs stored as .gz files of collector , default-log-store , and visualization pods are also excluded. 
( LOG-2844 ) Before this update, when requests to an unavailable pod were sent through the gateway, no alert would warn of the disruption. With this update, individual alerts will generate if the gateway has issues completing a write or read request. ( LOG-2884 ) Before this update, pod metadata could be altered by fluent plugins because the values passed through the pipeline by reference. This update ensures each log message receives a copy of the pod metadata so each message processes independently. ( LOG-3046 ) Before this update, selecting unknown severity in the OpenShift Console Logs view excluded logs with a level=unknown value. With this update, logs without level and with level=unknown values are visible when filtering by unknown severity. ( LOG-3062 ) Before this update, log records sent to Elasticsearch had an extra field named write-index that contained the name of the index to which the logs needed to be sent. This field is not a part of the data model. After this update, this field is no longer sent. ( LOG-3075 ) With the introduction of the new built-in Pod Security Admission Controller , Pods not configured in accordance with the enforced security standards defined globally or on the namespace level cannot run. With this update, the Operator and collectors allow privileged execution and run without security audit warnings or errors. ( LOG-3077 ) Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. ( LOG-3095 ) 1.6.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-2526 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.7. Logging 5.5.2 This release includes OpenShift Logging Bug Fix Release 5.5.2 . 1.7.1. Bug fixes Before this update, alerting rules for the Fluentd collector did not adhere to the OpenShift Container Platform monitoring style guidelines. This update modifies those alerts to include the namespace label, resolving the issue. ( LOG-1823 ) Before this update, the index management rollover script failed to generate a new index name whenever there was more than one hyphen character in the name of the index. With this update, index names generate correctly. ( LOG-2644 ) Before this update, the Kibana route was setting a caCertificate value without a certificate present. With this update, no caCertificate value is set. ( LOG-2661 ) Before this update, a change in the collector dependencies caused it to issue a warning message for unused parameters. With this update, removing unused configuration parameters resolves the issue. ( LOG-2859 ) Before this update, pods created for deployments that Loki Operator created were mistakenly scheduled on nodes with non-Linux operating systems, if such nodes were available in the cluster the Operator was running in. With this update, the Operator attaches an additional node-selector to the pod definitions which only allows scheduling the pods on Linux-based nodes. ( LOG-2895 ) Before this update, the OpenShift Console Logs view did not filter logs by severity due to a LogQL parser issue in the LokiStack gateway. With this update, a parser fix resolves the issue and the OpenShift Console Logs view can filter by severity. ( LOG-2908 ) Before this update, a refactoring of the Fluentd collector plugins removed the timestamp field for events. 
This update restores the timestamp field, sourced from the event's received time. ( LOG-2923 ) Before this update, absence of a level field in audit logs caused an error in vector logs. With this update, the addition of a level field in the audit log record resolves the issue. ( LOG-2961 ) Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-3053 ) Before this update, each rollover job created empty indices when the ClusterLogForwarder custom resource had JSON parsing defined. With this update, new indices are not empty. ( LOG-3063 ) Before this update, when the user deleted the LokiStack after an update to Loki Operator 5.5 resources originally created by Loki Operator 5.4 remained. With this update, the resources' owner-references point to the 5.5 LokiStack. ( LOG-2945 ) Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. ( LOG-2918 ) Before this update, users with cluster-admin privileges were not able to properly view infrastructure and audit logs using the logging console. With this update, the authorization check has been extended to also recognize users in cluster-admin and dedicated-admin groups as admins. ( LOG-2970 ) 1.7.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.8. Logging 5.5.1 This release includes OpenShift Logging Bug Fix Release 5.5.1 . 1.8.1. Enhancements This enhancement adds an Aggregated Logs tab to the Pod Details page of the OpenShift Container Platform web console when the Logging Console Plugin is in use. This enhancement is only available on OpenShift Container Platform 4.10 and later. ( LOG-2647 ) This enhancement adds Google Cloud Logging as an output option for log forwarding. ( LOG-1482 ) 1.8.2. Bug fixes Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. ( LOG-2745 ) Before this update, Fluentd would sometimes not recognize that the Kubernetes platform rotated the log file and would no longer read log messages. This update corrects that by setting the configuration parameter suggested by the upstream development team. ( LOG-2995 ) Before this update, the addition of multi-line error detection caused internal routing to change and forward records to the wrong destination. With this update, the internal routing is correct. ( LOG-2801 ) Before this update, changing the OpenShift Container Platform web console's refresh interval created an error when the Query field was empty. With this update, changing the interval is not an available option when the Query field is empty. ( LOG-2917 ) 1.8.3. CVEs CVE-2022-1705 CVE-2022-2526 CVE-2022-29154 CVE-2022-30631 CVE-2022-32148 CVE-2022-32206 CVE-2022-32208 1.9. Logging 5.5 The following advisories are available for Logging 5.5: Release 5.5 1.9.1. Enhancements With this update, you can forward structured logs from different containers within the same pod to different indices. 
To use this feature, you must configure the pipeline with multi-container support and annotate the pods. ( LOG-1296 ) Important JSON formatting of logs varies by application. Because creating too many indices impacts performance, limit your use of this feature to creating indices for logs that have incompatible JSON formats. Use queries to separate logs from different namespaces, or applications with compatible JSON formats. With this update, you can filter logs with Elasticsearch outputs by using the Kubernetes common labels, app.kubernetes.io/component , app.kubernetes.io/managed-by , app.kubernetes.io/part-of , and app.kubernetes.io/version . Non-Elasticsearch output types can use all labels included in kubernetes.labels . ( LOG-2388 ) With this update, clusters with AWS Security Token Service (STS) enabled may use STS authentication to forward logs to Amazon CloudWatch. ( LOG-1976 ) With this update, the Loki Operator and Vector collector move from Technical Preview to General Availability. Full feature parity with prior releases is pending, and some APIs remain Technical Previews. See the Logging with the LokiStack section for details. 1.9.2. Bug fixes Before this update, clusters configured to forward logs to Amazon CloudWatch wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for all storage options has been disabled, resolving the issue. ( LOG-2746 ) Before this update, the Operator was using versions of some APIs that are deprecated and planned for removal in future versions of OpenShift Container Platform. This update moves dependencies to the supported API versions. ( LOG-2656 ) Before this update, multiple ClusterLogForwarder pipelines configured for multiline error detection caused the collector to go into a crashloopbackoff error state. This update fixes the issue where multiple configuration sections had the same unique ID. ( LOG-2241 ) Before this update, the collector could not save non-UTF-8 symbols to the Elasticsearch storage logs. With this update, the collector encodes non-UTF-8 symbols, resolving the issue. ( LOG-2203 ) Before this update, non-Latin characters displayed incorrectly in Kibana. With this update, Kibana displays all valid UTF-8 symbols correctly. ( LOG-2784 ) 1.9.3. CVEs CVE-2021-38561 CVE-2022-1012 CVE-2022-1292 CVE-2022-1586 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-21698 CVE-2022-30631 CVE-2022-32250 1.10. Logging 5.4.14 This release includes OpenShift Logging Bug Fix Release 5.4.14 . 1.10.1. Bug fixes None. 1.10.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0361 CVE-2023-23916 1.11. Logging 5.4.13 This release includes OpenShift Logging Bug Fix Release 5.4.13 . 1.11.1. Bug fixes Before this update, a problem with the Fluentd collector caused it to not capture OAuth login events stored in /var/log/auth-server/audit.log . This led to incomplete collection of login events from the OAuth service. With this update, the Fluentd collector now resolves this issue by capturing all login events from the OAuth service, including those stored in /var/log/auth-server/audit.log , as expected. ( LOG-3731 ) 1.11.2.
CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0767 CVE-2023-23916 1.12. Logging 5.4.12 This release includes OpenShift Logging Bug Fix Release 5.4.12 . 1.12.1. Bug fixes None. 1.12.2. CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-40897 CVE-2022-41222 CVE-2022-41717 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 1.13. Logging 5.4.11 This release includes OpenShift Logging Bug Fix Release 5.4.11 . 1.13.1. Bug fixes BZ 2099524 BZ 2161274 1.13.2. CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.14. Logging 5.4.10 This release includes OpenShift Logging Bug Fix Release 5.4.10 . 1.14.1. Bug fixes None. 1.14.2. CVEs CVE-2021-46848 CVE-2022-2056 CVE-2022-2057 CVE-2022-2058 CVE-2022-2519 CVE-2022-2520 CVE-2022-2521 CVE-2022-2867 CVE-2022-2868 CVE-2022-2869 CVE-2022-2953 CVE-2022-2964 CVE-2022-4139 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 1.15. Logging 5.4.9 This release includes OpenShift Logging Bug Fix Release 5.4.9 . 1.15.1. Bug fixes Before this update, the Fluentd collector would warn of unused configuration parameters. This update removes those configuration parameters and their warning messages. ( LOG-3074 ) Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3306 ) 1.15.2. CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36516 CVE-2020-36558 CVE-2021-3640 CVE-2021-30002 CVE-2022-0168 CVE-2022-0561 CVE-2022-0562 CVE-2022-0617 CVE-2022-0854 CVE-2022-0865 CVE-2022-0891 CVE-2022-0908 CVE-2022-0909 CVE-2022-0924 CVE-2022-1016 CVE-2022-1048 CVE-2022-1055 CVE-2022-1184 CVE-2022-1292 CVE-2022-1304 CVE-2022-1355 CVE-2022-1586 CVE-2022-1785 CVE-2022-1852 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2078 CVE-2022-2097 CVE-2022-2509 CVE-2022-2586 CVE-2022-2639 CVE-2022-2938 CVE-2022-3515 CVE-2022-20368 CVE-2022-21499 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-22844 CVE-2022-23960 CVE-2022-24448 CVE-2022-25255 CVE-2022-26373 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-27950 CVE-2022-28390 CVE-2022-28893 CVE-2022-29581 CVE-2022-30293 CVE-2022-34903 CVE-2022-36946 CVE-2022-37434 CVE-2022-39399 1.16. Logging 5.4.8 This release includes RHSA-2022:7435-OpenShift Logging Bug Fix Release 5.4.8 . 1.16.1. Bug fixes None. 1.16.2. CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36518 CVE-2022-1304 CVE-2022-2509 CVE-2022-3515 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-30293 CVE-2022-32149 CVE-2022-37434 CVE-2022-40674 CVE-2022-42003 CVE-2022-42004 1.17. Logging 5.4.6 This release includes OpenShift Logging Bug Fix Release 5.4.6 . 1.17.1. Bug fixes Before this update, Fluentd would sometimes not recognize that the Kubernetes platform rotated the log file and would no longer read log messages. This update corrects that by setting the configuration parameter suggested by the upstream development team. 
( LOG-2792 ) Before this update, each rollover job created empty indices when the ClusterLogForwarder custom resource had JSON parsing defined. With this update, new indices are not empty. ( LOG-2823 ) Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-3054 ) 1.17.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.18. Logging 5.4.5 This release includes RHSA-2022:6183-OpenShift Logging Bug Fix Release 5.4.5 . 1.18.1. Bug fixes Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. ( LOG-2881 ) Before this update, the addition of multi-line error detection caused internal routing to change and forward records to the wrong destination. With this update, the internal routing is correct. ( LOG-2946 ) Before this update, the Operator could not decode index setting JSON responses with a quoted Boolean value and would result in an error. With this update, the Operator can properly decode this JSON response. ( LOG-3009 ) Before this update, Elasticsearch index templates defined the fields for labels with the wrong types. This change updates those templates to match the expected types forwarded by the log collector. ( LOG-2972 ) 1.18.2. CVEs CVE-2022-1292 CVE-2022-1586 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-30631 1.19. Logging 5.4.4 This release includes RHBA-2022:5907-OpenShift Logging Bug Fix Release 5.4.4 . 1.19.1. Bug fixes Before this update, non-latin characters displayed incorrectly in Elasticsearch. With this update, Elasticsearch displays all valid UTF-8 symbols correctly. ( LOG-2794 ) Before this update, non-latin characters displayed incorrectly in Fluentd. With this update, Fluentd displays all valid UTF-8 symbols correctly. ( LOG-2657 ) Before this update, the metrics server for the collector attempted to bind to the address using a value exposed by an environment value. This change modifies the configuration to bind to any available interface. ( LOG-2821 ) Before this update, the cluster-logging Operator relied on the cluster to create a secret. This cluster behavior changed in OpenShift Container Platform 4.11, which caused logging deployments to fail. With this update, the cluster-logging Operator resolves the issue by creating the secret if needed. ( LOG-2840 ) 1.19.2. CVEs CVE-2022-21540 CVE-2022-21541 CVE-2022-34169 1.20. Logging 5.4.3 This release includes RHSA-2022:5556-OpenShift Logging Bug Fix Release 5.4.3 . 1.20.1. Elasticsearch Operator deprecation notice In logging subsystem 5.4.3 the Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 1.20.2. Bug fixes Before this update, the OpenShift Logging Dashboard showed the number of active primary shards instead of all active shards. 
With this update, the dashboard displays all active shards. ( LOG-2781 ) Before this update, a bug in a library used by elasticsearch-operator contained a denial of service attack vulnerability. With this update, the library has been updated to a version that does not contain this vulnerability. ( LOG-2816 ) Before this update, when configuring Vector to forward logs to Loki, it was not possible to set a custom bearer token or use the default token if Loki had TLS enabled. With this update, Vector can forward logs to Loki using tokens with TLS enabled. ( LOG-2786 Before this update, the ElasticSearch Operator omitted the referencePolicy property of the ImageStream custom resource when selecting an oauth-proxy image. This omission caused the Kibana deployment to fail in specific environments. With this update, using referencePolicy resolves the issue, and the Operator can deploy Kibana successfully. ( LOG-2791 ) Before this update, alerting rules for the ClusterLogForwarder custom resource did not take multiple forward outputs into account. This update resolves the issue. ( LOG-2640 ) Before this update, clusters configured to forward logs to Amazon CloudWatch wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for CloudWatch has been disabled, resolving the issue. ( LOG-2768 ) 1.20.3. CVEs Example 1.1. Click to expand CVEs CVE-2020-28915 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-27666 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 1.21. Logging 5.4.2 This release includes RHBA-2022:4874-OpenShift Logging Bug Fix Release 5.4.2 1.21.1. Bug fixes Before this update, editing the Collector configuration using oc edit was difficult because it had inconsistent use of white-space. This change introduces logic to normalize and format the configuration prior to any updates by the Operator so that it is easy to edit using oc edit . ( LOG-2319 ) Before this update, the FluentdNodeDown alert could not provide instance labels in the message section appropriately. This update resolves the issue by fixing the alert rule to provide instance labels in cases of partial instance failures. ( LOG-2607 ) Before this update, several log levels, such as`critical`, that were documented as supported by the product were not. This update fixes the discrepancy so the documented log levels are now supported by the product. ( LOG-2033 ) 1.21.2. CVEs Example 1.2. Click to expand CVEs CVE-2018-25032 CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 CVE-2020-19131 CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 CVE-2021-3634 CVE-2021-3669 CVE-2021-3737 CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 CVE-2021-4189 CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 CVE-2021-21781 CVE-2021-23222 CVE-2021-26401 CVE-2021-29154 CVE-2021-37159 CVE-2021-41617 CVE-2021-41864 CVE-2021-42739 CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 CVE-2022-0322 CVE-2022-1011 CVE-2022-1271 1.22. Logging 5.4.1 This release includes RHSA-2022:2216-OpenShift Logging Bug Fix Release 5.4.1 . 1.22.1. Bug fixes Before this update, the log file metric exporter only reported logs created while the exporter was running, which resulted in inaccurate log growth data. This update resolves this issue by monitoring /var/log/pods . 
( LOG-2442 ) Before this update, the collector would be blocked because it continually tried to use a stale connection when forwarding logs to fluentd forward receivers. With this release, the keepalive_timeout value has been set to 30 seconds ( 30s ) so that the collector recycles the connection and re-attempts to send failed messages within a reasonable amount of time. ( LOG-2534 ) Before this update, an error in the gateway component enforcing tenancy for reading logs limited access to logs that have a Kubernetes namespace, causing "audit" and some "infrastructure" logs to be unreadable. With this update, the proxy correctly detects users with admin access and allows access to logs without a namespace. ( LOG-2448 ) Before this update, the system:serviceaccount:openshift-monitoring:prometheus-k8s service account had cluster level privileges as a clusterrole and clusterrolebinding . This update restricts the service account to the openshift-logging namespace with a role and rolebinding. ( LOG-2437 ) Before this update, Linux audit log time parsing relied on an ordinal position of a key/value pair. This update changes the parsing to use a regular expression to find the time entry. ( LOG-2321 ) 1.22.2. CVEs Example 1.3. Click to expand CVEs CVE-2018-25032 CVE-2021-4028 CVE-2021-37136 CVE-2021-37137 CVE-2021-43797 CVE-2022-0778 CVE-2022-1154 CVE-2022-1271 CVE-2022-21426 CVE-2022-21434 CVE-2022-21443 CVE-2022-21476 CVE-2022-21496 CVE-2022-21698 CVE-2022-25636 1.23. Logging 5.4 The following advisories are available for logging 5.4: Logging subsystem for Red Hat OpenShift Release 5.4 1.23.1. Technology Previews Important Vector is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.23.2. About Vector Vector is a log collector offered as a tech-preview alternative to the current default collector for the logging subsystem. The following outputs are supported: elasticsearch . An external Elasticsearch instance. The elasticsearch output can use a TLS connection. kafka . A Kafka broker. The kafka output can use an unsecured or TLS connection. loki . Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. 1.23.2.1. Enabling Vector Vector is not enabled by default. Use the following steps to enable Vector on your OpenShift Container Platform cluster. Important Vector does not support FIPS Enabled Clusters. Prerequisites OpenShift Container Platform: 4.10 Logging subsystem for Red Hat OpenShift: 5.4 FIPS disabled Procedure Edit the ClusterLogging custom resource (CR) in the openshift-logging project: $ oc -n openshift-logging edit ClusterLogging instance Add a logging.openshift.io/preview-vector-collector: enabled annotation to the ClusterLogging custom resource (CR). Add vector as a collection type to the ClusterLogging custom resource (CR).
apiVersion: "logging.openshift.io/v1" kind: "ClusterLogging" metadata: name: "instance" namespace: "openshift-logging" annotations: logging.openshift.io/preview-vector-collector: enabled spec: collection: logs: type: "vector" vector: {} Additional resources Vector Documentation Important Loki Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.23.3. About Loki Loki is a horizontally scalable, highly available, multi-tenant log aggregation system currently offered as an alternative to Elasticsearch as a log store for the logging subsystem. Additional resources Loki Documentation 1.23.3.1. Deploying the Lokistack You can use the OpenShift Container Platform web console to install the LokiOperator. Prerequisites OpenShift Container Platform: 4.10 Logging subsystem for Red Hat OpenShift: 5.4 To install the LokiOperator using the OpenShift Container Platform web console: Install the LokiOperator: In the OpenShift Container Platform web console, click Operators OperatorHub . Choose LokiOperator from the list of available Operators, and click Install . Under Installation Mode , select All namespaces on the cluster . Under Installed Namespace , select openshift-operators-redhat . You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, which would cause conflicts. Select Enable operator recommended cluster monitoring on this namespace . This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . Verify that you installed the LokiOperator. Visit the Operators Installed Operators page and look for "LokiOperator." Ensure that LokiOperator is listed in all the projects whose Status is Succeeded . 1.23.4. Bug fixes Before this update, the cluster-logging-operator used cluster scoped roles and bindings to establish permissions for the Prometheus service account to scrape metrics. These permissions were created when deploying the Operator using the console interface but were missing when deploying from the command line. This update fixes the issue by making the roles and bindings namespace-scoped. ( LOG-2286 ) Before this update, a prior change to fix dashboard reconciliation introduced a ownerReferences field to the resource across namespaces. As a result, both the config map and dashboard were not created in the namespace. With this update, the removal of the ownerReferences field resolves the issue, and the OpenShift Logging dashboard is available in the console. 
Before this update, changes to the metrics dashboards did not deploy because the cluster-logging-operator did not correctly compare existing and modified config maps that contain the dashboard. With this update, the addition of a unique hash value to object labels resolves the issue. ( LOG-2071 ) Before this update, the OpenShift Logging dashboard did not correctly display the pods and namespaces in the table, which displays the top producing containers collected over the last 24 hours. With this update, the pods and namespaces are displayed correctly. ( LOG-2069 ) Before this update, when the ClusterLogForwarder was set up with Elasticsearch OutputDefault and Elasticsearch outputs did not have structured keys, the generated configuration contained the incorrect values for authentication. This update corrects the secret and certificates used. ( LOG-2056 ) Before this update, the OpenShift Logging dashboard displayed an empty CPU graph because of a reference to an invalid metric. With this update, the correct data point has been selected, resolving the issue. ( LOG-2026 ) Before this update, the Fluentd container image included builder tools that were unnecessary at run time. This update removes those tools from the image. ( LOG-1927 ) Before this update, a name change of the deployed collector in the 5.3 release caused the logging collector to generate the FluentdNodeDown alert. This update resolves the issue by fixing the job name for the Prometheus alert. ( LOG-1918 ) Before this update, the log collector was collecting its own logs due to a refactoring that changed the component name. This led to a potential feedback loop of the collector processing its own log that might result in memory and log message size issues. This update resolves the issue by excluding the collector logs from the collection. ( LOG-1774 ) Before this update, Elasticsearch generated the error "Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota." if the PVC already existed. With this update, Elasticsearch checks for existing PVCs, resolving the issue. ( LOG-2131 ) Before this update, Elasticsearch was unable to return to the ready state when the elasticsearch-signing secret was removed. With this update, Elasticsearch is able to go back to the ready state after that secret is removed. ( LOG-2171 ) Before this update, the change of the path from which the collector reads container logs caused the collector to forward some records to the wrong indices. With this update, the collector now uses the correct configuration to resolve the issue. ( LOG-2160 ) Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. ( LOG-1899 ) Before this update, the OpenShift Container Platform Logging dashboard showed the number of shards 'x' times larger than the actual value when Elasticsearch had 'x' nodes. This issue occurred because it was printing all primary shards for each Elasticsearch pod and calculating a sum on it, although the output was always for the whole Elasticsearch cluster. With this update, the number of shards is now correctly calculated. ( LOG-2156 ) Before this update, the secrets kibana and kibana-proxy were not recreated if they were deleted manually.
With this update, the elasticsearch-operator will watch the resources and automatically recreate them if deleted. ( LOG-2250 ) Before this update, tuning the buffer chunk size could cause the collector to generate a warning about the chunk size exceeding the byte limit for the event stream. With this update, you can also tune the read line limit, resolving the issue. ( LOG-2379 ) Before this update, the logging console link in OpenShift web console was not removed with the ClusterLogging CR. With this update, deleting the CR or uninstalling the Cluster Logging Operator removes the link. ( LOG-2373 ) Before this update, a change to the container logs path caused the collection metric to always be zero with older releases configured with the original path. With this update, the plugin which exposes metrics about collected logs supports reading from either path to resolve the issue. ( LOG-2462 ) 1.23.5. CVEs CVE-2022-0759 BZ-2058404 CVE-2022-21698 BZ-2045880 1.24. Logging 5.3.14 This release includes OpenShift Logging Bug Fix Release 5.3.14 . 1.24.1. Bug fixes Before this update, the log file size map generated by the log-file-metrics-exporter component did not remove entries for deleted files, resulting in increased file size, and process memory. With this update, the log file size map does not contain entries for deleted files. ( LOG-3293 ) 1.24.2. CVEs CVE-2016-3709 CVE-2020-35525 CVE-2020-35527 CVE-2020-36516 CVE-2020-36558 CVE-2021-3640 CVE-2021-30002 CVE-2022-0168 CVE-2022-0561 CVE-2022-0562 CVE-2022-0617 CVE-2022-0854 CVE-2022-0865 CVE-2022-0891 CVE-2022-0908 CVE-2022-0909 CVE-2022-0924 CVE-2022-1016 CVE-2022-1048 CVE-2022-1055 CVE-2022-1184 CVE-2022-1292 CVE-2022-1304 CVE-2022-1355 CVE-2022-1586 CVE-2022-1785 CVE-2022-1852 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2078 CVE-2022-2097 CVE-2022-2509 CVE-2022-2586 CVE-2022-2639 CVE-2022-2938 CVE-2022-3515 CVE-2022-20368 CVE-2022-21499 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-22624 CVE-2022-22628 CVE-2022-22629 CVE-2022-22662 CVE-2022-22844 CVE-2022-23960 CVE-2022-24448 CVE-2022-25255 CVE-2022-26373 CVE-2022-26700 CVE-2022-26709 CVE-2022-26710 CVE-2022-26716 CVE-2022-26717 CVE-2022-26719 CVE-2022-27404 CVE-2022-27405 CVE-2022-27406 CVE-2022-27950 CVE-2022-28390 CVE-2022-28893 CVE-2022-29581 CVE-2022-30293 CVE-2022-34903 CVE-2022-36946 CVE-2022-37434 CVE-2022-39399 CVE-2022-42898 1.25. Logging 5.3.13 This release includes RHSA-2022:68828-OpenShift Logging Bug Fix Release 5.3.13 . 1.25.1. Bug fixes None. 1.25.2. CVEs Example 1.4. Click to expand CVEs CVE-2020-35525 CVE-2020-35527 CVE-2022-0494 CVE-2022-1353 CVE-2022-2509 CVE-2022-2588 CVE-2022-3515 CVE-2022-21618 CVE-2022-21619 CVE-2022-21624 CVE-2022-21626 CVE-2022-21628 CVE-2022-23816 CVE-2022-23825 CVE-2022-29900 CVE-2022-29901 CVE-2022-32149 CVE-2022-37434 CVE-2022-39399 CVE-2022-40674 1.26. Logging 5.3.12 This release includes OpenShift Logging Bug Fix Release 5.3.12 . 1.26.1. Bug fixes None. 1.26.2. CVEs CVE-2015-20107 CVE-2022-0391 CVE-2022-21123 CVE-2022-21125 CVE-2022-21166 CVE-2022-29154 CVE-2022-32206 CVE-2022-32208 CVE-2022-34903 1.27. Logging 5.3.11 This release includes OpenShift Logging Bug Fix Release 5.3.11 . 1.27.1. Bug fixes Before this update, the Operator did not ensure that the pod was ready, which caused the cluster to reach an inoperable state during a cluster restart. With this update, the Operator marks new pods as ready before continuing to a new pod during a restart, which resolves the issue. ( LOG-2871 ) 1.27.2. 
CVEs CVE-2022-1292 CVE-2022-1586 CVE-2022-1785 CVE-2022-1897 CVE-2022-1927 CVE-2022-2068 CVE-2022-2097 CVE-2022-30631 1.28. Logging 5.3.10 This release includes RHSA-2022:5908-OpenShift Logging Bug Fix Release 5.3.10 . 1.28.1. Bug fixes BZ-2100495 1.28.2. CVEs Example 1.5. Click to expand CVEs CVE-2021-38561 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-21540 CVE-2022-21541 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 CVE-2022-34169 1.29. Logging 5.3.9 This release includes RHBA-2022:5557-OpenShift Logging Bug Fix Release 5.3.9 . 1.29.1. Bug fixes Before this update, the logging collector included a path as a label for the metrics it produced. This path changed frequently and contributed to significant storage changes for the Prometheus server. With this update, the label has been dropped to resolve the issue and reduce storage consumption. ( LOG-2682 ) 1.29.2. CVEs Example 1.6. Click to expand CVEs CVE-2020-28915 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-27666 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 1.30. Logging 5.3.8 This release includes RHBA-2022:5010-OpenShift Logging Bug Fix Release 5.3.8 1.30.1. Bug fixes (None.) 1.30.2. CVEs Example 1.7. Click to expand CVEs CVE-2018-25032 CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 CVE-2020-19131 CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 CVE-2021-3634 CVE-2021-3669 CVE-2021-3737 CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 CVE-2021-4189 CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 CVE-2021-21781 CVE-2021-23222 CVE-2021-26401 CVE-2021-29154 CVE-2021-37159 CVE-2021-41617 CVE-2021-41864 CVE-2021-42739 CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 CVE-2022-0322 CVE-2022-1011 CVE-2022-1271 1.31. OpenShift Logging 5.3.7 This release includes RHSA-2022:2217 OpenShift Logging Bug Fix Release 5.3.7 1.31.1. Bug fixes Before this update, Linux audit log time parsing relied on an ordinal position of key/value pair. This update changes the parsing to utilize a regex to find the time entry. ( LOG-2322 ) Before this update, some log forwarder outputs could re-order logs with the same time-stamp. With this update, a sequence number has been added to the log record to order entries that have matching timestamps. ( LOG-2334 ) Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. ( LOG-2450 ) Before this update, system:serviceaccount:openshift-monitoring:prometheus-k8s had cluster level privileges as a clusterrole and clusterrolebinding . This update restricts the serviceaccount to the openshift-logging namespace with a role and rolebinding. ( LOG-2481) ) 1.31.2. CVEs Example 1.8. Click to expand CVEs CVE-2018-25032 CVE-2021-4028 CVE-2021-37136 CVE-2021-37137 CVE-2021-43797 CVE-2022-0759 CVE-2022-0778 CVE-2022-1154 CVE-2022-1271 CVE-2022-21426 CVE-2022-21434 CVE-2022-21443 CVE-2022-21476 CVE-2022-21496 CVE-2022-21698 CVE-2022-25636 1.32. OpenShift Logging 5.3.6 This release includes RHBA-2022:1377 OpenShift Logging Bug Fix Release 5.3.6 1.32.1. 
Bug fixes Before this update, defining a toleration with no key and the existing Operator caused the Operator to be unable to complete an upgrade. With this update, this toleration no longer blocks the upgrade from completing. ( LOG-2126 ) Before this change, it was possible for the collector to generate a warning where the chunk byte limit was exceeding an emitted event. With this change, you can tune the readline limit to resolve the issue as advised by the upstream documentation. ( LOG-2380 ) 1.33. OpenShift Logging 5.3.5 This release includes RHSA-2022:0721 OpenShift Logging Bug Fix Release 5.3.5 1.33.1. Bug fixes Before this update, if you removed OpenShift Logging from OpenShift Container Platform, the web console continued displaying a link to the Logging page. With this update, removing or uninstalling OpenShift Logging also removes that link. ( LOG-2182 ) 1.33.2. CVEs Example 1.9. Click to expand CVEs CVE-2020-28491 CVE-2021-3521 CVE-2021-3872 CVE-2021-3984 CVE-2021-4019 CVE-2021-4122 CVE-2021-4192 CVE-2021-4193 CVE-2022-0552 1.34. OpenShift Logging 5.3.4 This release includes RHBA-2022:0411 OpenShift Logging Bug Fix Release 5.3.4 1.34.1. Bug fixes Before this update, changes to the metrics dashboards had not yet been deployed because the cluster-logging-operator did not correctly compare existing and desired config maps that contained the dashboard. This update fixes the logic by adding a unique hash value to the object labels. ( LOG-2066 ) Before this update, Elasticsearch pods failed to start after updating with FIPS enabled. With this update, Elasticsearch pods start successfully. ( LOG-1974 ) Before this update, elasticsearch generated the error "Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota." if the PVC already existed. With this update, elasticsearch checks for existing PVCs, resolving the issue. ( LOG-2127 ) 1.34.2. CVEs Example 1.10. Click to expand CVEs CVE-2021-3521 CVE-2021-3872 CVE-2021-3984 CVE-2021-4019 CVE-2021-4122 CVE-2021-4155 CVE-2021-4192 CVE-2021-4193 CVE-2022-0185 CVE-2022-21248 CVE-2022-21277 CVE-2022-21282 CVE-2022-21283 CVE-2022-21291 CVE-2022-21293 CVE-2022-21294 CVE-2022-21296 CVE-2022-21299 CVE-2022-21305 CVE-2022-21340 CVE-2022-21341 CVE-2022-21360 CVE-2022-21365 CVE-2022-21366 1.35. OpenShift Logging 5.3.3 This release includes RHSA-2022:0227 OpenShift Logging Bug Fix Release 5.3.3 1.35.1. Bug fixes Before this update, changes to the metrics dashboards had not yet been deployed because the cluster-logging-operator did not correctly compare existing and desired configmaps containing the dashboard. This update fixes the logic by adding a dashboard unique hash value to the object labels.( LOG-2066 ) This update changes the log4j dependency to 2.17.1 to resolve CVE-2021-44832 .( LOG-2102 ) 1.35.2. CVEs Example 1.11. Click to expand CVEs CVE-2021-27292 BZ-1940613 CVE-2021-44832 BZ-2035951 1.36. OpenShift Logging 5.3.2 This release includes RHSA-2022:0044 OpenShift Logging Bug Fix Release 5.3.2 1.36.1. Bug fixes Before this update, Elasticsearch rejected logs from the Event Router due to a parsing error. This update changes the data model to resolve the parsing error. However, as a result, indices might cause warnings or errors within Kibana. The kubernetes.event.metadata.resourceVersion field causes errors until existing indices are removed or reindexed. If this field is not used in Kibana, you can ignore the error messages. 
If you have a retention policy that deletes old indices, the policy eventually removes the old indices and stops the error messages. Otherwise, manually reindex to stop the error messages. ( LOG-2087 ) Before this update, the OpenShift Logging Dashboard displayed the wrong pod namespace in the table that displays top producing and collected containers over the last 24 hours. With this update, the OpenShift Logging Dashboard displays the correct pod namespace. ( LOG-2051 ) Before this update, if outputDefaults.elasticsearch.structuredTypeKey in the ClusterLogForwarder custom resource (CR) instance did not have a structured key, the CR replaced the output secret with the default secret used to communicate to the default log store. With this update, the defined output secret is correctly used. ( LOG-2046 ) 1.36.2. CVEs Example 1.12. Click to expand CVEs CVE-2020-36327 BZ-1958999 CVE-2021-45105 BZ-2034067 CVE-2021-3712 CVE-2021-20321 CVE-2021-42574 1.37. OpenShift Logging 5.3.1 This release includes RHSA-2021:5129 OpenShift Logging Bug Fix Release 5.3.1 1.37.1. Bug fixes Before this update, the Fluentd container image included builder tools that were unnecessary at run time. This update removes those tools from the image. ( LOG-1998 ) Before this update, the Logging dashboard displayed an empty CPU graph because of a reference to an invalid metric. With this update, the Logging dashboard displays CPU graphs correctly. ( LOG-1925 ) Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. ( LOG-1897 ) 1.37.2. CVEs Example 1.13. Click to expand CVEs CVE-2021-21409 BZ-1944888 CVE-2021-37136 BZ-2004133 CVE-2021-37137 BZ-2004135 CVE-2021-44228 BZ-2030932 CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14145 CVE-2020-14155 CVE-2020-16135 CVE-2020-17541 CVE-2020-24370 CVE-2020-35521 CVE-2020-35522 CVE-2020-35523 CVE-2020-35524 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2021-3200 CVE-2021-3426 CVE-2021-3445 CVE-2021-3481 CVE-2021-3572 CVE-2021-3580 CVE-2021-3712 CVE-2021-3800 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-20317 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-27645 CVE-2021-28153 CVE-2021-31535 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-42574 CVE-2021-43267 CVE-2021-43527 CVE-2021-45046 1.38. OpenShift Logging 5.3.0 This release includes RHSA-2021:4627 OpenShift Logging Bug Fix Release 5.3.0 1.38.1. New features and enhancements With this update, authorization options for Log Forwarding have been expanded. Outputs may now be configured with SASL, username/password, or TLS. 1.38.2. Bug fixes Before this update, if you forwarded logs using the syslog protocol, serializing a ruby hash encoded key/value pairs to contain a '⇒' character and replaced tabs with "#11". This update fixes the issue so that log messages are correctly serialized as valid JSON. ( LOG-1494 ) Before this update, application logs were not correctly configured to forward to the proper Cloudwatch stream with multi-line error detection enabled. 
( LOG-1939 ) Before this update, a name change of the deployed collector in the 5.3 release caused the 'fluentnodedown' alert to be generated. ( LOG-1918 ) Before this update, a regression introduced in a prior release configuration caused the collector to flush its buffered messages before shutdown, creating a delay in the termination and restart of collector Pods. With this update, fluentd no longer flushes buffers at shutdown, resolving the issue. ( LOG-1735 ) Before this update, a regression introduced in a prior release intentionally disabled JSON message parsing. This update re-enables JSON parsing. It also sets the log entry "level" based on the "level" field in the parsed JSON message or by using regex to extract a match from a message field. ( LOG-1199 ) Before this update, the ClusterLogging custom resource (CR) applied the value of the totalLimitSize field to the Fluentd total_limit_size field, even if the required buffer space was not available. With this update, the CR applies the lesser of the two totalLimitSize or 'default' values to the Fluentd total_limit_size field, resolving the issue. ( LOG-1776 ) 1.38.3. Known issues If you forward logs to an external Elasticsearch server and then change a configured value in the pipeline secret, such as the username and password, the Fluentd forwarder loads the new secret but uses the old value to connect to an external Elasticsearch server. This issue happens because the Red Hat OpenShift Logging Operator does not currently monitor secrets for content changes. ( LOG-1652 ) As a workaround, if you change the secret, you can force the Fluentd pods to redeploy by entering: $ oc delete pod -l component=collector 1.38.4. Deprecated and removed features Some features available in previous releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 1.38.4.1. Forwarding logs using the legacy Fluentd and legacy syslog methods have been removed In OpenShift Logging 5.3, the legacy methods of forwarding logs to Syslog and Fluentd are removed. Bug fixes and support are provided through the end of the OpenShift Logging 5.2 life cycle, after which no new feature enhancements are made. Instead, use the following non-legacy methods: Forwarding logs using the Fluentd forward protocol Forwarding logs using the syslog protocol 1.38.4.2. Configuration mechanisms for legacy forwarding methods have been removed In OpenShift Logging 5.3, the legacy configuration mechanism for log forwarding is removed: You cannot forward logs using the legacy Fluentd method and legacy Syslog method. Use the standard log forwarding methods instead. 1.38.5. CVEs Example 1.14.
Click to expand CVEs CVE-2018-20673 CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-14615 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-0427 CVE-2020-10001 CVE-2020-12762 CVE-2020-13435 CVE-2020-14145 CVE-2020-14155 CVE-2020-16135 CVE-2020-17541 CVE-2020-24370 CVE-2020-24502 CVE-2020-24503 CVE-2020-24504 CVE-2020-24586 CVE-2020-24587 CVE-2020-24588 CVE-2020-26139 CVE-2020-26140 CVE-2020-26141 CVE-2020-26143 CVE-2020-26144 CVE-2020-26145 CVE-2020-26146 CVE-2020-26147 CVE-2020-27777 CVE-2020-29368 CVE-2020-29660 CVE-2020-35448 CVE-2020-35521 CVE-2020-35522 CVE-2020-35523 CVE-2020-35524 CVE-2020-36158 CVE-2020-36312 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2020-36386 CVE-2021-0129 CVE-2021-3200 CVE-2021-3348 CVE-2021-3426 CVE-2021-3445 CVE-2021-3481 CVE-2021-3487 CVE-2021-3489 CVE-2021-3564 CVE-2021-3572 CVE-2021-3573 CVE-2021-3580 CVE-2021-3600 CVE-2021-3635 CVE-2021-3659 CVE-2021-3679 CVE-2021-3732 CVE-2021-3778 CVE-2021-3796 CVE-2021-3800 CVE-2021-20194 CVE-2021-20197 CVE-2021-20231 CVE-2021-20232 CVE-2021-20239 CVE-2021-20266 CVE-2021-20284 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23133 CVE-2021-23840 CVE-2021-23841 CVE-2021-27645 CVE-2021-28153 CVE-2021-28950 CVE-2021-28971 CVE-2021-29155 lCVE-2021-29646 CVE-2021-29650 CVE-2021-31440 CVE-2021-31535 CVE-2021-31829 CVE-2021-31916 CVE-2021-33033 CVE-2021-33194 CVE-2021-33200 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-42574 1.39. Logging 5.2.13 This release includes RHSA-2022:5909-OpenShift Logging Bug Fix Release 5.2.13 . 1.39.1. Bug fixes BZ-2100495 1.39.2. CVEs Example 1.15. Click to expand CVEs CVE-2021-38561 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-21540 CVE-2022-21541 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 CVE-2022-34169 1.40. Logging 5.2.12 This release includes RHBA-2022:5558-OpenShift Logging Bug Fix Release 5.2.12 . 1.40.1. Bug fixes None. 1.40.2. CVEs Example 1.16. Click to expand CVEs CVE-2020-28915 CVE-2021-40528 CVE-2022-1271 CVE-2022-1621 CVE-2022-1629 CVE-2022-22576 CVE-2022-25313 CVE-2022-25314 CVE-2022-26691 CVE-2022-27666 CVE-2022-27774 CVE-2022-27776 CVE-2022-27782 CVE-2022-29824 1.41. Logging 5.2.11 This release includes RHBA-2022:5012-OpenShift Logging Bug Fix Release 5.2.11 1.41.1. Bug fixes Before this update, clusters configured to perform CloudWatch forwarding wrote rejected log files to temporary storage, causing cluster instability over time. With this update, chunk backup for CloudWatch has been disabled, resolving the issue. ( LOG-2635 ) 1.41.2. CVEs Example 1.17. Click to expand CVEs CVE-2018-25032 CVE-2020-0404 CVE-2020-4788 CVE-2020-13974 CVE-2020-19131 CVE-2020-27820 CVE-2021-0941 CVE-2021-3612 CVE-2021-3634 CVE-2021-3669 CVE-2021-3737 CVE-2021-3743 CVE-2021-3744 CVE-2021-3752 CVE-2021-3759 CVE-2021-3764 CVE-2021-3772 CVE-2021-3773 CVE-2021-4002 CVE-2021-4037 CVE-2021-4083 CVE-2021-4157 CVE-2021-4189 CVE-2021-4197 CVE-2021-4203 CVE-2021-20322 CVE-2021-21781 CVE-2021-23222 CVE-2021-26401 CVE-2021-29154 CVE-2021-37159 CVE-2021-41617 CVE-2021-41864 CVE-2021-42739 CVE-2021-43056 CVE-2021-43389 CVE-2021-43976 CVE-2021-44733 CVE-2021-45485 CVE-2021-45486 CVE-2022-0001 CVE-2022-0002 CVE-2022-0286 CVE-2022-0322 CVE-2022-1011 CVE-2022-1271 1.42. 
OpenShift Logging 5.2.10 This release includes OpenShift Logging Bug Fix Release 5.2.10 1.42.1. Bug fixes Before this update, some log forwarder outputs could reorder logs with the same timestamp. With this update, a sequence number has been added to the log record to order entries that have matching timestamps. ( LOG-2335 ) Before this update, clusters with a large number of namespaces caused Elasticsearch to stop serving requests because the list of namespaces reached the maximum header size limit. With this update, headers only include a list of namespace names, resolving the issue. ( LOG-2475 ) Before this update, system:serviceaccount:openshift-monitoring:prometheus-k8s had cluster-level privileges as a clusterrole and clusterrolebinding . This update restricts the serviceaccount to the openshift-logging namespace with a role and rolebinding. ( LOG-2480 ) Before this update, the cluster-logging-operator utilized cluster scoped roles and bindings to establish permissions for the Prometheus service account to scrape metrics. These permissions were only created when deploying the Operator using the console interface and were missing when the Operator was deployed from the command line. This update fixes the issue by making the role and binding namespace scoped. ( LOG-1972 ) 1.42.2. CVEs Example 1.18. Click to expand CVEs CVE-2018-25032 CVE-2021-4028 CVE-2021-37136 CVE-2021-37137 CVE-2021-43797 CVE-2022-0778 CVE-2022-1154 CVE-2022-1271 CVE-2022-21426 CVE-2022-21434 CVE-2022-21443 CVE-2022-21476 CVE-2022-21496 CVE-2022-21698 CVE-2022-25636 1.43. OpenShift Logging 5.2.9 This release includes RHBA-2022:1375 OpenShift Logging Bug Fix Release 5.2.9 1.43.1. Bug fixes Before this update, defining a toleration with no key and the existing Operator caused the Operator to be unable to complete an upgrade. With this update, this toleration no longer blocks the upgrade from completing. ( LOG-2304 ) 1.44. OpenShift Logging 5.2.8 This release includes RHSA-2022:0728 OpenShift Logging Bug Fix Release 5.2.8 1.44.1. Bug fixes Before this update, if you removed OpenShift Logging from OpenShift Container Platform, the web console continued displaying a link to the Logging page. With this update, removing or uninstalling OpenShift Logging also removes that link. ( LOG-2180 ) 1.44.2. CVEs Example 1.19. Click to expand CVEs CVE-2020-28491 BZ-1930423 CVE-2022-0552 BZ-2052539 1.45. OpenShift Logging 5.2.7 This release includes RHBA-2022:0478 OpenShift Logging Bug Fix Release 5.2.7 1.45.1. Bug fixes Before this update, Elasticsearch pods with FIPS enabled failed to start after updating. With this update, Elasticsearch pods start successfully. ( LOG-2000 ) Before this update, if a persistent volume claim (PVC) already existed, Elasticsearch generated an error, "Unable to create PersistentVolumeClaim due to forbidden: exceeded quota: infra-storage-quota." With this update, Elasticsearch checks for existing PVCs, resolving the issue. ( LOG-2118 ) 1.45.2. CVEs Example 1.20. Click to expand CVEs CVE-2021-3521 CVE-2021-3872 CVE-2021-3984 CVE-2021-4019 CVE-2021-4122 CVE-2021-4155 CVE-2021-4192 CVE-2021-4193 CVE-2022-0185 1.46. OpenShift Logging 5.2.6 This release includes RHSA-2022:0230 OpenShift Logging Bug Fix Release 5.2.6 1.46.1. Bug fixes Before this update, the release did not include a filter change, which caused Fluentd to crash. With this update, the missing filter has been corrected. ( LOG-2104 ) This update changes the log4j dependency to 2.17.1 to resolve CVE-2021-44832 . ( LOG-2101 ) 1.46.2. CVEs Example 1.21. 
Click to expand CVEs CVE-2021-27292 BZ-1940613 CVE-2021-44832 BZ-2035951 1.47. OpenShift Logging 5.2.5 This release includes RHSA-2022:0043 OpenShift Logging Bug Fix Release 5.2.5 1.47.1. Bug fixes Before this update, Elasticsearch rejected logs from the Event Router due to a parsing error. This update changes the data model to resolve the parsing error. However, as a result, indices might cause warnings or errors within Kibana. The kubernetes.event.metadata.resourceVersion field causes errors until existing indices are removed or reindexed. If this field is not used in Kibana, you can ignore the error messages. If you have a retention policy that deletes old indices, the policy eventually removes the old indices and stops the error messages. Otherwise, manually reindex to stop the error messages. LOG-2087 ) 1.47.2. CVEs Example 1.22. Click to expand CVEs CVE-2021-3712 CVE-2021-20321 CVE-2021-42574 CVE-2021-45105 1.48. OpenShift Logging 5.2.4 This release includes RHSA-2021:5127 OpenShift Logging Bug Fix Release 5.2.4 1.48.1. Bug fixes Before this update, records shipped via syslog would serialize a ruby hash encoding key/value pairs to contain a '⇒' character, as well as replace tabs with "#11". This update serializes the message correctly as proper JSON. ( LOG-1775 ) Before this update, the Elasticsearch Prometheus exporter plugin compiled index-level metrics using a high-cost query that impacted the Elasticsearch node performance. This update implements a lower-cost query that improves performance. ( LOG-1970 ) Before this update, Elasticsearch sometimes rejected messages when Log Forwarding was configured with multiple outputs. This happened because configuring one of the outputs modified message content to be a single message. With this update, Log Forwarding duplicates the messages for each output so that output-specific processing does not affect the other outputs. ( LOG-1824 ) 1.48.2. CVEs Example 1.23. Click to expand CVEs CVE-2018-25009 CVE-2018-25010 CVE-2018-25012 CVE-2018-25013 CVE-2018-25014 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14145 CVE-2020-14155 CVE-2020-16135 CVE-2020-17541 CVE-2020-24370 CVE-2020-35521 CVE-2020-35522 CVE-2020-35523 CVE-2020-35524 CVE-2020-36330 CVE-2020-36331 CVE-2020-36332 CVE-2021-3200 CVE-2021-3426 CVE-2021-3445 CVE-2021-3481 CVE-2021-3572 CVE-2021-3580 CVE-2021-3712 CVE-2021-3800 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-20317 CVE-2021-21409 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-27645 CVE-2021-28153 CVE-2021-31535 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 CVE-2021-37136 CVE-2021-37137 CVE-2021-42574 CVE-2021-43267 CVE-2021-43527 CVE-2021-44228 CVE-2021-45046 1.49. OpenShift Logging 5.2.3 This release includes RHSA-2021:4032 OpenShift Logging Bug Fix Release 5.2.3 1.49.1. Bug fixes Before this update, some alerts did not include a namespace label. This omission does not comply with the OpenShift Monitoring Team's guidelines for writing alerting rules in OpenShift Container Platform. With this update, all the alerts in Elasticsearch Operator include a namespace label and follow all the guidelines for writing alerting rules in OpenShift Container Platform. ( LOG-1857 ) Before this update, a regression introduced in a prior release intentionally disabled JSON message parsing. This update re-enables JSON parsing. 
It also sets the log entry level based on the level field in parsed JSON message or by using regex to extract a match from a message field. ( LOG-1759 ) 1.49.2. CVEs Example 1.24. Click to expand CVEs CVE-2021-23369 BZ-1948761 CVE-2021-23383 BZ-1956688 CVE-2018-20673 CVE-2019-5827 CVE-2019-13750 CVE-2019-13751 CVE-2019-17594 CVE-2019-17595 CVE-2019-18218 CVE-2019-19603 CVE-2019-20838 CVE-2020-12762 CVE-2020-13435 CVE-2020-14155 CVE-2020-16135 CVE-2020-24370 CVE-2021-3200 CVE-2021-3426 CVE-2021-3445 CVE-2021-3572 CVE-2021-3580 CVE-2021-3778 CVE-2021-3796 CVE-2021-3800 CVE-2021-20231 CVE-2021-20232 CVE-2021-20266 CVE-2021-22876 CVE-2021-22898 CVE-2021-22925 CVE-2021-23840 CVE-2021-23841 CVE-2021-27645 CVE-2021-28153 CVE-2021-33560 CVE-2021-33574 CVE-2021-35942 CVE-2021-36084 CVE-2021-36085 CVE-2021-36086 CVE-2021-36087 1.50. OpenShift Logging 5.2.2 This release includes RHBA-2021:3747 OpenShift Logging Bug Fix Release 5.2.2 1.50.1. Bug fixes Before this update, the ClusterLogging custom resource (CR) applied the value of the totalLimitSize field to the Fluentd total_limit_size field, even if the required buffer space was not available. With this update, the CR applies the lesser of the two totalLimitSize or 'default' values to the Fluentd total_limit_size field, resolving the issue.( LOG-1738 ) Before this update, a regression introduced in a prior release configuration caused the collector to flush its buffered messages before shutdown, creating a delay to the termination and restart of collector pods. With this update, Fluentd no longer flushes buffers at shutdown, resolving the issue. ( LOG-1739 ) Before this update, an issue in the bundle manifests prevented installation of the Elasticsearch Operator through OLM on OpenShift Container Platform 4.9. With this update, a correction to bundle manifests re-enables installation and upgrade in 4.9.( LOG-1780 ) 1.50.2. CVEs Example 1.25. Click to expand CVEs CVE-2020-25648 CVE-2021-22922 CVE-2021-22923 CVE-2021-22924 CVE-2021-36222 CVE-2021-37576 CVE-2021-37750 CVE-2021-38201 1.51. OpenShift Logging 5.2.1 This release includes RHBA-2021:3550 OpenShift Logging Bug Fix Release 5.2.1 1.51.1. Bug fixes Before this update, due to an issue in the release pipeline scripts, the value of the olm.skipRange field remained unchanged at 5.2.0 instead of reflecting the current release number. This update fixes the pipeline scripts to update the value of this field when the release numbers change. ( LOG-1743 ) 1.51.2. CVEs (None) 1.52. OpenShift Logging 5.2.0 This release includes RHBA-2021:3393 OpenShift Logging Bug Fix Release 5.2.0 1.52.1. New features and enhancements With this update, you can forward log data to Amazon CloudWatch, which provides application and infrastructure monitoring. For more information, see Forwarding logs to Amazon CloudWatch . ( LOG-1173 ) With this update, you can forward log data to Loki, a horizontally scalable, highly available, multi-tenant log aggregation system. For more information, see Forwarding logs to Loki . ( LOG-684 ) With this update, if you use the Fluentd forward protocol to forward log data over a TLS-encrypted connection, now you can use a password-encrypted private key file and specify the passphrase in the Cluster Log Forwarder configuration. For more information, see Forwarding logs using the Fluentd forward protocol . ( LOG-1525 ) This enhancement enables you to use a username and password to authenticate a log forwarding connection to an external Elasticsearch instance. 
For example, if you cannot use mutual TLS (mTLS) because a third-party operates the Elasticsearch instance, you can use HTTP or HTTPS and set a secret that contains the username and password. For more information, see Forwarding logs to an external Elasticsearch instance . ( LOG-1022 ) With this update, you can collect OVN network policy audit logs for forwarding to a logging server. ( LOG-1526 ) By default, the data model introduced in OpenShift Container Platform 4.5 gave logs from different namespaces a single index in common. This change made it harder to see which namespaces produced the most logs. The current release adds namespace metrics to the Logging dashboard in the OpenShift Container Platform console. With these metrics, you can see which namespaces produce logs and how many logs each namespace produces for a given timestamp. To see these metrics, open the Administrator perspective in the OpenShift Container Platform web console, and navigate to Observe Dashboards Logging/Elasticsearch . ( LOG-1680 ) The current release, OpenShift Logging 5.2, enables two new metrics: For a given timestamp or duration, you can see the total logs produced or logged by individual containers, and the total logs collected by the collector. These metrics are labeled by namespace, pod, and container name so that you can see how many logs each namespace and pod collects and produces. ( LOG-1213 ) 1.52.2. Bug fixes Before this update, when the OpenShift Elasticsearch Operator created index management cronjobs, it added the POLICY_MAPPING environment variable twice, which caused the apiserver to report the duplication. This update fixes the issue so that the POLICY_MAPPING environment variable is set only once per cronjob, and there is no duplication for the apiserver to report. ( LOG-1130 ) Before this update, suspending an Elasticsearch cluster to zero nodes did not suspend the index-management cronjobs, which put these cronjobs into maximum backoff. Then, after unsuspending the Elasticsearch cluster, these cronjobs stayed halted due to maximum backoff reached. This update resolves the issue by suspending the cronjobs and the cluster. ( LOG-1268 ) Before this update, in the Logging dashboard in the OpenShift Container Platform console, the list of top 10 log-producing containers was missing the "chart namespace" label and provided the incorrect metric name, fluentd_input_status_total_bytes_logged . With this update, the chart shows the namespace label and the correct metric name, log_logged_bytes_total . ( LOG-1271 ) Before this update, if an index management cronjob terminated with an error, it did not report the error exit code: instead, its job status was "complete." This update resolves the issue by reporting the error exit codes of index management cronjobs that terminate with errors. ( LOG-1273 ) The priorityclasses.v1beta1.scheduling.k8s.io was removed in 1.22 and replaced by priorityclasses.v1.scheduling.k8s.io ( v1beta1 was replaced by v1 ). Before this update, APIRemovedInNextReleaseInUse alerts were generated for priorityclasses because v1beta1 was still present . This update resolves the issue by replacing v1beta1 with v1 . The alert is no longer generated. ( LOG-1385 ) Previously, the OpenShift Elasticsearch Operator and Red Hat OpenShift Logging Operator did not have the annotation that was required for them to appear in the OpenShift Container Platform web console list of Operators that can run in a disconnected environment. 
This update adds the operators.openshift.io/infrastructure-features: '["Disconnected"]' annotation to these two Operators so that they appear in the list of Operators that run in disconnected environments. ( LOG-1420 ) Before this update, Red Hat OpenShift Logging Operator pods were scheduled on CPU cores that were reserved for customer workloads on performance-optimized single-node clusters. With this update, cluster logging Operator pods are scheduled on the correct CPU cores. ( LOG-1440 ) Before this update, some log entries had unrecognized UTF-8 bytes, which caused Elasticsearch to reject the messages and block the entire buffered payload. With this update, rejected payloads drop the invalid log entries and resubmit the remaining entries to resolve the issue. ( LOG-1499 ) Before this update, the kibana-proxy pod sometimes entered the CrashLoopBackoff state and logged the following message Invalid configuration: cookie_secret must be 16, 24, or 32 bytes to create an AES cipher when pass_access_token == true or cookie_refresh != 0, but is 29 bytes. The exact actual number of bytes could vary. With this update, the generation of the Kibana session secret has been corrected, and the kibana-proxy pod no longer enters a CrashLoopBackoff state due to this error. ( LOG-1446 ) Before this update, the AWS CloudWatch Fluentd plugin logged its AWS API calls to the Fluentd log at all log levels, consuming additional OpenShift Container Platform node resources. With this update, the AWS CloudWatch Fluentd plugin logs AWS API calls only at the "debug" and "trace" log levels. This way, at the default "warn" log level, Fluentd does not consume extra node resources. ( LOG-1071 ) Before this update, the Elasticsearch OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. ( LOG-1276 ) Before this update, in the Logging dashboard in the OpenShift Container Platform console, the list of top 10 log-producing containers lacked data points. This update resolves the issue, and the dashboard displays all data points. ( LOG-1353 ) Before this update, if you were tuning the performance of the Fluentd log forwarder by adjusting the chunkLimitSize and totalLimitSize values, the Setting queued_chunks_limit_size for each buffer to message reported values that were too low. The current update fixes this issue so that this message reports the correct values. ( LOG-1411 ) Before this update, the Kibana OpenDistro security plugin caused user index migrations to fail. This update resolves the issue by providing a newer version of the plugin. Now, index migrations proceed without errors. ( LOG-1558 ) Before this update, using a namespace input filter prevented logs in that namespace from appearing in other inputs. With this update, logs are sent to all inputs that can accept them. ( LOG-1570 ) Before this update, a missing license file for the viaq/logerr dependency caused license scanners to abort without success. With this update, the viaq/logerr dependency is licensed under Apache 2.0 and the license scanners run successfully. ( LOG-1590 ) Before this update, an incorrect brew tag for curator5 within the elasticsearch-operator-bundle build pipeline caused the pull of an image pinned to a dummy SHA1. With this update, the build pipeline uses the logging-curator5-rhel8 reference for curator5 , enabling index management cronjobs to pull the correct image from registry.redhat.io . 
( LOG-1624 ) Before this update, an issue with the ServiceAccount permissions caused errors such as no permissions for [indices:admin/aliases/get] . With this update, a permission fix resolves the issue. ( LOG-1657 ) Before this update, the Custom Resource Definition (CRD) for the Red Hat OpenShift Logging Operator was missing the Loki output type, which caused the admission controller to reject the ClusterLogForwarder custom resource object. With this update, the CRD includes Loki as an output type so that administrators can configure ClusterLogForwarder to send logs to a Loki server. ( LOG-1683 ) Before this update, OpenShift Elasticsearch Operator reconciliation of the ServiceAccounts overwrote third-party-owned fields that contained secrets. This issue caused memory and CPU spikes due to frequent recreation of secrets. This update resolves the issue. Now, the OpenShift Elasticsearch Operator does not overwrite third-party-owned fields. ( LOG-1714 ) Before this update, in the ClusterLogging custom resource (CR) definition, if you specified a flush_interval value but did not set flush_mode to interval , the Red Hat OpenShift Logging Operator generated a Fluentd configuration. However, the Fluentd collector generated an error at runtime. With this update, the Red Hat OpenShift Logging Operator validates the ClusterLogging CR definition and only generates the Fluentd configuration if both fields are specified. ( LOG-1723 ) 1.52.3. Known issues If you forward logs to an external Elasticsearch server and then change a configured value in the pipeline secret, such as the username and password, the Fluentd forwarder loads the new secret but uses the old value to connect to an external Elasticsearch server. This issue happens because the Red Hat OpenShift Logging Operator does not currently monitor secrets for content changes. ( LOG-1652 ) As a workaround, if you change the secret, you can force the Fluentd pods to redeploy by entering: USD oc delete pod -l component=collector 1.52.4. Deprecated and removed features Some features available in previous releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Logging and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. 1.52.5. Forwarding logs using the legacy Fluentd and legacy syslog methods have been deprecated From OpenShift Container Platform 4.6 to the present, forwarding logs by using the following legacy methods has been deprecated and will be removed in a future release: Forwarding logs using the legacy Fluentd method Forwarding logs using the legacy syslog method Instead, use the following non-legacy methods: Forwarding logs using the Fluentd forward protocol Forwarding logs using the syslog protocol (see the configuration sketch after this entry) 1.52.6. CVEs Example 1.26. Click to expand CVEs CVE-2021-22922 CVE-2021-22923 CVE-2021-22924 CVE-2021-32740 CVE-2021-36222 CVE-2021-37750 | [
"oc -n openshift-logging edit ClusterLogging instance",
"apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" annotations: logging.openshift.io/preview-vector-collector: enabled spec: collection: logs: type: \"vector\" vector: {}",
"oc delete pod -l component=collector",
"oc delete pod -l component=collector"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/logging/release-notes |
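As the deprecation notes above recommend moving from the legacy Fluentd and syslog forwarding methods to the ClusterLogForwarder API, the following is a minimal sketch of a syslog output defined through that API. The output name, pipeline name, and the tls://rsyslog.example.com:514 endpoint are placeholder values chosen for illustration, not values taken from the release notes.

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-remote                    # placeholder output name
    type: syslog
    syslog:
      rfc: RFC5424                          # syslog message format
    url: 'tls://rsyslog.example.com:514'    # placeholder receiver endpoint
  pipelines:
  - name: forward-to-syslog                 # placeholder pipeline name
    inputRefs:
    - application
    - infrastructure
    outputRefs:
    - rsyslog-remote

After the object is created in the openshift-logging namespace, the Red Hat OpenShift Logging Operator regenerates the collector configuration, so a manual restart of the collector pods is normally not required.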
2.5. Installing from a PXE Server | 2.5. Installing from a PXE Server To boot your computer using a PXE server, you need a properly configured server and a network interface in your computer that supports PXE. Configure the computer to boot from the network interface. This option is in the BIOS, and may be labeled Network Boot or Boot Services . Once you properly configure PXE booting, the computer can boot the Red Hat Gluster Storage Server installation system without any other media. To boot a computer from a PXE server: Ensure that the network cable is attached. The link indicator light on the network socket should be lit, even if the computer is not switched on. Switch on the computer. A menu screen appears. Press the number key that corresponds to the preferred option. If your computer does not boot from the netboot server, ensure that the BIOS is configured so that the computer boots first from the correct network interface. Some BIOS systems specify the network interface as a possible boot device, but do not support the PXE standard. See your hardware documentation for more information. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/installing_from_a_pxe_server |
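The preceding PXE section assumes a properly configured PXE server. As a rough sketch of what produces the menu screen described above, a pxelinux configuration on that server might look like the following; the file path, kernel and initrd locations, and the inst.repo URL are assumptions for illustration and are not taken from this guide.

# /var/lib/tftpboot/pxelinux.cfg/default on the PXE server (paths are assumptions)
prompt 1
timeout 600
default 1
label 1
  menu label Install Red Hat Gluster Storage Server
  kernel images/rhgs/vmlinuz
  append initrd=images/rhgs/initrd.img inst.repo=http://pxe.example.com/rhgs/

With a configuration like this, the number you press at the boot menu corresponds to the label entries defined on the server.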
20.12. Retrieving Information about Your Virtual Machine | 20.12. Retrieving Information about Your Virtual Machine 20.12.1. Displaying Device Block Statistics By default, the virsh domblkstat command displays the block statistics for the first block device defined for the domain. To view statistics of other block devices, use the virsh domblklist domain command to list all block devices, and then select a specific block device and display it by specifying either the Target or Source name from the virsh domblklist command output after the domain name. Note that not every hypervisor can display every field. To make sure that the output is presented in its most legible form use the --human argument. Example 20.21. How to display block statistics for a guest virtual machine The following example displays the devices that are defined for the guest1 virtual machine, and then lists the block statistics for that device. 20.12.2. Retrieving Network Interface Statistics The virsh domifstat domain interface-device command displays the network interface statistics for the specified device running on a given guest virtual machine. To determine which interface devices are defined for the domain, use the virsh domiflist command and use the output in the Interface column. Example 20.22. How to display networking statistics for a guest virtual machine The following example obtains the networking interface defined for the guest1 virtual machine, and then displays the networking statistics on the obtained interface ( macvtap0 ): 20.12.3. Modifying the Link State of a Guest Virtual Machine's Virtual Interface The virsh domif-setlink domain interface-device state command configures the status of the specified interface device link state as either up or down . To determine which interface devices are defined for the domain, use the virsh domiflist command and use either the Interface or MAC column as the interface device option. By default, virsh domif-setlink changes the link state for the running domain. To modify the domain's persistent configuration use the --config argument. Example 20.23. How to enable a guest virtual machine interface The following example shows determining the interface device of the rhel7 domain, then setting the link as down , and finally as up : 20.12.4. Listing the Link State of a Guest Virtual Machine's Virtual Interface The virsh domif-getlink domain interface-device command retrieves the specified interface device link state. To determine which interface devices are defined for the domain, use the virsh domiflist command and use either the Interface or MAC column as the interface device option. By default, virsh domif-getlink retrieves the link state for the running domain. To retrieve the domain's persistent configuration use the --config option . Example 20.24. How to display the link state of a guest virtual machine's interface The following example shows determining the interface device of the rhel7 domain, then determining its state as up , then changing the state to down , and then verifying the change was successful: 20.12.5. Setting Network Interface Bandwidth Parameters The virsh domiftune domain interface-device command either retrieves or sets the specified domain's interface bandwidth parameters. To determine which interface devices are defined for the domain, use the virsh domiflist command and use either the Interface or MAC column as the interface device option. 
The following format should be used: The --config , --live , and --current options are described in Section 20.43, "Setting Schedule Parameters" . If the --inbound or the --outbound option is not specified, virsh domiftune queries the specified network interface and displays the bandwidth settings. By specifying --inbound or --outbound , or both, and the average, peak, and burst values, virsh domiftune sets the bandwidth settings. At minimum the average value is required. In order to clear the bandwidth settings, provide 0 (zero). For a description of the average, peak, and burst values, see Section 20.27.6.2, "Attaching interface devices" . Example 20.25. How to set the guest virtual machine network interface parameters The following example sets eth0 parameters for the guest virtual machine named guest1 : # virsh domiftune guest1 eth0 outbound --live 20.12.6. Retrieving Memory Statistics The virsh dommemstat domain [<period in seconds>] [--config] [--live] [--current] command displays the memory statistics for a running guest virtual machine. Using the optional period switch requires a time period in seconds. Setting this option to a value larger than 0 will allow the balloon driver to return additional statistics which will be displayed by running subsequent dommemstat commands. Setting the period option to 0, stops the balloon driver collection but does not clear the statistics already in the balloon driver. You cannot use the --live , --config , or --current options without also setting the period option. If the --live option is specified, only the guest's running statistics will be collected. If the --config option is used, it will collect the statistics for a persistent guest, but only after the boot. If the --current option is used, it will collect the current statistics. Both the --live and --config options may be used but --current is exclusive. If no flag is specified, the guest's state will dictate the behavior of the statistics collection (running or not). Example 20.26. How to collect memory statistics for a running guest virtual machine The following example shows displaying the memory statistics in the rhel7 domain: 20.12.7. Displaying Errors on Block Devices The virsh domblkerror domain command lists all the block devices in the error state and the error detected on each of them. This command is best used after a virsh domstate command reports that a guest virtual machine is paused due to an I/O error. Example 20.27. How to display the block device errors for a virtual machine The following example displays the block device errors for the guest1 virtual machine: # virsh domblkerror guest1 20.12.8. Displaying the Block Device Size The virsh domblkinfo domain command lists the capacity, allocation, and physical block sizes for a specific block device in the virtual machine. Use the virsh domblklist command to list all block devices and then choose to display a specific block device by specifying either the Target or Source name from the virsh domblklist output after the domain name. Example 20.28. How to display the block device size In this example, you list block devices on the rhel7 virtual machine, and then display the block size for each of the devices. 20.12.9. Displaying the Block Devices Associated with a Guest Virtual Machine The virsh domblklist domain [--inactive] [--details] command displays a table of all block devices that are associated with the specified guest virtual machine. 
If --inactive is specified, the result will show the devices that are to be used at the boot and will not show those that are currently running in use by the running guest virtual machine. If --details is specified, the disk type and device value will be included in the table. The information displayed in this table can be used with other commands that require a block-device to be provided, such as virsh domblkinfo and virsh snapshot-create . The disk Target or Source contexts can also be used when generating the xmlfile context information for the virsh snapshot-create command. Example 20.29. How to display the block devices that are associated with a virtual machine The following example displays details about block devices associated with the rhel7 virtual machine. 20.12.10. Displaying Virtual Interfaces Associated with a Guest Virtual Machine The virsh domblklist domain command displays a table of all the virtual interfaces that are associated with the specified domain. The virsh domiflist command requires the name of the virtual machine (or domain ), and optionally can take the --inactive argument. The latter retrieves the inactive rather than the running configuration, which is retrieved with the default setting. If --inactive is specified, the result shows devices that are to be used at the boot, and does not show devices that are currently in use by the running guest. Virsh commands that require a MAC address of a virtual interface (such as detach-interface , domif-setlink , domif-getlink , domifstat , and domiftune ) accept the output displayed by this command. Example 20.30. How to display the virtual interfaces associated with a guest virtual machine The following example displays the virtual interfaces that are associated with the rhel7 virtual machine, and then displays the network interface statistics for the vnet0 device. | [
"virsh domblklist guest1 Target Source ------------------------------------------------ vda /VirtualMachines/guest1.img hdc - virsh domblkstat guest1 vda --human Device: vda number of read operations: 174670 number of bytes read: 3219440128 number of write operations: 23897 number of bytes written: 164849664 number of flush operations: 11577 total duration of reads (ns): 1005410244506 total duration of writes (ns): 1085306686457 total duration of flushes (ns): 340645193294",
"virsh domiflist guest1 Interface Type Source Model MAC ------------------------------------------------------- macvtap0 direct em1 rtl8139 12:34:00:0f:8a:4a virsh domifstat guest1 macvtap0 macvtap0 rx_bytes 51120 macvtap0 rx_packets 440 macvtap0 rx_errs 0 macvtap0 rx_drop 0 macvtap0 tx_bytes 231666 macvtap0 tx_packets 520 macvtap0 tx_errs 0 macvtap0 tx_drop 0",
"virsh domiflist rhel7 Interface Type Source Model MAC ------------------------------------------------------- vnet0 network default virtio 52:54:00:01:1d:d0 virsh domif-setlink rhel7 vnet0 down Device updated successfully virsh domif-setlink rhel7 52:54:00:01:1d:d0 up Device updated successfully",
"virsh domiflist rhel7 Interface Type Source Model MAC ------------------------------------------------------- vnet0 network default virtio 52:54:00:01:1d:d0 virsh domif-getlink rhel7 52:54:00:01:1d:d0 52:54:00:01:1d:d0 up virsh domif-setlink rhel7 vnet0 down Device updated successfully virsh domif-getlink rhel7 vnet0 vnet0 down",
"virsh domiftune domain interface [--inbound] [--outbound] [--config] [--live] [--current]",
"virsh dommemstat rhel7 actual 1048576 swap_in 0 swap_out 0 major_fault 2974 minor_fault 1272454 unused 246020 available 1011248 rss 865172",
"virsh domblklist rhel7 Target Source ------------------------------------------------ vda /home/vm-images/rhel7-os vdb /home/vm-images/rhel7-data virsh domblkinfo rhel7 vda Capacity: 10737418240 Allocation: 8211980288 Physical: 10737418240 virsh domblkinfo rhel7 /home/vm-images/rhel7-data Capacity: 104857600 Allocation: 104857600 Physical: 104857600",
"virsh domblklist rhel7 --details Type Device Target Source ------------------------------------------------ file disk vda /home/vm-images/rhel7-os file disk vdb /home/vm-images/rhel7-data",
"virsh domiflist rhel7 Interface Type Source Model MAC ------------------------------------------------------- vnet0 network default virtio 52:54:00:01:1d:d0 virsh domifstat rhel7 vnet0 vnet0 rx_bytes 55308 vnet0 rx_packets 969 vnet0 rx_errs 0 vnet0 rx_drop 0 vnet0 tx_bytes 14341 vnet0 tx_packets 148 vnet0 tx_errs 0 vnet0 tx_drop 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-statlists |
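Building on the virsh domiftune format shown in the preceding section, the following sketch sets, queries, and clears bandwidth limits on a running guest. The guest name guest1, the interface vnet0, and the average,peak,burst values (average and peak in KiB/s, burst in KiB) are placeholders chosen for illustration.

# Limit inbound and outbound traffic on the running guest only (--live)
virsh domiftune guest1 vnet0 --inbound 1000,2000,2048 --outbound 1000,2000,2048 --live

# Query the current bandwidth settings for the same interface
virsh domiftune guest1 vnet0

# Clear the inbound limit by providing 0
virsh domiftune guest1 vnet0 --inbound 0 --live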
Chapter 7. Installation configuration parameters for AWS | Chapter 7. Installation configuration parameters for AWS Before you deploy an OpenShift Container Platform cluster on AWS, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 7.1. Available installation configuration parameters for AWS The following tables specify the required, optional, and AWS-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 7.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. 
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . + Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough , or Manual . + Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 7.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 7.4. Optional AWS parameters Parameter Description Values The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . The size in GiB of the root volume. Integer, for example 500 . The type of the root volume. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . 
See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. The Input/Output Operations Per Second (IOPS) that is reserved for the root volume on control plane machines. Integer, for example 4000 . The size in GiB of the root volume for control plane machines. Integer, for example 500 . The type of the root volume for control plane machines. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . An Amazon Resource Name (ARN) for an existing IAM role in the account containing the specified hosted zone. The installation program and cluster operators will assume this role when performing operations on the hosted zone. This parameter should only be used if you are installing a cluster into a shared VPC. String, for example arn:aws:iam::1234567890:role/shared-vpc-role . 
The AWS service endpoint name and URL. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name and valid AWS service endpoint URL. A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. For clusters that use AWS Local Zones, you must add AWS Local Zone subnets to this list to ensure edge machine pool creation. Valid subnet IDs. The public IPv4 pool ID that is used to allocate Elastic IPs (EIPs) when publish is set to External . You must provision and advertise the pool in the same AWS account and region of the cluster. You must ensure that you have 2n + 1 IPv4 available in the pool where n is the total number of AWS zones used to deploy the Network Load Balancer (NLB) for API, NAT gateways, and bootstrap node. For more information about bring your own IP addresses (BYOIP) in AWS, see Onboard your BYOIP . A valid public IPv4 pool id Note BYOIP can be enabled only for customized installations that have no network restrictions. Prevents the S3 bucket from being deleted after completion of bootstrapping. true or false . The default value is false , which results in the S3 bucket being deleted. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"platform: aws: lbType:",
"publish:",
"sshKey:",
"compute: platform: aws: amiID:",
"compute: platform: aws: iamRole:",
"compute: platform: aws: rootVolume: iops:",
"compute: platform: aws: rootVolume: size:",
"compute: platform: aws: rootVolume: type:",
"compute: platform: aws: rootVolume: kmsKeyARN:",
"compute: platform: aws: type:",
"compute: platform: aws: zones:",
"compute: aws: region:",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"controlPlane: platform: aws: amiID:",
"controlPlane: platform: aws: iamRole:",
"controlPlane: platform: aws: rootVolume: iops:",
"controlPlane: platform: aws: rootVolume: size:",
"controlPlane: platform: aws: rootVolume: type:",
"controlPlane: platform: aws: rootVolume: kmsKeyARN:",
"controlPlane: platform: aws: type:",
"controlPlane: platform: aws: zones:",
"controlPlane: aws: region:",
"platform: aws: amiID:",
"platform: aws: hostedZone:",
"platform: aws: hostedZoneRole:",
"platform: aws: serviceEndpoints: - name: url:",
"platform: aws: userTags:",
"platform: aws: propagateUserTags:",
"platform: aws: subnets:",
"platform: aws: publicIpv4Pool:",
"platform: aws: preserveBootstrapIgnition:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_aws/installation-config-parameters-aws |
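To show how the required, network, and AWS-specific parameters described in this chapter fit together, the following is a minimal install-config.yaml sketch for AWS. The base domain, cluster name, region, instance types, and the truncated pullSecret and sshKey values are placeholders, not recommendations.

apiVersion: v1
baseDomain: example.com            # placeholder base domain
metadata:
  name: dev                        # placeholder cluster name
controlPlane:
  name: master
  platform:
    aws:
      type: m6i.xlarge             # placeholder control plane instance type
  replicas: 3
compute:
- name: worker
  platform:
    aws:
      type: m6i.xlarge             # placeholder compute instance type
  replicas: 3
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-east-1              # placeholder region
publish: External
pullSecret: '{"auths": ...}'       # truncated; use the pull secret from Red Hat OpenShift Cluster Manager
sshKey: ssh-ed25519 AAAA...        # truncated public key

Running openshift-install create cluster with this file in place applies these values; fields that are omitted fall back to the defaults listed in the tables above.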
Chapter 1. Implementing consistent network interface naming | Chapter 1. Implementing consistent network interface naming The udev device manager implements consistent device naming in Red Hat Enterprise Linux. The device manager supports different naming schemes and, by default, assigns fixed names based on firmware, topology, and location information. Without consistent device naming, the Linux kernel assigns names to network interfaces by combining a fixed prefix and an index. The index increases as the kernel initializes the network devices. For example, eth0 represents the first Ethernet device being probed on start-up. If you add another network interface controller to the system, the assignment of the kernel device names is no longer fixed because, after a reboot, the devices can initialize in a different order. In that case, the kernel can name the devices differently. To solve this problem, udev assigns consistent device names. This has the following advantages: Device names are stable across reboots. Device names stay fixed even if you add or remove hardware. Defective hardware can be seamlessly replaced. The network naming is stateless and does not require explicit configuration files. Warning Generally, Red Hat does not support systems where consistent device naming is disabled. For exceptions, see the Red Hat Knowledgebase solution Is it safe to set net.ifnames=0 . 1.1. How the udev device manager renames network interfaces To implement a consistent naming scheme for network interfaces, the udev device manager processes the following rule files in the listed order: Optional: /usr/lib/udev/rules.d/60-net.rules The /usr/lib/udev/rules.d/60-net.rules file defines that the deprecated /usr/lib/udev/rename_device helper utility searches for the HWADDR parameter in /etc/sysconfig/network-scripts/ifcfg-* files. If the value set in the variable matches the MAC address of an interface, the helper utility renames the interface to the name set in the DEVICE parameter of the ifcfg file. If the system uses only NetworkManager connection profiles in keyfile format, udev skips this step. Only on Dell systems: /usr/lib/udev/rules.d/71-biosdevname.rules This file exists only if the biosdevname package is installed, and the rules file defines that the biosdevname utility renames the interface according to its naming policy, if it was not renamed in the step. Note Install and use biosdevname only on Dell systems. /usr/lib/udev/rules.d/75-net-description.rules This file defines how udev examines the network interface and sets the properties in udev -internal variables. These variables are then processed in the step by the /usr/lib/udev/rules.d/80-net-setup-link.rules file. Some of the properties can be undefined. /usr/lib/udev/rules.d/80-net-setup-link.rules This file calls the net_setup_link builtin of the udev service, and udev renames the interface based on the order of the policies in the NamePolicy parameter in the /usr/lib/systemd/network/99-default.link file. For further details, see Network interface naming policies . If none of the policies applies, udev does not rename the interface. Additional resources Why are systemd network interface names different between major RHEL versions (Red Hat Knowledgebase) 1.2. Network interface naming policies By default, the udev device manager uses the /usr/lib/systemd/network/99-default.link file to determine which device naming policies to apply when it renames interfaces. 
The NamePolicy parameter in this file defines which policies udev uses and in which order: The following table describes the different actions of udev based on which policy matches first as specified by the NamePolicy parameter: Policy Description Example name kernel If the kernel indicates that a device name is predictable, udev does not rename this device. lo database This policy assigns names based on mappings in the udev hardware database. For details, see the hwdb(7) man page on your system. idrac onboard Device names incorporate firmware or BIOS-provided index numbers for onboard devices. eno1 slot Device names incorporate firmware or BIOS-provided PCI Express (PCIe) hot-plug slot-index numbers. ens1 path Device names incorporate the physical location of the connector of the hardware. enp1s0 mac Device names incorporate the MAC address. By default, Red Hat Enterprise Linux does not use this policy, but administrators can enable it. enx525400d5e0fb Additional resources How the udev device manager renames network interfaces systemd.link(5) man page on your system 1.3. Network interface naming schemes The udev device manager uses certain stable interface attributes that device drivers provide to generate consistent device names. If a new udev version changes how the service creates names for certain interfaces, Red Hat adds a new scheme version and documents the details in the systemd.net-naming-scheme(7) man page on your system. By default, Red Hat Enterprise Linux (RHEL) 8 uses the rhel-8.0 naming scheme, even if you install or update to a later minor version of RHEL. If you want to use a scheme other than the default, you can switch the network interface naming scheme . For further details about the naming schemes for different device types and platforms, see the systemd.net-naming-scheme(7) man page on your system. 1.4. Switching to a different network interface naming scheme By default, Red Hat Enterprise Linux (RHEL) 8 uses the rhel-8.0 naming scheme, even if you install or update to a later minor version of RHEL. While the default naming scheme fits in most scenarios, there might be reasons to switch to a different scheme version, for example: A new scheme can help to better identify a device if it adds additional attributes, such as a slot number, to an interface name. A new scheme can prevent udev from falling back to the kernel-assigned device names ( eth* ). This happens if the driver does not provide enough unique attributes for two or more interfaces to generate unique names for them. Prerequisites You have access to the console of the server. Procedure List the network interfaces: Record the MAC addresses of the interfaces. Optional: Display the ID_NET_NAMING_SCHEME property of a network interface to identify the naming scheme that RHEL currently uses: Note that the property is not available on the lo loopback device. Append the net.naming-scheme= <scheme> option to the command line of all installed kernels, for example: Reboot the system. Based on the MAC addresses you recorded, identify the new names of network interfaces that have changed due to the different naming scheme: After switching the scheme, udev names the device with MAC address 00:00:5e:00:53:1a eno1np0 in this example, whereas it was named eno1 before.
Identify which NetworkManager connection profile uses an interface with the old name: Set the connection.interface-name property in the connection profile to the new interface name: Reactivate the connection profile: Verification Identify the naming scheme that RHEL now uses by displaying the ID_NET_NAMING_SCHEME property of a network interface: Additional resources Network interface naming schemes 1.5. Determining a predictable RoCE device name on the IBM Z platform On Red Hat Enterprise Linux (RHEL) 8.7 and later, the udev device manager sets names for RoCE interfaces on IBM Z as follows: If the host enforces a unique identifier (UID) for a device, udev assigns a consistent device name that is based on the UID, for example eno <UID_in_decimal> . If the host does not enforce a UID for a device, the behavior depends on your settings: By default, udev uses unpredictable names for the device. If you set the net.naming-scheme=rhel-8.7 kernel command line option, udev assigns a consistent device name that is based on the function identifier (FID) of the device, for example ens <FID_in_decimal> . Manually configure a predictable device name for RoCE interfaces on IBM Z in the following cases: Your host runs RHEL 8.6 or earlier and enforces a UID for a device, and you plan to update to RHEL 8.7 or later. After an update to RHEL 8.7 or later, udev uses consistent interface names. However, if you used unpredictable device names before the update, NetworkManager connection profiles still use these names and fail to activate until you update the affected profiles. Your host runs RHEL 8.7 or later and does not enforce a UID, and you plan to upgrade to RHEL 9. Before you can use a udev rule or a systemd link file to rename an interface manually, you must determine a predictable device name. Prerequisites An RoCE controller is installed in the system. The sysfsutils package is installed. Procedure Display the available network devices, and note the names of the RoCE devices: Display the device path in the /sys/ file system: Use the path shown in the Device path field in the next steps. Display the value of the <device_path> /uid_id_unique file, for example: The displayed value indicates whether UID uniqueness is enforced or not, and you require this value in later steps. Determine a unique identifier: If UID uniqueness is enforced ( 1 ), display the UID stored in the <device_path> /uid file, for example: If UID uniqueness is not enforced ( 0 ), display the FID stored in the <device_path> /function_id file, for example: The outputs of the commands display the UID and FID values in hexadecimal. Convert the hexadecimal identifier to decimal, for example: To determine the predictable device name, append the identifier in decimal format to the corresponding prefix based on whether UID uniqueness is enforced or not: If UID uniqueness is enforced, append the identifier to the eno prefix, for example eno5122 . If UID uniqueness is not enforced, append the identifier to the ens prefix, for example ens5122 . Next steps Use one of the following methods to rename the interface to the predictable name: Configuring user-defined network interface names by using udev rules Configuring user-defined network interface names by using systemd link files Additional resources IBM documentation: Network interface names systemd.net-naming-scheme(7) man page on your system 1.6.
Customizing the prefix for Ethernet interfaces during installation If you do not want to use the default device-naming policy for Ethernet interfaces, you can set a custom device prefix during the Red Hat Enterprise Linux (RHEL) installation. Important Red Hat supports systems with customized Ethernet prefixes only if you set the prefix during the RHEL installation. Using the prefixdevname utility on already deployed systems is not supported. If you set a device prefix during the installation, the udev service uses the <prefix><index> format for Ethernet interfaces after the installation. For example, if you set the prefix net , the service assigns the names net0 , net1 , and so on to the Ethernet interfaces. The udev service appends the index to the custom prefix, and preserves the index values of known Ethernet interfaces. If you add an interface, udev assigns an index value that is one greater than the previously-assigned index value to the new interface. Prerequisites The prefix consists of ASCII characters. The prefix is an alphanumeric string. The prefix is shorter than 16 characters. The prefix does not conflict with any other well-known network interface prefix, such as eth , eno , ens , and em . Procedure Boot the Red Hat Enterprise Linux installation media. In the boot manager, follow these steps: Select the Install Red Hat Enterprise Linux <version> entry. Press Tab to edit the entry. Append net.ifnames.prefix= <prefix> to the kernel options. Press Enter to start the installation program. Install Red Hat Enterprise Linux. Verification To verify the interface names, display the network interfaces: Additional resources Interactively installing RHEL from installation media 1.7. Configuring user-defined network interface names by using udev rules You can use udev rules to implement custom network interface names that reflect your organization's requirements. Procedure Identify the network interface that you want to rename: Record the MAC address of the interface. Display the device type ID of the interface: Create the /etc/udev/rules.d/70-persistent-net.rules file, and add a rule for each interface that you want to rename: Important Use only 70-persistent-net.rules as a file name if you require consistent device names during the boot process. The dracut utility adds a file with this name to the initrd image if you regenerate the RAM disk image. For example, use the following rule to rename the interface with MAC address 00:00:5e:00:53:1a to provider0 : Optional: Regenerate the initrd RAM disk image: You require this step only if you need networking capabilities in the RAM disk. For example, this is the case if the root file system is stored on a network device, such as iSCSI. Identify which NetworkManager connection profile uses the interface that you want to rename: Unset the connection.interface-name property in the connection profile: Temporarily, configure the connection profile to match both the new and the previous interface name: Reboot the system: Verify that the device with the MAC address that you specified in the udev rule has been renamed to provider0 : Configure the connection profile to match only the new interface name: You have now removed the old interface name from the connection profile. Reactivate the connection profile: Additional resources udev(7) man page on your system 1.8.
Configuring user-defined network interface names by using systemd link files You can use systemd link files to implement custom network interface names that reflect your organization's requirements. Prerequisites You must meet one of these conditions: NetworkManager does not manage this interface, or the corresponding connection profile uses the keyfile format . Procedure Identify the network interface that you want to rename: Record the MAC address of the interface. If it does not already exist, create the /etc/systemd/network/ directory: For each interface that you want to rename, create a 70-*.link file in the /etc/systemd/network/ directory with the following content: Important Use a file name with a 70- prefix to keep the file names consistent with the udev rules-based solution. For example, create the /etc/systemd/network/70-provider0.link file with the following content to rename the interface with MAC address 00:00:5e:00:53:1a to provider0 : Optional: Regenerate the initrd RAM disk image: You require this step only if you need networking capabilities in the RAM disk. For example, this is the case if the root file system is stored on a network device, such as iSCSI. Identify which NetworkManager connection profile uses the interface that you want to rename: Unset the connection.interface-name property in the connection profile: Temporarily, configure the connection profile to match both the new and the previous interface name: Reboot the system: Verify that the device with the MAC address that you specified in the link file has been renamed to provider0 : Configure the connection profile to match only the new interface name: You have now removed the old interface name from the connection profile. Reactivate the connection profile. Additional resources systemd.link(5) man page on your system 1.9. Assigning alternative names to a network interface by using systemd link files With alternative interface naming, the kernel can assign additional names to network interfaces. You can use these alternative names in the same way as the normal interface names in commands that require a network interface name. Prerequisites You must use ASCII characters for the alternative name. The alternative name must be shorter than 128 characters. Procedure Display the network interface names and their MAC addresses: Record the MAC address of the interface to which you want to assign an alternative name. If it does not already exist, create the /etc/systemd/network/ directory: For each interface that must have an alternative name, create a copy of the /usr/lib/systemd/network/99-default.link file with a unique name and .link suffix in the /etc/systemd/network/ directory, for example: Modify the file you created in the previous step. Rewrite the [Match] section as follows, and append the AlternativeName entries to the [Link] section: For example, create the /etc/systemd/network/70-altname.link file with the following content to assign provider as an alternative name to the interface with MAC address 00:00:5e:00:53:1a : Regenerate the initrd RAM disk image: Reboot the system: Verification Use the alternative interface name. For example, display the IP address settings of the device with the alternative name provider : Additional resources What is AlternativeNamesPolicy in Interface naming scheme? (Red Hat Knowledgebase) | [
"NamePolicy=kernel database onboard slot path",
"ip link show 2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff",
"udevadm info --query=property --property=ID_NET_NAMING_SCHEME /sys/class/net/eno1' ID_NET_NAMING_SCHEME=rhel-8.0",
"grubby --update-kernel=ALL --args=net.naming-scheme= rhel-8.4",
"reboot",
"ip link show 2: eno1np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff",
"nmcli -f device,name connection show DEVICE NAME eno1 example_profile",
"nmcli connection modify example_profile connection.interface-name \"eno1np0\"",
"nmcli connection up example_profile",
"udevadm info --query=property --property=ID_NET_NAMING_SCHEME /sys/class/net/eno1np0' ID_NET_NAMING_SCHEME=_rhel-8.4",
"ip link show 2: enP5165p0s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000",
"systool -c net -p Class = \"net\" Class Device = \"enP5165p0s0\" Class Device path = \"/sys/devices/pci142d:00/142d:00:00.0/net/enP5165p0s0\" Device = \"142d:00:00.0\" Device path = \"/sys/devices/pci142d:00/142d:00:00.0\"",
"cat /sys/devices/pci142d:00/142d:00:00.0/uid_id_unique",
"cat /sys/devices/pci142d:00/142d:00:00.0/uid",
"cat /sys/devices/pci142d:00/142d:00:00.0/function_id",
"printf \"%d\\n\" 0x00001402 5122",
"ip link show 2: net0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff",
"ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff",
"cat /sys/class/net/enp1s0/type 1",
"SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\" <MAC_address> \",ATTR{type}==\" <device_type_id> \",NAME=\" <new_interface_name> \"",
"SUBSYSTEM==\"net\",ACTION==\"add\",ATTR{address}==\" 00:00:5e:00:53:1a \",ATTR{type}==\" 1 \",NAME=\" provider0 \"",
"dracut -f",
"nmcli -f device,name connection show DEVICE NAME enp1s0 example_profile",
"nmcli connection modify example_profile connection.interface-name \"\"",
"nmcli connection modify example_profile match.interface-name \"provider0 enp1s0\"",
"reboot",
"ip link show provider0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff",
"nmcli connection modify example_profile match.interface-name \"provider0\"",
"nmcli connection up example_profile",
"ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff",
"mkdir -p /etc/systemd/network/",
"[Match] MACAddress= <MAC_address> [Link] Name= <new_interface_name>",
"[Match] MACAddress=00:00:5e:00:53:1a [Link] Name=provider0",
"dracut -f",
"nmcli -f device,name connection show DEVICE NAME enp1s0 example_profile",
"nmcli connection modify example_profile connection.interface-name \"\"",
"nmcli connection modify example_profile match.interface-name \"provider0 enp1s0\"",
"reboot",
"ip link show provider0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff",
"nmcli connection modify example_profile match.interface-name \"provider0\"",
"nmcli connection up example_profile",
"ip link show enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff",
"mkdir -p /etc/systemd/network/",
"cp /usr/lib/systemd/network/99-default.link /etc/systemd/network/98-lan.link",
"[Match] MACAddress= <MAC_address> [Link] AlternativeName= <alternative_interface_name_1> AlternativeName= <alternative_interface_name_2> AlternativeName= <alternative_interface_name_n>",
"[Match] MACAddress=00:00:5e:00:53:1a [Link] NamePolicy=kernel database onboard slot path AlternativeNamesPolicy=database onboard slot path MACAddressPolicy=persistent AlternativeName=provider",
"dracut -f",
"reboot",
"ip address show provider 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 00:00:5e:00:53:1a brd ff:ff:ff:ff:ff:ff altname provider"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/consistent-network-interface-device-naming_configuring-and-managing-networking |
17.3.3. Trouble with Partition Tables | 17.3.3. Trouble with Partition Tables If you receive an error after the Disk Partitioning Setup ( Section 16.15, "Disk Partitioning Setup" ) phase of the installation saying something similar to The partition table on device hda was unreadable. To create new partitions it must be initialized, causing the loss of ALL DATA on this drive. you may not have a partition table on that drive or the partition table on the drive may not be recognizable by the partitioning software used in the installation program. No matter what type of installation you are performing, backups of the existing data on your systems should always be made. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-trouble-part-tables-ppc |
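Before initializing the drive and losing its data, you can inspect it non-destructively from a shell prompt to confirm whether a partition table is really missing or merely unrecognized. The device name below is only an example; substitute the drive reported in the error message.

fdisk -l /dev/hda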
Technical Notes | Technical Notes Red Hat Virtualization 4.4 Technical notes for Red Hat Virtualization 4.4 and associated packages Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract The Technical Notes document provides information about changes made between release 4.3 and release 4.4 of Red Hat Virtualization. This document is intended to supplement the information contained in the text of the relevant errata advisories available through the Content Delivery Network. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_notes/index |
Chapter 3. Configuring Visual Studio Code to use Dependency Analytics | Chapter 3. Configuring Visual Studio Code to use Dependency Analytics You can gain access to Red Hat's Trusted Profile Analyzer service by using the Dependency Analytics extension for Microsoft's Visual Studio Code (VS Code) editor application. With this extension you get access to the latest open source vulnerability information, and insights about your application's dependent packages. The Red Hat Dependency Analytics extension uses the following data sources for the most up-to-date vulnerability information available: The ONGuard service integrates the Open Source Vulnerability (OSV) and the National Vulnerability Database (NVD) data sources. When a set of packages is given to the ONGuard service, a query to OSV retrieves the associated vulnerability information, followed by a query to NVD for public Common Vulnerabilities and Exposures (CVE) information. Dependency Analytics supports the following programming languages: Maven Node Python Go Important By default, Visual Studio Code executes binaries found in your system's PATH environment directly in a terminal. You can configure Visual Studio Code to look somewhere else to run the necessary binaries. You can configure this by accessing the extension settings . Click the Workspace tab, search for the word executable , and specify the absolute path to the binary file you want to use for Maven, Node, Python, or Go. Note The Dependency Analytics extension is an online service maintained by Red Hat. Dependency Analytics only accesses your manifest files to analyze your application dependencies before displaying the results. Prerequisites Install Visual Studio Code on your workstation. For Maven projects, analyzing a pom.xml file, you must have the mvn binary in your system's PATH environment. For Node projects, analyzing a package.json file, you must have the npm binary in your system's PATH environment. For Go projects, analyzing a go.mod file, you must have the go binary in your system's PATH environment. For Python projects, analyzing a requirements.txt file, you must have the python3/pip3 or python/pip binaries in your system's PATH environment. Also, the Python application needs to be in VS Code's interpreter path . Procedure Open the Visual Studio Code application. From the file menu, click View , and click Extensions . Search the Marketplace for Red Hat Dependency Analytics . Click the Install button to install the extension. Wait for the installation to finish. To start scanning your application for security vulnerabilities, and view the vulnerability report, you can do one of the following: Open a manifest file, hover over a dependency marked by the inline Component Analysis, indicated by the wavy-red line under a dependency name, click Quick Fix , and click Detailed Vulnerability Report . Open a manifest file, and click the pie chart icon. Right click on a manifest file in the Explorer view, and click Red Hat Dependency Analytics Report... . From the vulnerability pop-up alert message, click Open detailed vulnerability report . Additional resources Red Hat Dependency Analytics Visual Studio marketplace page . The GitHub project . | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/quick_start_guide/configuring-visual-studio-code-to-use-dependency-analytics_qsg
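As a quick pre-flight check before installing the extension described above, you can confirm that the binaries required for your project type are resolvable from your PATH. This is only an illustrative sketch; the paths in the output will vary on your workstation, and you only need the tools for the ecosystems you actually use.

command -v mvn npm go python3 pip3
/usr/bin/mvn
/usr/bin/npm
/usr/local/bin/go
/usr/bin/python3
/usr/bin/pip3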
RPM Packaging Guide | RPM Packaging Guide Red Hat Enterprise Linux 7 Basic and advanced software packaging scenarios using the RPM package manager Customer Content Services [email protected] Marie Dolezelova Red Hat Customer Content Services [email protected] Maxim Svistunov Red Hat Customer Content Services Adam Miller Red Hat Adam Kvitek Red Hat Customer Content Services Petr Kovar Red Hat Customer Content Services Miroslav Suchy Red Hat | [
"Name: hello-world Version: 1 Release: 1 Summary: Most simple RPM package License: FIXME %description This is my first RPM package, which does nothing. %prep we have no source, so nothing here %build cat > hello-world.sh <<EOF #!/usr/bin/bash echo Hello world EOF %install mkdir -p %{buildroot}/usr/bin/ install -m 755 hello-world.sh %{buildroot}/usr/bin/hello-world.sh %files /usr/bin/hello-world.sh %changelog let's skip this for now",
"rpmdev-setuptree rpmbuild -ba hello-world.spec",
"... [SNIP] Wrote: /home/<username>/rpmbuild/SRPMS/hello-world-1-1.src.rpm Wrote: /home/<username>/rpmbuild/RPMS/x86_64/hello-world-1-1.x86_64.rpm Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.wgaJzv + umask 022 + cd /home/<username>/rpmbuild/BUILD + /usr/bin/rm -rf /home/<username>/rpmbuild/BUILDROOT/hello-world-1-1.x86_64 + exit 0",
"#!/bin/bash printf \"Hello World\\n\"",
"#!/usr/bin/python3 print(\"Hello World\")",
"#include <stdio.h> int main(void) { printf(\"Hello World\\n\"); return 0; }",
"#include <stdio.h> int main(void) { printf(\"Hello World\\n\"); return 0; }",
"gcc -g -o cello cello.c",
"./cello Hello World",
"cello: gcc -g -o cello cello.c clean: rm cello",
"make make: 'cello' is up to date.",
"make clean rm cello make gcc -g -o cello cello.c",
"make make: 'cello' is up to date.",
"./cello Hello World",
"#!/usr/bin/python3 print(\"Hello World\")",
"python -m compileall pello.py file pello.pyc pello.pyc: python 2.7 byte-compiled",
"python pello.pyc Hello World",
"#!/bin/bash printf \"Hello World\\n\"",
"chmod +x bello ./bello Hello World",
"cp -p cello.c cello.c.orig",
"#include <stdio.h> int main(void) { printf(\"Hello World from my very first patch!\\n\"); return 0; }",
"diff -Naur cello.c.orig cello.c --- cello.c.orig 2016-05-26 17:21:30.478523360 -0500 + cello.c 2016-05-27 14:53:20.668588245 -0500 @@ -1,6 +1,6 @@ #include<stdio.h> int main(void){ - printf(\"Hello World!\\n\"); + printf(\"Hello World from my very first patch!\\n\"); return 0; } \\ No newline at end of file",
"diff -Naur cello.c.orig cello.c > cello-output-first-patch.patch",
"cp cello.c.orig cello.c",
"patch < cello-output-first-patch.patch patching file cello.c",
"cat cello.c #include<stdio.h> int main(void){ printf(\"Hello World from my very first patch!\\n\"); return 1; }",
"make clean rm cello make gcc -g -o cello cello.c ./cello Hello World from my very first patch!",
"sudo install -m 0755 bello /usr/bin/bello",
"cd ~ bello Hello World",
"cello: gcc -g -o cello cello.c clean: rm cello install: mkdir -p USD(DESTDIR)/usr/bin install -m 0755 cello USD(DESTDIR)/usr/bin/cello",
"make gcc -g -o cello cello.c sudo make install install -m 0755 cello /usr/bin/cello",
"cd ~ cello Hello World",
"cat /tmp/LICENSE This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/ .",
"mkdir /tmp/bello-0.1 mv ~/bello /tmp/bello-0.1/ cp /tmp/LICENSE /tmp/bello-0.1/",
"cd /tmp/ tar -cvzf bello-0.1.tar.gz bello-0.1 bello-0.1/ bello-0.1/LICENSE bello-0.1/bello mv /tmp/bello-0.1.tar.gz ~/rpmbuild/SOURCES/",
"mkdir /tmp/pello-0.1.2 mv ~/pello.py /tmp/pello-0.1.2/ cp /tmp/LICENSE /tmp/pello-0.1.2/",
"cd /tmp/ tar -cvzf pello-0.1.2.tar.gz pello-0.1.2 pello-0.1.2/ pello-0.1.2/LICENSE pello-0.1.2/pello.py mv /tmp/pello-0.1.2.tar.gz ~/rpmbuild/SOURCES/",
"mkdir /tmp/cello-1.0 mv ~/cello.c /tmp/cello-1.0/ mv ~/Makefile /tmp/cello-1.0/ cp /tmp/LICENSE /tmp/cello-1.0/",
"cd /tmp/ tar -cvzf cello-1.0.tar.gz cello-1.0 cello-1.0/ cello-1.0/Makefile cello-1.0/cello.c cello-1.0/LICENSE mv /tmp/cello-1.0.tar.gz ~/rpmbuild/SOURCES/",
"mv ~/cello-output-first-patch.patch ~/rpmbuild/SOURCES/",
"yum install rpmdevtools",
"rpm -ql rpmdevtools | grep bin",
"yum install rpmdevtools",
"rpmdev-setuptree tree ~/rpmbuild/ /home/<username>/rpmbuild/ |-- BUILD |-- RPMS |-- SOURCES |-- SPECS `-- SRPMS 5 directories, 0 files",
"rpm -q bash bash-4.2.46-34.el7.x86_64",
"rpm --eval %{_MACRO}",
"rpm --eval %{_bindir} /usr/bin rpm --eval %{_libexecdir} /usr/libexec",
"On a RHEL 8.x machine rpm --eval %{?dist} .el8",
"cd ~/rpmbuild/SPECS rpmdev-newspec bello bello.spec created; type minimal, rpm version >= 4.11. rpmdev-newspec cello cello.spec created; type minimal, rpm version >= 4.11. rpmdev-newspec pello pello.spec created; type minimal, rpm version >= 4.11.",
"Name: bello Version: 0.1 Release: 1%{?dist} Summary: Hello World example implemented in bash script License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz Requires: bash BuildArch: noarch %description The long-tail description for our Hello World Example implemented in bash script. %prep %setup -q %build %install mkdir -p %{buildroot}/%{_bindir} install -m 0755 %{name} %{buildroot}/%{_bindir}/%{name} %files %license LICENSE %{_bindir}/%{name} %changelog * Tue May 31 2016 Adam Miller < [email protected] > - 0.1-1 - First bello package - Example second item in the changelog for version-release 0.1-1",
"Name: pello Version: 0.1.1 Release: 1%{?dist} Summary: Hello World example implemented in Python License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz BuildRequires: python Requires: python Requires: bash BuildArch: noarch %description The long-tail description for our Hello World Example implemented in Python. %prep %setup -q %build python -m compileall %{name}.py %install mkdir -p %{buildroot}/%{_bindir} mkdir -p %{buildroot}/usr/lib/%{name} cat > %{buildroot}/%{_bindir}/%{name} <<-EOF #!/bin/bash /usr/bin/python /usr/lib/%{name}/%{name}.pyc EOF chmod 0755 %{buildroot}/%{_bindir}/%{name} install -m 0644 %{name}.py* %{buildroot}/usr/lib/%{name}/ %files %license LICENSE %dir /usr/lib/%{name}/ %{_bindir}/%{name} /usr/lib/%{name}/%{name}.py* %changelog * Tue May 31 2016 Adam Miller < [email protected] > - 0.1.1-1 - First pello package",
"Name: cello Version: 1.0 Release: 1%{?dist} Summary: Hello World example implemented in C License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz Patch0: cello-output-first-patch.patch BuildRequires: gcc BuildRequires: make %description The long-tail description for our Hello World Example implemented in C. %prep %setup -q %patch0 %build make %{?_smp_mflags} %install %make_install %files %license LICENSE %{_bindir}/%{name} %changelog * Tue May 31 2016 Adam Miller < [email protected] > - 1.0-1 - First cello package",
"rpmbuild -bs SPECFILE",
"cd ~/rpmbuild/SPECS/ 8USD rpmbuild -bs bello.spec Wrote: /home/<username>/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm rpmbuild -bs pello.spec Wrote: /home/<username>/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm rpmbuild -bs cello.spec Wrote: /home/<username>/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm",
"rpmbuild --rebuild ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm [output truncated] rpmbuild --rebuild ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm [output truncated] rpmbuild --rebuild ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm [output truncated]",
"rpm -Uvh ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm Updating / installing... 1:bello-0.1-1.el8 [100%] rpm -Uvh ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm Updating / installing... ... 1:pello-0.1.2-1.el8 [100%] rpm -Uvh ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm Updating / installing... ... 1:cello-1.0-1.el8 [100%]",
"rpmbuild -bb ~/rpmbuild/SPECS/bello.spec rpmbuild -bb ~/rpmbuild/SPECS/pello.spec rpmbuild -bb ~/rpmbuild/SPECS/cello.spec",
"rpmbuild {-ra|-rb|-rp|-rc|-ri|-rl|-rs} [rpmbuild-options] SOURCEPACKAGE",
"rpmlint bello.spec bello.spec: W: invalid-url Source0: https://www.example.com/bello/releases/bello-0.1.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 0 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm bello.src: W: invalid-url URL: https://www.example.com/bello HTTP Error 404: Not Found bello.src: W: invalid-url Source0: https://www.example.com/bello/releases/bello-0.1.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 0 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/noarch/bello-0.1-1.el8.noarch.rpm bello.noarch: W: invalid-url URL: https://www.example.com/bello HTTP Error 404: Not Found bello.noarch: W: no-documentation bello.noarch: W: no-manual-page-for-binary bello 1 packages and 0 specfiles checked; 0 errors, 3 warnings.",
"rpmlint pello.spec pello.spec:30: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name} pello.spec:34: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.pyc pello.spec:39: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name}/ pello.spec:43: E: hardcoded-library-path in /usr/lib/%{name}/ pello.spec:45: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.py* pello.spec: W: invalid-url Source0: https://www.example.com/pello/releases/pello-0.1.2.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 5 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm pello.src: W: invalid-url URL: https://www.example.com/pello HTTP Error 404: Not Found pello.src:30: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name} pello.src:34: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.pyc pello.src:39: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name}/ pello.src:43: E: hardcoded-library-path in /usr/lib/%{name}/ pello.src:45: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.py* pello.src: W: invalid-url Source0: https://www.example.com/pello/releases/pello-0.1.2.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 5 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/noarch/pello-0.1.2-1.el8.noarch.rpm pello.noarch: W: invalid-url URL: https://www.example.com/pello HTTP Error 404: Not Found pello.noarch: W: only-non-binary-in-usr-lib pello.noarch: W: no-documentation pello.noarch: E: non-executable-script /usr/lib/pello/pello.py 0644L /usr/bin/env pello.noarch: W: no-manual-page-for-binary pello 1 packages and 0 specfiles checked; 1 errors, 4 warnings.",
"rpmlint ~/rpmbuild/SPECS/cello.spec /home/<username>/rpmbuild/SPECS/cello.spec: W: invalid-url Source0: https://www.example.com/cello/releases/cello-1.0.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 0 errors, 1 warnings.",
"rpmlint ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm cello.src: W: invalid-url URL: https://www.example.com/cello HTTP Error 404: Not Found cello.src: W: invalid-url Source0: https://www.example.com/cello/releases/cello-1.0.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 0 errors, 2 warnings.",
"rpmlint ~/rpmbuild/RPMS/x86_64/cello-1.0-1.el8.x86_64.rpm cello.x86_64: W: invalid-url URL: https://www.example.com/cello HTTP Error 404: Not Found cello.x86_64: W: no-documentation cello.x86_64: W: no-manual-page-for-binary cello 1 packages and 0 specfiles checked; 0 errors, 3 warnings.",
"# gpg --gen-key",
"# gpg --list-keys",
"# gpg --export -a '<Key_name>' > RPM-GPG-KEY-pmanager",
"# rpm --import RPM-GPG-KEY-pmanager",
"rpm --addsign blather-7.9-1.x86_64.rpm",
"rpm --checksig blather-7.9-1.x86_64.rpm blather-7.9-1.x86_64.rpm: size pgp pgp md5 OK",
"rpm --resign blather-7.9-1.x86_64.rpm",
"rpm --resign b*.rpm",
"rpmbuild blather-7.9.spec",
"rpmsign --addsign blather-7.9-1.x86_64.rpm",
"rpm --checksig blather-7.9-1.x86_64.rpm blather-7.9-1.x86_64.rpm: size pgp md5 OK",
"rpmbuild -ba --sign b*.spec",
"%global <name>[(opts)] <body>",
"Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.DhddsG",
"cd '/builddir/build/BUILD' rm -rf 'cello-1.0' /usr/bin/gzip -dc '/builddir/build/SOURCES/cello-1.0.tar.gz' | /usr/bin/tar -xof - STATUS=USD? if [ USDSTATUS -ne 0 ]; then exit USDSTATUS fi cd 'cello-1.0' /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .",
"Name: cello Source0: https://example.com/%{name}/release/hello-%{version}.tar.gz ... %prep %setup -n hello",
"/usr/bin/mkdir -p cello-1.0 cd 'cello-1.0'",
"rm -rf 'cello-1.0'",
"/usr/bin/gzip -dc '/builddir/build/SOURCES/cello-1.0.tar.gz' | /usr/bin/tar -xvvof -",
"Source0: https://example.com/%{name}/release/%{name}-%{version}.tar.gz Source1: examples.tar.gz ... %prep %setup -a 1",
"Source0: https://example.com/%{name}/release/%{name}-%{version}.tar.gz Source1: %{name}-%{version}-examples.tar.gz ... %prep %setup -b 1",
"--showrc",
"-ql rpm",
"--showrc",
"--eval %{_MACRO}",
"%_topdir /opt/some/working/directory/rpmbuild",
"%_smp_mflags -l3",
"rpm --nopretrans",
"rpm --showrc | grep systemd -14: transaction_systemd_inhibit %{ plugindir}/systemd_inhibit.so -14: _journalcatalogdir /usr/lib/systemd/catalog -14: _presetdir /usr/lib/systemd/system-preset -14: _unitdir /usr/lib/systemd/system -14: _userunitdir /usr/lib/systemd/user /usr/lib/systemd/systemd-binfmt %{? } >/dev/null 2>&1 || : /usr/lib/systemd/systemd-sysctl %{? } >/dev/null 2>&1 || : -14: systemd_post -14: systemd_postun -14: systemd_postun_with_restart -14: systemd_preun -14: systemd_requires Requires(post): systemd Requires(preun): systemd Requires(postun): systemd -14: systemd_user_post %systemd_post --user --global %{? } -14: systemd_user_postun %{nil} -14: systemd_user_postun_with_restart %{nil} -14: systemd_user_preun systemd-sysusers %{? } >/dev/null 2>&1 || : echo %{? } | systemd-sysusers - >/dev/null 2>&1 || : systemd-tmpfiles --create %{? } >/dev/null 2>&1 || : rpm --eval %{systemd_post} if [ USD1 -eq 1 ] ; then # Initial installation systemctl preset >/dev/null 2>&1 || : fi rpm --eval %{systemd_postun} systemctl daemon-reload >/dev/null 2>&1 || : rpm --eval %{systemd_preun} if [ USD1 -eq 0 ] ; then # Package removal, not upgrade systemctl --no-reload disable > /dev/null 2>&1 || : systemctl stop > /dev/null 2>&1 || : fi",
"all-%pretrans ... any-%triggerprein (%triggerprein from other packages set off by new install) new-%triggerprein new-%pre for new version of package being installed ... (all new files are installed) new-%post for new version of package being installed any-%triggerin (%triggerin from other packages set off by new install) new-%triggerin old-%triggerun any-%triggerun (%triggerun from other packages set off by old uninstall) old-%preun for old version of package being removed ... (all old files are removed) old-%postun for old version of package being removed old-%triggerpostun any-%triggerpostun (%triggerpostun from other packages set off by old un install) ... all-%posttrans",
"install -m 0644 %{name}.py* %{buildroot}/usr/lib/%{name}/",
"%post -p /usr/bin/python3 print(\"This is {} code\".format(\"python\"))",
"yum install /home/<username>/rpmbuild/RPMS/noarch/pello-0.1.2-1.el8.noarch.rpm",
"Installing : pello-0.1.2-1.el8.noarch 1/1 Running scriptlet: pello-0.1.2-1.el8.noarch 1/1 This is python code",
"%post -p /usr/bin/python3",
"%post -p <lua>",
"%if expression ... %endif",
"%if expression ... %else ... %endif",
"%if 0%{?rhel} == 8 sed -i '/AS_FUNCTION_DESCRIBE/ s/^/ /' configure.in sed -i '/AS_FUNCTION_DESCRIBE/ s/^/ /' acinclude.m4 %endif",
"%define ruby_archive %{name}-%{ruby_version} %if 0%{?milestone:1}%{?revision:1} != 0 %define ruby_archive %{ruby_archive}-%{?milestone}%{?!milestone:%{?revision:r%{revision}}} %endif",
"%ifarch i386 sparc ... %endif",
"%ifnarch alpha ... %endif",
"%ifos linux ... %endif"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html-single/rpm_packaging_guide/index |
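After building the example packages in the guide above, it can be helpful to inspect a resulting binary RPM before installing it. The following commands are a minimal sketch that assumes the cello package built earlier; rpm -qip prints the package header and rpm -qlp lists the packaged files, and the abbreviated output shown for the second command depends on your build environment.

rpm -qip ~/rpmbuild/RPMS/x86_64/cello-1.0-1.el8.x86_64.rpm
rpm -qlp ~/rpmbuild/RPMS/x86_64/cello-1.0-1.el8.x86_64.rpm
/usr/bin/cello
/usr/share/licenses/cello/LICENSE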
10.5. Using KVM virtio Drivers for New Devices | 10.5. Using KVM virtio Drivers for New Devices This procedure covers creating new devices using the KVM virtio drivers with virt-manager . Alternatively, the virsh attach-disk or virsh attach-interface commands can be used to attach devices using the virtio drivers. Important Ensure the drivers have been installed on the Windows guest before proceeding to install new devices. If the drivers are unavailable the device will not be recognized and will not work. Procedure 10.5. Adding a storage device using the virtio storage driver Open the guest virtual machine by double clicking on the name of the guest in virt-manager . Open the Show virtual hardware details tab by clicking the lightbulb button. Figure 10.27. The Show virtual hardware details tab In the Show virtual hardware details tab, click on the Add Hardware button. Select hardware type Select Storage as the Hardware type . Figure 10.28. The Add new virtual hardware wizard Select the storage device and driver Create a new disk image or select a storage pool volume. Set the Device type to Virtio disk to use the virtio drivers. Figure 10.29. The Add new virtual hardware wizard Click Finish to complete the procedure. Procedure 10.6. Adding a network device using the virtio network driver Open the guest virtual machine by double clicking on the name of the guest in virt-manager . Open the Show virtual hardware details tab by clicking the lightbulb button. Figure 10.30. The Show virtual hardware details tab In the Show virtual hardware details tab, click on the Add Hardware button. Select hardware type Select Network as the Hardware type . Figure 10.31. The Add new virtual hardware wizard Select the network device and driver Set the Device model to virtio to use the virtio drivers. Choose the desired Host device . Figure 10.32. The Add new virtual hardware wizard Click Finish to complete the procedure. Once all new devices are added, reboot the virtual machine. Windows virtual machines may not recognize the devices until the guest is rebooted. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/sect-virtualization_host_configuration_and_guest_installation_guide-para_virtualized_drivers-installing_the_kvm_windows_para_virtualized_drivers-using_kvm_para_virtualized_drivers_for_new_devices |
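If you prefer the command line over virt-manager, the virsh commands mentioned at the beginning of this section can attach the same virtio devices. This is a minimal sketch that assumes a guest named guest1 and an existing disk image at /var/lib/libvirt/images/data.img; adjust the guest name, image path, and target device for your environment.

virsh attach-disk guest1 /var/lib/libvirt/images/data.img vdb --targetbus virtio --persistent
virsh attach-interface guest1 network default --model virtio --persistent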
2.2.8.2. NFS and Sendmail | 2.2.8.2. NFS and Sendmail Never put the mail spool directory, /var/spool/mail/ , on an NFS shared volume. Because NFSv2 and NFSv3 do not maintain control over user and group IDs, two or more users can have the same UID, and receive and read each other's mail. Note With NFSv4 using Kerberos, this is not the case, since the SECRPC_GSS kernel module does not utilize UID-based authentication. However, it is still considered good practice not to put the mail spool directory on NFS shared volumes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_sendmail-nfs_and_sendmail |
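A quick way to confirm that the mail spool is not served over NFS is to check which file system backs the directory; if the Type column shows nfs, the spool is on a shared volume. The output below is illustrative only.

df -T /var/spool/mail
Filesystem     Type 1K-blocks    Used Available Use% Mounted on
/dev/sda2      ext4  51475068 9589520  39247548  20% /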
31.5. Hot Rod Cross Site Cluster Failover | 31.5. Hot Rod Cross Site Cluster Failover Besides in-cluster failover, Hot Rod clients can fail over to different clusters, each representing independent sites. Hot Rod Cross Site cluster failover is available in both automatic and manual modes. Automatic Cross Site Cluster Failover If the main/primary cluster nodes are unavailable, the client application checks for alternatively defined clusters and will attempt to fail over to them. Upon successful failover, the client will remain connected to the alternative cluster until it becomes unavailable. After that, the client will try to fail over to other defined clusters and finally switch over to the main/primary cluster with the original server settings if the connectivity is restored. To configure an alternative cluster in the Hot Rod client, provide details of at least one host/port pair for each of the clusters configured as shown in the following example. Example 31.5. Configure Alternate Cluster Note Regardless of the cluster definitions, the initial server(s) configuration must be provided unless the initial servers can be resolved using the default server host and port details. Manual Cross Site Cluster Failover For manual site cluster switchover, call RemoteCacheManager's switchToCluster(clusterName) or switchToDefaultCluster() . Using switchToCluster(clusterName) , users can force a client to switch to one of the clusters pre-defined in the Hot Rod client configuration. To switch to the initial servers defined in the client configuration, call switchToDefaultCluster() . | [
"org.infinispan.client.hotrod.configuration.ConfigurationBuilder cb = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder(); cb.addCluster(\"remote-cluster\").addClusterNode(\"remote-cluster-host\", 11222); RemoteCacheManager rcm = new RemoteCacheManager(cb.build());"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/hot_rod_cross_site_cluster_failover |
Chapter 1. OpenShift Container Platform 4.10 release notes | Chapter 1. OpenShift Container Platform 4.10 release notes Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP. Built on Red Hat Enterprise Linux (RHEL) and Kubernetes, OpenShift Container Platform provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements. 1.1. About this release OpenShift Container Platform ( RHSA-2022:0056 ) is now available. This release uses Kubernetes 1.23 with CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.10 are included in this topic. Red Hat did not publicly release OpenShift Container Platform 4.10.0 as the GA version and, instead, is releasing OpenShift Container Platform 4.10.3 as the GA version. OpenShift Container Platform 4.10 clusters are available at https://console.redhat.com/openshift . The Red Hat OpenShift Cluster Manager application for OpenShift Container Platform allows you to deploy OpenShift clusters to either on-premise or cloud environments. OpenShift Container Platform 4.10 is supported on Red Hat Enterprise Linux (RHEL) 8.4 through 8.7, as well as on Red Hat Enterprise Linux CoreOS (RHCOS) 4.10. You must use RHCOS machines for the control plane, and you can use either RHCOS or RHEL for compute machines. 1.2. OpenShift Container Platform layered and dependent component support and compatibility The scope of support for layered and dependent components of OpenShift Container Platform changes independently of the OpenShift Container Platform version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy . 1.3. New features and enhancements This release adds improvements related to the following components and concepts. 1.3.1. Documentation 1.3.1.1. Getting started with OpenShift Container Platform OpenShift Container Platform 4.10 now includes a getting started guide. Getting Started with OpenShift Container Platform defines basic terminology and provides role-based steps for developers and administrators. The tutorials walk new users through the web console and the OpenShift CLI ( oc ) interfaces. New users can accomplish the following tasks through the Getting Started: Create a project Grant view permissions Deploy a container image from Quay Examine and scale an application Deploy a Python application from GitHub Connect to a database from Quay Create a secret Load and view your application For more information, see Getting Started with OpenShift Container Platform . 1.3.2. Red Hat Enterprise Linux CoreOS (RHCOS) 1.3.2.1. Improved customization of bare metal RHCOS installation The coreos-installer utility now has iso customize and pxe customize subcommands for more flexible customization when installing RHCOS on bare metal from the live ISO and PXE images. 
This includes the ability to customize the installation to fetch Ignition configs from HTTPS servers that use a custom certificate authority or self-signed certificate. 1.3.2.2. RHCOS now uses RHEL 8.4 RHCOS now uses Red Hat Enterprise Linux (RHEL) 8.4 packages in OpenShift Container Platform 4.10. These packages provide you the latest fixes, features, and enhancements, such as NetworkManager features, as well as the latest hardware support and driver updates. 1.3.3. Installation and upgrade 1.3.3.1. New default component types for AWS installations The OpenShift Container Platform 4.10 installer uses new default component types for installations on AWS. The installation program uses the following components by default: AWS EC2 M6i instances for both control plane and compute nodes, where available AWS EBS gp3 storage 1.3.3.2. Enhancements to API in the install-config.yaml file Previously, when a user installed OpenShift Container Platform on a bare metal installer-provisioned infrastructure, they had nowhere to configure custom network interfaces, such as static IPs or vLANs to communicate with the Ironic server. When configuring a Day 1 installation on bare metal only, users can now use the API in the install-config.yaml file to customize the network configuration ( networkConfig ). This configuration is set during the installation and provisioning process and includes advanced options, such as setting static IPs per host. 1.3.3.3. OpenShift Container Platform on ARM OpenShift Container Platform 4.10 is now supported on ARM based AWS EC2 and bare-metal platforms. Instance availability and installation documentation can be found in Supported installation methods for different platforms . The following features are supported for OpenShift Container Platform on ARM: OpenShift Cluster Monitoring RHEL 8 Application Streams OVNKube Elastic Block Store (EBS) for AWS AWS .NET applications NFS storage on bare metal The following Operators are supported for OpenShift Container Platform on ARM: Node Tuning Operator Node Feature Discovery Operator Cluster Samples Operator Cluster Logging Operator Elasticsearch Operator Service Binding Operator 1.3.3.4. Installing a cluster on IBM Cloud using installer-provisioned infrastructure (Technology Preview) OpenShift Container Platform 4.10 introduces support for installing a cluster on IBM Cloud using installer-provisioned infrastructure in Technology Preview. The following limitations apply for IBM Cloud using IPI: Deploying IBM Cloud using IPI on a previously existing network is not supported. The Cloud Credential Operator (CCO) can use only Manual mode. Mint mode or STS are not supported. IBM Cloud DNS Services is not supported. An instance of IBM Cloud Internet Services is required. Private or disconnected deployments are not supported. For more information, see Preparing to install on IBM Cloud . 1.3.3.5. Thin provisioning support for VMware vSphere cluster installation OpenShift Container Platform 4.10 introduces support for thin-provisioned disks when you install a cluster using installer-provisioned infrastructure. You can provision disks as thin , thick , or eagerZeroedThick . For more information about disk provisioning modes in VMware vSphere, see Installation configuration parameters . 1.3.3.6. Installing a cluster into an Amazon Web Services GovCloud region Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Images (AMIs) are now available for AWS GovCloud regions. 
The availability of these AMIs improves the installation process because you are no longer required to upload a custom RHCOS AMI to deploy a cluster. For more information, see Installing a cluster on AWS into a government region . 1.3.3.7. Using a custom AWS IAM role for instance profiles Beginning with OpenShift Container Platform 4.10, if you configure a cluster with an existing IAM role, the installation program no longer adds the shared tag to the role when deploying the cluster. This enhancement improves the installation process for organizations that want to use a custom IAM role, but whose security policies prevent the use of the shared tag. 1.3.3.8. CSI driver installation on vSphere clusters To install a CSI driver on a cluster running on vSphere, you must have the following components installed: Virtual hardware version 15 or later vSphere version 6.7 Update 3 or later VMware ESXi version 6.7 Update 3 or later Components with versions earlier than those above are still supported, but are deprecated. These versions are still fully supported, but version 4.11 of OpenShift Container Platform will require vSphere virtual hardware version 15 or later. Note If your cluster is deployed on vSphere, and the preceding components are lower than the version mentioned above, upgrading from OpenShift Container Platform 4.9 to 4.10 on vSphere is supported, but no vSphere CSI driver will be installed. Bug fixes and other upgrades to 4.10 are still supported, however upgrading to 4.11 will be unavailable. 1.3.3.9. Installing a cluster on Alibaba Cloud using installer-provisioned infrastructure (Technology Preview) OpenShift Container Platform 4.10 introduces the ability for installing a cluster on Alibaba Cloud using installer-provisioned infrastructure in Technology Preview. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains. 1.3.3.10. Installing a cluster on Microsoft Azure Stack Hub using installer-provisioned infrastructure OpenShift Container Platform 4.10 introduces support for installing a cluster on Azure Stack Hub using installer-provisioned infrastructure. This type of installation lets you use the installation program to deploy a cluster on infrastructure that the installation program provisions and the cluster maintains. Note Beginning with OpenShift Container Platform 4.10.14, you can deploy control plane and compute nodes with the premium_LRS , standardSSD_LRS , or standard_LRS disk type. By default, the installation program deploys control plane and compute nodes with the premium_LRS disk type. In earlier 4.10 releases, only the standard_LRS disk type was supported. For more information, see Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure . 1.3.3.11. Conditional updates OpenShift Container Platform 4.10 adds support for consuming conditional update paths provided by the OpenShift Update Service. Conditional update paths convey identified risks and the conditions under which those risks apply to clusters. The Administrator perspective on the web console only offers recommended upgrade paths for which the cluster does not match known risks. However, OpenShift CLI ( oc ) 4.10 or later can be used to display additional upgrade paths for OpenShift Container Platform 4.10 clusters. Associated risk information including supporting documentation references is displayed with the paths. 
The administrator may review the referenced materials and choose to perform the supported, but no longer recommended, upgrade. For more information, see Conditional updates and Updating along a conditional upgrade path . 1.3.3.12. Disconnected mirroring with the oc-mirror CLI plugin (Technology Preview) This release introduces the oc-mirror OpenShift CLI ( oc ) plugin as a Technology Preview. You can use the oc-mirror plugin to mirror images in a disconnected environment. For more information, see Mirroring images for a disconnected installation using the oc-mirror plugin . 1.3.3.13. Installing a cluster on RHOSP that uses OVS-DPDK You can now install a cluster on Red Hat OpenStack Platform (RHOSP) for which compute machines run on Open vSwitch with the Data Plane Development Kit (OVS-DPDK) networks. Workloads that run on these machines can benefit from the performance and latency improvements of OVS-DPDK. For more information, see Installing a cluster on RHOSP that supports DPDK-connected compute machines . 1.3.3.14. Setting compute machine affinity during installation on RHOSP You can now select compute machine affinity when you install a cluster on RHOSP. By default, compute machines are deployed with a soft-anti-affinity server policy, but you can also choose anti-affinity or soft-affinity policies. 1.3.4. Web console 1.3.4.1. Developer perspective With this update, you can specify the name of a service binding connector in the Topology view while making a binding connection. With this update, creating pipelines workflow has now been enhanced: You can now choose a user-defined pipeline from a drop-down list while importing your application from the Import from Git pipeline workflow. Default webhooks are added for the pipelines that are created using Import from Git workflow and the URL is visible in the side panel of the selected resources in the Topology view. You can now opt out of the default Tekton Hub integration by setting the parameter enable-devconsole-integration to false in the TektonConfig custom resource. Example TektonConfig CR to opt out of Tekton Hub integration ... hub: params: - name: enable-devconsole-integration value: 'false' ... Pipeline builder contains the Tekton Hub tasks that are supported by the cluster, all other unsupported tasks are excluded from the list. With this update, the application export workflow now displays the export logs dialog or alert while the export is in progress. You can use the dialog to cancel or restart the exporting process. With this update, you can add your new Helm Chart Repository to the Developer Catalog by creating a custom resource. Refer to the quick start guides in the Developer perspective to add a new ProjectHelmChartRepository . With this update, you can now access community devfiles samples using the Developer Catalog . 1.3.4.2. Dynamic plugin (Technology Preview) Starting with OpenShift Container Platform 4.10, the ability to create OpenShift console dynamic plugins is now available as a Technology Preview feature. You can use this feature to customize your interface at runtime in many ways, including: Adding custom pages Adding perspectives and updating navigation items Adding tabs and actions to resource pages For more information about the dynamic plugin, see Adding a dynamic plugin to the OpenShift Container Platform web console . 1.3.4.3. Running a pod in debug mode With this update, you can now view debug terminals in the web console. 
When a pod has a container that is in a CrashLoopBackOff state, a debug pod can be launched. A terminal interface is displayed and can be used to debug the crash looping container. This feature can be accessed from the pod status pop-up window, which is opened by clicking the status of a pod and provides links to debug terminals for each crash looping container within that pod. You can also access this feature on the Logs tab of the pod details page. A debug terminal link is displayed above the log window when a crash looping container is selected. Additionally, the pod status pop-up window now provides links to the Logs and Events tabs of the pod details page. 1.3.4.4. Customized workload notifications With this update, you can customize workload notifications on the User Preferences page. The User workload notifications setting under the Notifications tab allows you to hide user workload notifications that appear on the Cluster Overview page or in your drawer. 1.3.4.5. Improved quota visibility With this update, non-admin users are now able to view their usage of the AppliedClusterResourceQuota on the Project Overview , ResourceQuotas , and API Explorer pages to determine the cluster-scoped quota available for use. Additionally, AppliedClusterResourceQuota details can now be found on the Search page. 1.3.4.6. Cluster support level OpenShift Container Platform now enables you to view support level information about your cluster on the Overview Details card, in the Cluster Settings , and in the About modal, and it adds a notification to your notifications drawer when your cluster is unsupported. From the Overview page, you can manage subscription settings under the Service Level Agreement (SLA) . 1.3.5. IBM Z and LinuxONE With this release, IBM Z and LinuxONE are now compatible with OpenShift Container Platform 4.10. The installation can be performed with z/VM or RHEL KVM.
For installation instructions, see the following documentation: Installing a cluster with z/VM on IBM Z and LinuxONE Installing a cluster with z/VM on IBM Z and LinuxONE in a restricted network Installing a cluster with RHEL KVM on IBM Z and LinuxONE Installing a cluster with RHEL KVM on IBM Z and LinuxONE in a restricted network Notable enhancements The following new features are supported on IBM Z and LinuxONE with OpenShift Container Platform 4.10: Horizontal pod autoscaling The following Multus CNI plugins are supported: Bridge Host-device IPAM IPVLAN Compliance Operator 0.1.49 NMState Operator OVN-Kubernetes IPsec encryption Vertical Pod Autoscaler Operator Supported features The following features are also supported on IBM Z and LinuxONE: Currently, the following Operators are supported: Cluster Logging Operator Compliance Operator 0.1.49 Local Storage Operator NFD Operator NMState Operator OpenShift Elasticsearch Operator Service Binding Operator Vertical Pod Autoscaler Operator Encrypting data stored in etcd Helm Multipathing Persistent storage using iSCSI Persistent storage using local volumes (Local Storage Operator) Persistent storage using hostPath Persistent storage using Fibre Channel Persistent storage using Raw Block OVN-Kubernetes Support for multiple network interfaces Three-node cluster support z/VM Emulated FBA devices on SCSI disks 4K FCP block device These features are available only for OpenShift Container Platform on IBM Z and LinuxONE for 4.10: HyperPAV enabled on IBM Z and LinuxONE for the virtual machines for FICON attached ECKD storage Restrictions The following restrictions impact OpenShift Container Platform on IBM Z and LinuxONE: The following OpenShift Container Platform Technology Preview features are unsupported: Precision Time Protocol (PTP) hardware The following OpenShift Container Platform features are unsupported: Automatic repair of damaged machines with machine health checking CodeReady Containers (CRC) Controlling overcommit and managing container density on nodes CSI volume cloning CSI volume snapshots FIPS cryptography NVMe OpenShift Metering OpenShift Virtualization Tang mode disk encryption during OpenShift Container Platform deployment Worker nodes must run Red Hat Enterprise Linux CoreOS (RHCOS) Persistent shared storage must be provisioned by using either OpenShift Data Foundation or other supported storage protocols Persistent non-shared storage must be provisioned using local storage, like iSCSI, FC, or using LSO with DASD, FCP, or EDEV/FBA 1.3.6. IBM Power With this release, IBM Power is now compatible with OpenShift Container Platform 4.10. 
For installation instructions, see the following documentation: Installing a cluster on IBM Power Installing a cluster on IBM Power in a restricted network Notable enhancements The following new features are supported on IBM Power with OpenShift Container Platform 4.10: Horizontal pod autoscaling The following Multus CNI plugins are supported: Bridge Host-device IPAM IPVLAN Compliance Operator 0.1.49 NMState Operator OVN-Kubernetes IPsec encryption Vertical Pod Autoscaler Operator Supported features The following features are also supported on IBM Power: Currently, the following Operators are supported: Cluster Logging Operator Compliance Operator 0.1.49 Local Storage Operator NFD Operator NMState Operator OpenShift Elasticsearch Operator SR-IOV Network Operator Service Binding Operator Vertical Pod Autoscaler Operator Encrypting data stored in etcd Helm Multipathing Multus SR-IOV NVMe OVN-Kubernetes Persistent storage using iSCSI Persistent storage using local volumes (Local Storage Operator) Persistent storage using hostPath Persistent storage using Fibre Channel Persistent storage using Raw Block Support for multiple network interfaces Support for Power10 Three-node cluster support 4K Disk Support Restrictions The following restrictions impact OpenShift Container Platform on IBM Power: The following OpenShift Container Platform Technology Preview features are unsupported: Precision Time Protocol (PTP) hardware The following OpenShift Container Platform features are unsupported: Automatic repair of damaged machines with machine health checking CodeReady Containers (CRC) Controlling overcommit and managing container density on nodes FIPS cryptography OpenShift Metering OpenShift Virtualization Tang mode disk encryption during OpenShift Container Platform deployment Worker nodes must run Red Hat Enterprise Linux CoreOS (RHCOS) Persistent storage must be of the Filesystem type that uses local volumes, OpenShift Data Foundation, Network File System (NFS), or Container Storage Interface (CSI) 1.3.7. Security and compliance Information regarding new features, enhancements, and bug fixes for security and compliance components can be found in the Compliance Operator and File Integrity Operator release notes. For more information about security and compliance, see OpenShift Container Platform security and compliance . 1.3.8. Networking 1.3.8.1. Dual-stack services require that ipFamilyPolicy is specified When you create a service that uses multiple IP address families, you must explicitly specify ipFamilyPolicy: PreferDualStack or ipFamilyPolicy: RequireDualStack in your Service object definition. This change breaks backward compatibility with earlier releases of OpenShift Container Platform. For more information, see BZ#2045576 . 1.3.8.2. Change cluster network MTU after cluster installation After cluster installation, if you are using the OpenShift SDN cluster network provider or the OVN-Kubernetes cluster network provider, you can change your hardware MTU and your cluster network MTU values. Changing the MTU across the cluster is disruptive and requires that each node is rebooted several times. For more information, see Changing the cluster network MTU . 1.3.8.3. OVN-Kubernetes support for gateway configuration The OVN-Kubernetes CNI network provider adds support for configuring how egress traffic is sent to the node gateway. By default, egress traffic is processed in OVN to exit the cluster and traffic is not affected by specialized routes in the kernel routing table. 
This enhancement adds a gatewayConfig.routingViaHost field. With this update, the field can be set at runtime as a post-installation activity and when it is set to true , egress traffic is sent from pods to the host networking stack. This update benefits highly specialized installations and applications that rely on manually configured routes in the kernel routing table. This enhancement has an interaction with the Open vSwitch hardware offloading feature. With this update, when the gatewayConfig.routingViaHost field is set to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Important To configure how egress traffic is routed, use gatewayConfig.routingViaHost and delete the gateway-mode-config config map if you have set it up in the openshift-network-operator namespace. For more information regarding the gateway-mode-config solution and setting OVN-Kubernetes gateway mode in OpenShift Container Platform 4.10 and higher, see the solution . For configuration information, see Configuration for the OVN-Kubernetes CNI cluster network provider . 1.3.8.4. Enhancements to networking metrics The following metrics are now available for clusters. The metric names that start with sdn_controller are unique to the OpenShift SDN CNI network provider. The metric names that start with ovn are unique to the OVN-Kubernetes CNI network provider: network_attachment_definition_instances{networks="egress-router"} openshift_unidle_events_total ovn_controller_bfd_run ovn_controller_ct_zone_commit ovn_controller_flow_generation ovn_controller_flow_installation ovn_controller_if_status_mgr ovn_controller_if_status_mgr_run ovn_controller_if_status_mgr_update ovn_controller_integration_bridge_openflow_total ovn_controller_ofctrl_seqno_run ovn_controller_patch_run ovn_controller_pinctrl_run ovnkube_master_ipsec_enabled ovnkube_master_num_egress_firewall_rules ovnkube_master_num_egress_firewalls ovnkube_master_num_egress_ips ovnkube_master_pod_first_seen_lsp_created_duration_seconds ovnkube_master_pod_lsp_created_port_binding_duration_seconds ovnkube_master_pod_port_binding_chassis_port_binding_up_duration_seconds ovnkube_master_pod_port_binding_port_binding_chassis_duration_seconds sdn_controller_num_egress_firewall_rules sdn_controller_num_egress_firewalls sdn_controller_num_egress_ips The ovnkube_master_resource_update_total metric is removed for the 4.10 release. 1.3.8.5. Switching between YAML view and a web console form Previously, changes were not retained when switching between YAML view and Form view on the web console. Additionally, after switching to YAML view , you could not return to Form view . With this update, you can now easily switch between YAML view and Form view on the web console without losing changes. 1.3.8.6. Listing pods targeted by network policies When using the network policy functionality in the OpenShift Container Platform web console, the pods affected by a policy are listed. The list changes as the combined namespace and pod selectors in these policy sections are modified: Peer definition Rule definition Ingress Egress The list of impacted pods includes only those pods accessible by the user. 1.3.8.7. Enhancement to must-gather to simplify network tracing The oc adm must-gather command is enhanced in a way that simplifies collecting network packet captures. Previously, oc adm must-gather could start a single debug pod only. With this enhancement, you can start a debug pod on multiple nodes at the same time.
You can use the enhancement to run packet captures on multiple nodes at the same time to simplify troubleshooting network communication issues. A new --node-selector argument provides a way to identify which nodes you are collecting packet captures for. For more information, see Network trace methods and Collecting a host network trace . 1.3.8.8. Pod-level bonding for secondary networks Bonding at the pod level is vital to enable workloads inside pods that require high availability and more throughput. With pod-level bonding, you can create a bond interface from multiple single root I/O virtualization (SR-IOV) virtual function interfaces in kernel mode interface. The SR-IOV virtual functions are passed into the pod and attached to a kernel driver. Scenarios where pod-level bonding is required include creating a bond interface from multiple SR-IOV virtual functions on different physical functions. Creating a bond interface from two different physical functions on the host can be used to achieve high availability at pod level. 1.3.8.9. Egress IP address support for clusters installed on public clouds As a cluster administrator, you can associate one or more egress IP addresses with a namespace. An egress IP address ensures that a consistent source IP address is associated with traffic from a particular namespace that is leaving the cluster. For the OVN-Kubernetes and OpenShift SDN cluster network providers, you can configure an egress IP address on the following public cloud providers: Amazon Web Services (AWS) Google Cloud Platform (GCP) Microsoft Azure To learn more, refer to the respective documentation for your cluster network provider: 1.3.8.10. OpenShift SDN cluster network provider network policy support for egress policies and ipBlock except If you use the OpenShift SDN cluster network provider, you can now use egress rules in network policy with ipBlock and ipBlock.except . You define egress policies in the egress array of the NetworkPolicy object. For more information, refer to About network policy . 1.3.8.11. Ingress Controller router compression This enhancement adds the ability to configure global HTTP traffic compression on the HAProxy Ingress Controller for specific MIME types. This update enables gzip-compression of your ingress workloads when there are large amounts of compressible routed traffic. For more information, see Using router compression . 1.3.8.12. Support for CoreDNS customization A cluster administrator can now configure DNS servers to allow DNS name resolution through the configured servers for the default domain. A DNS forwarding configuration can have both the default servers specified in the /etc/resolv.conf file and the upstream DNS servers. For more information, see Using DNS forwarding . 1.3.8.13. Support for CoreDNS log level and Operator log level This enhancement adds the ability to manually change the log level for an Operator individually or a cluster as a whole. For more information, see Setting the CoreDNS log level 1.3.8.14. Support for configuring the maximum length of the syslog message in the Ingress Controller You can now set the maximum length of the syslog message in the Ingress Controller to any value between 480 and 4096 bytes. For more information, see Ingress Controller configuration parameters . 1.3.8.15. Set CoreDNS forwarding policy You can now set the CoreDNS forwarding policy through the DNS Operator. The default value is Random , and you can also set the value to RoundRobin or Sequential . 
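The following DNS custom resource is a minimal, hedged sketch of where the new policy value might be placed; the upstream addresses are placeholders and the exact field layout should be confirmed against the DNS Operator documentation referenced below.

apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  upstreamResolvers:
    policy: RoundRobin
    upstreams:
    - type: SystemResolvConf
    - type: Network
      address: 192.168.100.10
      port: 53

With RoundRobin set, CoreDNS is expected to cycle through the listed upstream resolvers instead of selecting one at random.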
For more information, see Using DNS forwarding . 1.3.8.16. Open vSwitch hardware offloading support for SR-IOV You can now configure Open vSwitch hardware offloading to increase data processing performance on compatible bare metal nodes. Hardware offloading is a method for processing data that removes data processing tasks from the CPU and transfers them to the dedicated data processing unit of a network interface controller. Benefits of this feature include faster data processing, reduced CPU workloads, and lower computing costs. For more information, see Configuring hardware offloading . 1.3.8.17. Creating DNS records by using the Red Hat External DNS Operator (Technology Preview) You can now create DNS records by using the Red Hat External DNS Operator on cloud providers such as AWS, Azure, and GCP. You can install the External DNS Operator using OperatorHub. You can use parameters to configure ExternalDNS as required. For more information, see Understanding the External DNS Operator . 1.3.8.18. Mutable Ingress Controller endpoint publishing strategy enhancement Cluster administrators can now configure the Ingress Controller endpoint publishing strategy to change the load-balancer scope between Internal and External in OpenShift Container Platform. For more information, see Ingress Controller endpoint publishing strategy . 1.3.8.19. OVS hardware offloading for clusters on RHOSP (Technology Preview) For clusters that run on Red Hat OpenStack Platform (RHOSP), you can enable Open vSwitch (OVS) hardware offloading. For more information, see Enabling OVS hardware offloading . 1.3.8.20. Reduction of RHOSP resources created by Kuryr For clusters that run on RHOSP, Kuryr now only creates Neutron networks and subnets for namespaces that have at least one pod on the pods network. Additionally, pools in a namespace are populated after at least one pod on the pods network is created in the namespace. 1.3.8.21. Support for RHOSP DCN (Technology Preview) You can now deploy a cluster on a Red Hat OpenStack Platform (RHOSP) deployment that uses a distributed compute node (DCN) configuration. This deployment configuration has several limitations: Only RHOSP version 16 is supported. For RHOSP 16.1.4, only hyper-converged infrastructure (HCI) and Ceph technologies are supported at the edge. For RHOSP 16.2, non-HCI and Ceph technologies are supported as well. Networks must be created ahead of time (Bring Your Own Network) as either tenant or provider networks. These networks must be scheduled in the appropriate availability zones. 1.3.8.22. Support for external cloud providers for clusters on RHOSP (Technology Preview) Clusters that run on RHOSP can now use Cloud Provider OpenStack . This capability is available as part of the TechPreviewNoUpgrade feature set. 1.3.8.23. Configuring host network interfaces with NMState on installer-provisioned clusters OpenShift Container Platform now provides a networkConfig configuration setting for installer-provisioned clusters, which takes an NMState YAML configuration to configure host interfaces. During installer-provisioned installations, you can add the networkConfig configuration setting and the NMState YAML configuration to the install-config.yaml file. Additionally, you can add the networkConfig configuration setting and the NMState YAML configuration to the bare metal host resource when using the Bare Metal Operator. 
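A minimal, hedged sketch of how such an entry might look in the hosts list of the install-config.yaml file is shown here; the interface name and addresses are illustrative only, and other required host fields, such as the BMC details and boot MAC address, are omitted for brevity.

hosts:
- name: openshift-worker-0
  role: worker
  networkConfig:
    interfaces:
    - name: eno1
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.0.2.10
          prefix-length: 24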
The most common use case for the networkConfig configuration setting is to set static IP addresses on a host's network interface during installation or while expanding the cluster. For more information, see Configuring host network interfaces in the install-config.yaml file . 1.3.8.24. Boundary clock and PTP enhancements to linuxptp services You can now specify multiple network interfaces in a PtpConfig profile to allow nodes running RAN vDU applications to serve as a Precision Time Protocol Telecom Boundary Clock (PTP T-BC). Interfaces configured as boundary clocks now also support PTP fast events. For more information, see Configuring linuxptp services as boundary clock . 1.3.8.25. Support for Intel 800-Series Columbiaville NICs Intel 800-Series Columbiaville NICs are now fully supported for interfaces configured as boundary clocks or ordinary clocks. Columbiaville NICs are supported in the following configurations: Ordinary clock Boundary clock synced to the Grandmaster clock Boundary clock with one port synchronizing from an upstream source clock, and three ports providing downstream timing to destination clocks For more information, see Configuring PTP devices . 1.3.8.26. Kubernetes NMState Operator is GA for bare-metal, IBM Power, IBM Z, and LinuxONE installations OpenShift Container Platform now provides the Kubernetes NMState Operator for bare-metal, IBM Power, IBM Z, and LinuxONE installations. The Kubernetes NMState Operator is still a Technology Preview for all other platforms. See About the Kubernetes NMState Operator for additional details. 1.3.8.27. SR-IOV support for Mellanox MT2892 cards SR-IOV support is now available for Mellanox MT2892 cards . 1.3.8.28. Network Observability Operator to observe network traffic flow As an administrator, you can now install the Network Observability Operator to observe the network traffic for your OpenShift Container Platform cluster in the console. You can view and monitor the network traffic data in different graphical representations. The Network Observability Operator uses eBPF technology to create the network flows. The network flows are enriched with OpenShift Container Platform information, and stored in Loki. You can use the network traffic information for detailed troubleshooting and analysis. The Network Observability Operator is General Availability (GA) status in the 4.12 release of OpenShift Container Platform and is also supported in OpenShift Container Platform 4.10. For more information, see Network Observability . 1.3.8.28.1. Network Observability Operator updates The Network Observability Operator releases updates independently from the OpenShift Container Platform minor version release stream. Updates are available through a single, rolling stream which is supported on all currently supported versions of OpenShift Container Platform 4. Information regarding new features, enhancements, and bug fixes for the Network Observability Operator can be found in the Network Observability release notes . 1.3.9. Hardware 1.3.9.1. Enhancements to MetalLB load balancing The following enhancements to MetalLB and the MetalLB Operator are included in this release: Support for Border Gateway Protocol (BGP) is added. Support for Bidirectional Forwarding Detection (BFD) in combination with BGP is added. Support for IPv6 and dual-stack networking is added. Support for specifying a node selector on the speaker pods is added. You can now control which nodes are used for advertising load balancer service IP addresses. 
This enhancement applies to layer 2 mode and BGP mode. Validating web hooks are added to ensure that address pool and BGP peer custom resources are valid. The v1alpha1 API version for the AddressPool and MetalLB custom resource definitions that were introduced in the 4.9 release is deprecated. Both custom resources are updated to the v1beta1 API version. Support for speaker pod tolerations in the MetalLB custom resource definition is added. For more information, see About MetalLB and the MetalLB Operator . 1.3.9.2. Support for modifying host firmware settings OpenShift Container Platform supports the HostFirmwareSettings and FirmwareSchema resources. When deploying OpenShift Container Platform on bare metal hosts, there are times when you need to make changes to the host either before or after provisioning. This can include inspecting the host's firmware and BIOS details. There are two new resources that you can use with the Bare Metal Operator (BMO): HostFirmwareSettings : You can use the HostFirmwareSettings resource to retrieve and manage the BIOS settings for a host. The resource contains the complete BIOS configuration returned from the baseboard management controller (BMC). Whereas the firmware field in the BareMetalHost resource returns three vendor-independent fields, the HostFirmwareSettings resource typically comprises many vendor-specific BIOS settings per host model. FirmwareSchema : You can use the FirmwareSchema to identify the host's modifiable BIOS values and limits when making changes to host firmware settings. See Bare metal configuration for additional details. 1.3.10. Storage 1.3.10.1. Storage metrics indicator A new feature has been added that provides metrics indicating the amount of used space on volumes used for persistent volume claims (PVCs). This information appears in the PVC list, and in the PVC details in the Used column. ( BZ#1985965 ) 1.3.10.2. Console Storage Plugin enhancement A new feature has been added to the Console Storage Plugin that adds Aria labels throughout the installation flow for screen readers. This provides better accessibility for users that use screen readers to access the console. 1.3.10.3. Persistent storage using the Alibaba AliCloud Disk CSI Driver Operator OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AliCloud Disk. The AliCloud Disk Driver Operator that manages this driver is generally available, and enabled by default in OpenShift Container Platform 4.10. For more information, see AliCloud Disk CSI Driver Operator . 1.3.10.4. Persistent storage using the Microsoft Azure File CSI Driver Operator (Technology Preview) OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Azure File. The Azure File Driver Operator that manages this driver is in Technology Preview. For more information, see Azure File CSI Driver Operator . 1.3.10.5.
Persistent storage using the IBM VPC Block CSI Driver Operator OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for IBM Virtual Private Cloud (VPC) Block. The IBM VPC Block Driver Operator that manages this driver is generally available, and enabled by default in OpenShift Container Platform 4.10. For more information, see IBM VPC Block CSI Driver Operator . 1.3.10.6. Persistent storage using VMware vSphere CSI Driver Operator is generally available OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for vSphere. This feature was previously introduced as a Technology Preview feature in OpenShift Container Platform 4.8 and is now generally available and enabled by default in OpenShift Container Platform 4.10. For more information, see vSphere CSI Driver Operator . vSphere CSI Driver Operator installation requires: Certain minimum component versions installed. See CSI driver installation on vSphere clusters Removal of any non-Red Hat vSphere CSI driver ( Removing a non-Red Hat vSphere CSI Operator Driver ) Removal of any storage class named thin-csi Clusters are still upgraded even if the preceding conditions are not met, but it is recommended that you meet these conditions to have a supported vSphere CSI Operator Driver. 1.3.10.7. Persistent storage using Microsoft Azure Disk CSI Driver Operator is generally available OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for Azure Disk. This feature was previously introduced as a Technology Preview feature in OpenShift Container Platform 4.8 and is now generally available, and enabled by default in OpenShift Container Platform 4.10. For more information, see Azure Disk CSI Driver Operator . 1.3.10.8. Persistent storage using AWS Elastic File Storage CSI Driver Operator is generally available OpenShift Container Platform is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Storage (EFS). This feature was previously introduced as a Technology Preview feature in OpenShift Container Platform 4.9 and is now generally available in OpenShift Container Platform 4.10. For more information, see AWS EFS CSI Driver Operator . 1.3.10.9. Automatic CSI migration supports Microsoft Azure file (Technology Preview) Starting with OpenShift Container Platform 4.8, automatic migration for in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers became available as a Technology Preview feature. This feature now supports automatic migration for the Azure File in-tree plugin to the Azure File CSI driver. For more information, see CSI automatic migration . 1.3.10.10. Automatic CSI migration supports VMware vSphere (Technology Preview) Starting with OpenShift Container Platform 4.8, automatic migration for in-tree volume plugins to their equivalent Container Storage Interface (CSI) drivers became available as a Technology Preview feature. This feature now supports automatic migration for the vSphere in-tree plugin to the vSphere CSI driver. For more information, see CSI automatic migration . 1.3.10.11. Using fsGroup to reduce pod timeouts If a storage volume contains many files (roughly 1,000,000 or greater), you may experience pod timeouts. 
OpenShift Container Platform 4.10 introduces the ability to use fsGroup and fsGroupChangePolicy to skip recursive permission change for the storage volume, therefore helping to avoid pod timeout problems. For more information, see Using fsGroup to reduce pod timeouts . 1.3.11. Registry 1.3.12. Operator lifecycle 1.3.12.1. Disabling copied CSVs to support large clusters When an Operator is installed by Operator Lifecycle Manager (OLM), a simplified copy of its cluster service version (CSV) is created in every namespace that the Operator is configured to watch. These CSVs are known as copied CSVs; they identify controllers that are actively reconciling resource events in a given namespace. On large clusters, with namespaces and installed Operators potentially in the hundreds or thousands, copied CSVs can consume an untenable amount of resources, such as OLM's memory usage, cluster etcd limits, and networking bandwidth. To support these larger clusters, cluster administrators can now disable copied CSVs for Operators that are installed with the AllNamespaces mode. For more details, see Configuring Operator Lifecycle Manager features . 1.3.12.2. Generic and complex constraints for dependencies Operators with specific dependency requirements can now use complex constraints or requirement expressions. The new olm.constraint bundle property holds dependency constraint information. A message field allows Operator authors to convey high-level details about why a particular constraint was used. For more details, see Operator Lifecycle Manager dependency resolution . 1.3.12.3. Operator Lifecycle Manager support for Hypershift Operator Lifecycle Manager (OLM) components, including Operator catalogs, can now run entirely on Hypershift-managed control planes. This capability does not incur any cost to tenants on worker nodes. 1.3.12.4. Operator Lifecycle Manager support for ARM Previously, the default Operator catalogs did not support ARM. With this enhancement, Operator Lifecycle Manager (OLM) adds default Operator catalogs to ARM clusters. As a result, the OperatorHub now includes content by default for Operators that support ARM. ( BZ#1996928 ) 1.3.13. Operator development 1.3.13.1. Hybrid Helm Operator (Technology Preview) The standard Helm-based Operator support in the Operator SDK has limited functionality compared to the Go-based and Ansible-based Operator support that has reached the Auto Pilot capability (level V) in the Operator maturity model . Starting in OpenShift Container Platform 4.10 as a Technology Preview feature, the Operator SDK includes the Hybrid Helm Operator to enhance the existing Helm-based support abilities through Go APIs. Operator authors can generate an Operator project beginning with a Helm chart, and then add advanced, event-based logic to the Helm reconciler in Go language. Authors can use Go to continue adding new APIs and custom resource definitions (CRDs) in the same project. For more details, see Operator SDK tutorial for Hybrid Helm Operators . 1.3.13.2. Custom metrics for Ansible-based Operators Operator authors can now use the Ansible-based Operator support in the Operator SDK to expose custom metrics, emit Kubernetes events, and provide better logging. For more details, see Exposing custom metrics for Ansible-based Operators . 1.3.13.3. Object pruning for Go-based Operators The operator-lib pruning utility lets Go-based Operators clean up objects, such as jobs or pods, that can stay in the cluster and use resources. 
The utility includes common pruning strategies for Go-based Operators. Operator authors can also use the utility to create custom hooks and strategies. For more information about the pruning utility, see Object pruning utility for Go-based Operators . 1.3.13.4. Digest-based bundle for disconnected environments With this enhancement, Operator SDK can now package an Operator project into a bundle that works in a disconnected environment with Operator Lifecycle Manager (OLM). Operator authors can run the make bundle command and set USE_IMAGE_DIGESTS to true to automatically update your Operator image reference to a digest rather than a tag. To use the command, you must use environment variables to replace hard-coded related image references. For more information about developing Operators for disconnected environments, see Enabling your Operator for restricted network environments . 1.3.14. Builds With this update, you can use CSI volumes in OpenShift Builds, which is a Technology Preview feature. This feature relies on the newly introduced Shared Resource CSI Driver and the Insights Operator to import RHEL Simple Content Access (SCA) certificates. For example, by using this feature, you can run entitled builds with SharedSecret objects and install entitled RPM packages during builds rather than copying your RHEL subscription credentials and certificates into the builds' namespaces. ( BUILD-274 ) Important The SharedSecret objects and OpenShift Shared Resources feature are only available if you enable the TechPreviewNoUpgrade feature set. These Technology Preview features are not part of the default features. Enabling this feature set cannot be undone and prevents upgrades. This feature set is not recommended on production clusters. See Enabling Technology Preview features using FeatureGates . With this update, workloads can securely share Secrets and ConfigMap objects across namespaces using inline ephemeral csi volumes provided by the Shared Resource CSI Driver. Container Storage Interface (CSI) volumes and the Shared Resource CSI Driver are Technology Preview features. ( BUILD-293 ) 1.3.15. Jenkins With this update, you can run Jenkins agents as sidecar containers. You can use this capability to run any container image in a Jenkins pipeline that has a correctly configured pod template and Jenkins file. Now, to compile code, you can run two new pod templates named java-build and nodejs-builder as sidecar containers with Jenkins. These two pod templates use the latest Java and NodeJS versions provided by the java and nodejs image streams in the openshift namespace. The non-sidecar maven and nodejs pod templates have been deprecated. ( JKNS-132 ) 1.3.16. Machine API 1.3.16.1. Azure Ephemeral OS disk support With this enhancement, you can create a machine set running on Azure that deploys machines on Ephemeral OS disks. Ephemeral OS disks use local VM capacity rather than remote Azure Storage. For more information, see Machine sets that deploy machines on Ephemeral OS disks . 1.3.16.2. Azure Accelerated Networking support With this release, you can enable Accelerated Networking for Microsoft Azure VMs by using the Machine API. Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide VMs with a more direct path to the switch. For more information, see Accelerated Networking for Microsoft Azure VMs . 1.3.16.3. 
Global Azure availability set support With this release, you can use availability sets in global Azure regions that do not have multiple availability zones to ensure high availability. 1.3.16.4. GPU support on Google Cloud Platform Google Cloud Platform (GCP) Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. With this release, you can define which supported GPU to use for an instance by using the Machine API. For more information, see Enabling GPU support for a machine set . 1.3.16.5. Cluster autoscaler node utilization threshold With this enhancement, you can specify a node utilization threshold in the ClusterAutoscaler resource definition. This threshold represents the node utilization level below which an unnecessary node is eligible for deletion. For more information, see About the cluster autoscaler . 1.3.17. Machine Config Operator 1.3.17.1. Enhanced configuration drift detection With this enhancement, the Machine Config Daemon (MCD) now checks nodes for configuration drift if a filesystem write event occurs for any of the files specified in the machine config and before a new machine config is applied, in addition to node bootup. Previously, the MCD checked for configuration drift only at node bootup. This change was made because node reboots do not occur frequently enough to avoid the problems caused by configuration drift until an administrator can correct the issue. Configuration drift occurs when the on-disk state of a node differs from what is configured in the machine config. The Machine Config Operator (MCO) uses the MCD to check nodes for configuration drift and, if detected, sets that node and machine config pool (MCP) to degraded . For more information about configuration drift, see Understanding configuration drift detection . 1.3.18. Nodes 1.3.18.1. Linux control groups version 2 (Developer Preview) You can now enable Linux control groups version 2 (cgroups v2) on specific nodes in your cluster. The OpenShift Container Platform process for enabling cgroups v2 disables all cgroups version 1 controllers and hierarchies. The OpenShift Container Platform cgroups version 2 feature is in Developer Preview and is not supported by Red Hat at this time. For more information, see Enabling Linux control groups version 2 (cgroups v2) . 1.3.18.2. Support for swap memory use on nodes (Technology Preview) You can enable swap memory use for OpenShift Container Platform workloads on a per-node basis. For more information, see Enabling swap memory use on nodes . 1.3.18.3. Place nodes into maintenance mode by using the Node Maintenance Operator The Node Maintenance Operator (NMO) cordons off nodes from the rest of the cluster and drains all the pods from the nodes. By placing nodes under maintenance, you can investigate problems with a machine, or perform operations on the underlying machine, that might result in a node failure. This is a standalone version of NMO. If you installed OpenShift Virtualization, then you must use the NMO that is bundled with it. 1.3.18.4. Node Health Check Operator enhancements (Technology Preview) The Node Health Check Operator provides these new enhancements: Support for running in disconnected mode Prevent conflicts with machine health check. For more information, see About how node health checks prevent conflicts with machine health checks 1.3.18.5. 
Poison Pill Operator enhancements The Poison Pill Operator uses NodeDeletion as its default remediation strategy. The NodeDeletion remediation strategy removes the node object. In OpenShift Container Platform 4.10, the Poison Pill Operator introduces a new remediation strategy called ResourceDeletion . The ResourceDeletion remediation strategy removes the pods and associated volume attachments on the node rather than the node object. This strategy helps to recover workloads faster. 1.3.18.6. Control plane node migration on RHOSP You can now migrate control plane nodes from one RHOSP host to another without encountering a service disruption. 1.3.19. Red Hat OpenShift Logging In OpenShift Container Platform 4.7, Cluster Logging became Red Hat OpenShift Logging . For more information, see Release notes for Red Hat OpenShift Logging . 1.3.20. Monitoring The monitoring stack for this release includes the following new and modified features. 1.3.20.1. Monitoring stack components and dependencies Updates to versions of monitoring stack components and dependencies include the following: Alertmanager to 0.23.0 Grafana to 8.3.4 kube-state-metrics to 2.3.0 node-exporter to 1.3.1 prom-label-proxy to 0.4.0 Prometheus to 2.32.1 Prometheus adapter to 0.9.1 Prometheus operator to 0.53.1 Thanos to 0.23.1 1.3.20.2. New page for metrics targets in the OpenShift Container Platform web console A new Metrics Targets page in the OpenShift Container Platform web console shows targets for default OpenShift Container Platform projects and for user-defined projects. You can use this page to view, search, and filter the endpoints that are currently targeted for scraping, which helps you to identify and troubleshoot problems. 1.3.20.3. Monitoring components updated to use TLS authentication for metrics collection With this release, all monitoring components are now configured to use mutual TLS authentication, rather than Bearer Token static authentication for metrics collection. TLS authentication is more resilient to Kubernetes API outages and decreases the load on the Kubernetes API. 1.3.20.4. Cluster Monitoring Operator updated to use the global TLS security profile With this release, the Cluster Monitoring Operator components now honor the global OpenShift Container Platform tlsSecurityProfile settings. The following components and services now use the TLS security profile: Alertmanager pods (ports 9092 and 9097) kube-state-metrics pod (ports 8443 and 9443) openshift-state-metrics pod (ports 8443 and 9443) node-exporter pods (port 9100) Grafana pod (port 3002) prometheus-adapter pods (port 6443) prometheus-k8s pods (ports 9092 and 10902) Thanos query pods (ports 9092, 9093 and 9094) Prometheus Operator (ports 8080 and 8443) telemeter-client pod (port 8443) If you have enabled user-defined monitoring, the following pods now use the profile: prometheus-user-workload pods (ports 9091 and 10902) prometheus-operator pod (ports 8080 and 8443) 1.3.20.5. Changes to alerting rules New Added a namespace label to all Thanos alerting rules. Added the openshift_io_alert_source="platform" label to all platform alerts. Changed Renamed AggregatedAPIDown to KubeAggregatedAPIDown . Renamed AggregatedAPIErrors to KubeAggregatedAPIErrors . Removed the HighlyAvailableWorkloadIncorrectlySpread alert. Improved the description of the KubeMemoryOvercommit alert. Improved NodeFilesystemSpaceFillingUp alerts to make it consistent with the Kubernetes garbage collection thresholds. 
Excluded ReadOnlyMany volumes from the KubePersistentVolumeFillingUp alerts. Extended PrometheusOperator alerts to include the Prometheus operator running in the openshift-user-workload-monitoring namespace. Replaced the ThanosSidecarPrometheusDown and ThanosSidecarUnhealthy alerts by ThanosSidecarNoConnectionToStartedPrometheus . Changed the severity of KubeletTooManyPods from warning to info . Enabled exclusion of specific persistent volumes from KubePersistentVolumeFillingUp alerts by adding the alerts.k8s.io/KubePersistentVolumeFillingUp: disabled label to a persistent volume resource. Note Red Hat does not guarantee backward compatibility for recording rules or alerting rules. 1.3.20.6. Changes to metrics Pod-centric cAdvisor metrics available at the slice level have been dropped. The following metrics are now exposed: kube_poddisruptionbudget_labels kube_persistentvolumeclaim_labels kube_persistentvolume_labels Metrics with the name kube_*annotation have been removed from kube-state-metrics . Note Red Hat does not guarantee backward compatibility for metrics. 1.3.20.7. Added hard anti-affinity rules and pod disruption budgets for certain components With this release, hard anti-affinity rules and pod disruption budgets have been enabled for the following monitoring components to reduce downtime during patch upgrades: Alertmanager Note As part of this change, the number of Alertmanager replicas has been reduced from three to two. However, the persistent volume claim (PVC) for the removed third replica is not automatically removed as part of the upgrade process. If you have configured persistent storage for Alertmanager, you can remove this PVC manually from the Cluster Monitoring Operator. See the "Known Issues" section for more information. Prometheus adapter Prometheus Thanos Querier If you have enabled user-defined monitoring, the following components also use these rules and budgets: Prometheus Thanos Ruler 1.3.20.8. Alert routing for user-defined projects (Technology Preview) This release introduces a Technology Preview feature in which an administrator can enable alert routing for user-defined projects monitoring. Users can then add and configure alert routing for their user-defined projects. 1.3.20.9. Alertmanager Access to the third-party Alertmanager web user interface from the OpenShift Container Platform route has been removed. 1.3.20.10. Prometheus OpenShift Container Platform cluster administrators can now configure query logging for Prometheus. Access to the third-party Prometheus web user interface is deprecated and will be removed in a future OpenShift Container Platform release. 1.3.20.11. Prometheus adapter The Prometheus adapter now uses the Thanos Querier API rather than the Prometheus API. OpenShift Container Platform cluster administrators can now configure audit logs for the Prometheus adapter. 1.3.20.12. Thanos Querier Access to the third-party Thanos Querier web user interface from the OpenShift Container Platform route has been removed. The /api/v1/labels , /api/v1/label/*/values , and /api/v1/series endpoints on the Thanos Querier tenancy port are now exposed. OpenShift Container Platform cluster administrators can now configure query logging. If user workload monitoring is enabled, access to the third-party Thanos Ruler web user interface from the OpenShift Container Platform route has been removed. 1.3.20.13. Grafana Access to the third-party Grafana web user interface is deprecated and will be removed in a future OpenShift Container Platform release. 
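To illustrate the Prometheus query logging option noted in the Prometheus section above, the following cluster-monitoring-config ConfigMap is a hedged sketch; the queryLogFile field name and the log file path are assumptions that should be verified against the monitoring configuration reference before use.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      queryLogFile: /tmp/promql-queries.log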
1.3.21. Scalability and performance 1.3.21.1. New Special Resource Operator metrics The Special Resource Operator (SRO) now exposes metrics to help you watch the health of your SRO custom resources and objects. For more information, see Prometheus Special Resource Operator metrics . 1.3.21.2. Special Resource Operator custom resource definition fields Using oc explain for Special Resource Operator (SRO) now provides online documentation for SRO custom resource definitions (CRD). This enhancement provides better specifics for CRD fields. ( BZ#2031875 ) 1.3.21.3. New Node Tuning Operator metric added to Telemetry A Node Tuning Operator (NTO) metric is now added to Telemetry. Follow the procedure in Showing data collected by Telemetry to see all the metrics collected by Telemetry. 1.3.21.4. NFD Topology Updater is now available The Node Feature Discovery (NFD) Topology Updater is a daemon responsible for examining allocated resources on a worker node. It accounts for resources that are available to be allocated to new pod on a per-zone basis, where a zone can be a Non-Uniform Memory Access (NUMA) node. See Using the NFD Topology Updater for more information. 1.3.21.5. Hyperthreading-aware CPU manager policy (Technology Preview) Hyperthreading-aware CPU manager policy in OpenShift Container Platform is now available without the need for extra tuning. The cluster administrator can enable this feature if required. Hyperthreads are abstracted by the hardware as logical processors. Hyperthreading allows a single physical processor to execute two heavyweight threads (processes) at the same time, dynamically sharing processor resources. 1.3.21.6. NUMA-aware scheduling with NUMA Resources Operator (Technology Preview) The default OpenShift Container Platform scheduler does not see individual non-uniform memory access (NUMA) zones in the compute node. This can lead to sub-optimal scheduling of latency-sensitive workloads. A new NUMA Resources Operator is available which deploys a NUMA-aware secondary scheduler. The NUMA-aware secondary scheduler makes scheduling decisions for workloads based on a complete picture of available NUMA zones in the cluster. This ensures that latency-sensitive workloads are processed in a single NUMA zone for maximum efficiency and performance. For more information, see About NUMA-aware scheduling . 1.3.21.7. Filtering custom resources during ZTP spoke cluster installation using SiteConfig filters You can now use filters to customize SiteConfig CRs to include or exclude other CRs for use in the installation phase of the zero touch provisioning (ZTP) GitOps pipeline. For more information, see Filtering custom resources using SiteConfig filters . 1.3.21.8. Disable chronyd in the PolicyGenTemplate CR for vDU use cases On nodes running RAN vDU applications, you must disable chronyd if you update to OpenShift Container Platform 4.10 from earlier versions. To disable chronyd , add the following line in the [service] section under .spec.profile.data of the TunedPerformancePatch.yaml file. The TunedPerformancePatch.yaml file is referenced in the group PolicyGenTemplate CR: [service] service.chronyd=stop,disable For more information, Recommended cluster configurations to run vDU applications . 1.3.22. Backup and restore 1.3.23. Developer experience 1.3.23.1. Pruning deployment replica sets (Technology Preview) This release introduces a Technology Preview flag --replica-sets to the oc adm prune deployments command. 
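As a hedged example of how the new flag might be combined with the existing pruning options, an invocation could look like the following; confirm the available options with oc adm prune deployments --help because the surrounding flags shown here are assumptions.

USD oc adm prune deployments --replica-sets=true --keep-complete=5 --keep-failed=1 --confirm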
By default, only replication controllers are pruned with the oc adm prune deployments command. When you set --replica-sets to true , replica sets are also included in the pruning process. For more information, see Pruning deployment resources . 1.3.24. Insights Operator 1.3.24.1. Importing simple content access certificates In OpenShift Container Platform 4.10, Insights Operator now imports your simple content access certificates from Red Hat OpenShift Cluster Manager by default. For more information, see Importing simple content access certificates with Insights Operator . 1.3.24.2. Insights Operator data collection enhancements To reduce the amount of data sent to Red Hat, Insights Operator only gathers information when certain conditions are met. For example, Insights Operator only gathers the Alertmanager logs when Alertmanager fails to send alert notifications. In OpenShift Container Platform 4.10, the Insights Operator collects the following additional information: (Conditional) The logs from pods where the KubePodCrashlooping and KubePodNotReady alerts are firing (Conditional) The Alertmanager logs when the AlertmanagerClusterFailedToSendAlerts or AlertmanagerFailedToSendAlerts alerts are firing Silenced alerts from Alertmanager The node logs from the journal unit (kubelet) The CostManagementMetricsConfig from clusters with costmanagement-metrics-operator installed The time series database status from the monitoring stack Prometheus instance Additional information about the OpenShift Container Platform scheduler With this additional information, Red Hat improves OpenShift Container Platform functionality and enhances Insights Advisor recommendations. 1.3.25. Authentication and authorization 1.3.25.1. Syncing group membership from OpenID Connect identity providers This release introduces support for synchronizing group membership from an OpenID Connect provider to OpenShift Container Platform upon user login. You can enable this by configuring the groups claim in the OpenShift Container Platform OpenID Connect identity provider configuration. For more information, see Sample OpenID Connect CRs . 1.3.25.2. Additional supported OIDC providers The Okta and Ping Identity OpenID Connect (OIDC) providers are now tested and supported with OpenShift Container Platform. For the full list of OIDC providers, see Supported OIDC providers . 1.3.25.3. oc commands now obtain credentials from Podman configuration locations Previously, oc commands that used the registry configuration, for example oc registry login or oc image commands, obtained credentials from Docker configuration locations. With OpenShift Container Platform 4.10, if a registry entry cannot be found in the default Docker configuration location, oc commands obtain the credentials from Podman configuration locations. You can set your preference to either docker or podman by using the REGISTRY_AUTH_PREFERENCE environment variable to prioritize the location. Users also have the option to use the REGISTRY_AUTH_FILE environment variable, which serves as an alternative to the existing --registry-config CLI flag. The REGISTRY_AUTH_FILE environment variable is also compatible with podman . 1.3.25.4. Support for Google Cloud Platform Workload Identity You can now use the Cloud Credential Operator (CCO) utility ccoctl to configure the CCO to use the Google Cloud Platform Workload Identity. 
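As a rough, hedged sketch of the workflow, a command along the following lines can be used to create the GCP Workload Identity resources from extracted CredentialsRequest objects; the subcommand and flag names are assumptions based on the ccoctl utility and should be checked against ccoctl gcp create-all --help.

USD ccoctl gcp create-all --name=<infra_name> --region=<gcp_region> --project=<gcp_project_id> --credentials-requests-dir=<path_to_credentials_requests>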
When the CCO is configured to use GCP Workload Identity, the in-cluster components can impersonate IAM service accounts by using short-term, limited-privilege security credentials. For more information, see Using manual mode with GCP Workload Identity . Note In OpenShift Container Platform 4.10.8, image registry support for using GCP Workload Identity was removed due to the discovery of an adverse impact to the image registry . To use the image registry on an OpenShift Container Platform 4.10.8 cluster that uses Workload Identity, you must configure the image registry to use long-lived credentials instead. With OpenShift Container Platform 4.10.21, support for using GCP Workload Identity with the image registry is restored. For more information about the status of this feature between OpenShift Container Platform 4.10.8 and 4.10.20, see the related Knowledgebase article . 1.4. Notable technical changes OpenShift Container Platform 4.10 introduces the following notable technical changes. TLS X.509 certificates must have a Subject Alternative Name X.509 certificates must have a properly set Subject Alternative Name field. If you update your cluster without this, you risk breaking your cluster or rendering it inaccessible. In older versions of OpenShift Container Platform, X.509 certificates worked without a Subject Alternative Name, so long as the Common Name field was set. This behavior was removed in OpenShift Container Platform 4.6 . In some cases, certificates without a Subject Alternative Name continued to work in OpenShift Container Platform 4.6, 4.7, 4.8, and 4.9. Because it uses Kubernetes 1.23, OpenShift Container Platform 4.10 does not allow this under any circumstances. Cloud controller managers for additional cloud providers The Kubernetes community plans to deprecate the Kubernetes controller manager in favor of using cloud controller managers to interact with underlying cloud platforms. As a result, there is no plan to add Kubernetes controller manager support for any new cloud platforms. The implementation that is added in this release of OpenShift Container Platform supports using cloud controller managers for Google Cloud Platform (GCP), VMware vSphere, IBM Cloud, and Alibaba Cloud as a Technology Preview . To learn more about the cloud controller manager, see the Kubernetes Cloud Controller Manager documentation . To manage the cloud controller manager and cloud node manager deployments and lifecycles, use the Cluster Cloud Controller Manager Operator. For more information, see the Cluster Cloud Controller Manager Operator entry in the Platform Operators reference . Operator SDK v1.16.0 OpenShift Container Platform 4.10 supports Operator SDK v1.16.0. See Installing the Operator SDK CLI to install or update to this latest version. Note Operator SDK v1.16.0 supports Kubernetes 1.22. Many deprecated v1beta1 APIs were removed in Kubernetes 1.22, including sigs.k8s.io/controller-runtime v0.10.0 and controller-gen v0.7 . This is a breaking change if you need to scaffold v1beta1 APIs for custom resource definitions (CRDs) or webhooks to publish your project into older cluster versions. For more information about changes introduced in Kubernetes 1.22, see Validating bundle manifests for APIs removed from Kubernetes 1.22 and Beta APIs removed from Kubernetes 1.22 in the OpenShift Container Platform 4.9 release notes.
If you have any Operator projects that were previously created or maintained with Operator SDK v1.10.1, see Upgrading projects for newer Operator SDK versions to ensure your projects are upgraded to maintain compatibility with Operator SDK v1.16.0. Changed Cluster Autoscaler alert severity Previously, the ClusterAutoscalerUnschedulablePods alert showed a severity of warning , which suggested it required developer intervention. This alert is informational and does not describe a problematic condition that requires intervention. With this release, the ClusterAutoscalerUnschedulablePods alert is reduced in severity from warning to info . ( BZ#2025230 ) Network Observability operator for observing network flows The Network Observability Operator is General Availability (GA) status in the 4.12 release of OpenShift Container Platform and is also supported in OpenShift Container Platform 4.10. For more information, see Network Observability . 1.5. Deprecated and removed features Some features available in releases have been deprecated or removed. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality deprecated and removed within OpenShift Container Platform 4.10, refer to the table below. Additional details for more fine-grained functionality that has been deprecated and removed are listed after the table. In the table, features are marked with the following statuses: GA : General Availability DEP : Deprecated REM : Removed Table 1.1. Deprecated and removed features tracker Feature OCP 4.8 OCP 4.9 OCP 4.10 Package manifest format (Operator Framework) REM REM REM SQLite database format for Operator catalogs GA DEP DEP oc adm catalog build REM REM REM --filter-by-os flag for oc adm catalog mirror REM REM REM v1beta1 CRDs DEP REM REM Docker Registry v1 API DEP REM REM Metering Operator DEP REM REM Scheduler policy DEP DEP REM ImageChangesInProgress condition for Cluster Samples Operator DEP DEP DEP MigrationInProgress condition for Cluster Samples Operator DEP DEP DEP Use of v1 without a group in apiVersion for OpenShift Container Platform resources DEP REM REM Use of dhclient in RHCOS DEP REM REM Cluster Loader DEP DEP REM Bring your own RHEL 7 compute machines DEP DEP REM lastTriggeredImageID field in the BuildConfig spec for Builds DEP REM REM Jenkins Operator DEP DEP REM HPA custom metrics adapter based on Prometheus REM REM REM vSphere 6.7 Update 2 or earlier GA DEP DEP Virtual hardware version 13 GA DEP DEP VMware ESXi 6.7 Update 3 or earlier GA DEP DEP Minting credentials for Microsoft Azure clusters GA GA REM Persistent storage using FlexVolume DEP Non-sidecar pod templates for Jenkins DEP Multicluster console (Technology Preview) REM 1.5.1. Deprecated features 1.5.1.1. IBM POWER8, IBM z13 all models, LinuxONE Emperor, LinuxONE Rockhopper, and x86_64 v1 architectures will be deprecated RHCOS functionality in IBM POWER8, IBM z13 all models, LinuxONE Emperor, LinuxONE Rockhopper, and AMD64 (x86_64) v1 CPU architectures will be deprecated in an upcoming release. Additional details for when support will discontinue for these architectures will be announced in a future release. Note AMD and Intel 64-bit architectures (x86-64-v2) will still be supported. 1.5.1.2. 
Default Docker configuration location deprecation Previously, oc commands that used a registry configuration would obtain credentials from the Docker configuration location, which was ~/.docker/config.json by default. This has been deprecated and will be replaced by a Podman configuration location in a future version of OpenShift Container Platform. 1.5.1.3. Empty file and stdout support deprecation in oc registry login Support for empty files using the --registry-config and --to flags in oc registry login has been deprecated. Support for - (standard output) has also been deprecated as an argument when using oc registry login . They will be removed in a future version of OpenShift Container Platform. 1.5.1.4. Non-sidecar pod templates for Jenkins deprecation In OpenShift Container Platform 4.10, the non-sidecar maven and nodejs pod templates for Jenkins are deprecated. These pod templates are planned for removal in a future release. Bug fixes and support are provided through the end of that future life cycle, after which no new feature enhancements are made. Instead, with this update, you can run Jenkins agents as sidecar containers. ( JKNS-257 ) 1.5.1.5. Third-party monitoring components user interface deprecation For the following monitoring stack components, access to third-party web user interfaces (UIs) is deprecated and is planned to be removed in a future OpenShift Container Platform release: Grafana Prometheus As an alternative, users can navigate to the Observe section of the OpenShift Container Platform web console to access dashboards and other UIs for platform components. 1.5.1.6. Persistent storage using FlexVolume In OpenShift Container Platform 4.10, persistent storage using FlexVolume is deprecated. This feature is still fully supported, but only important bugs will be fixed. However, it may be removed in a future OpenShift Container Platform release. Out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI driver. 1.5.1.7. RHEL 7 support for the OpenShift CLI (oc) is deprecated Support for using Red Hat Enterprise Linux (RHEL) 7 with the OpenShift CLI ( oc ) is deprecated and will be removed in a future OpenShift Container Platform release. 1.5.2. Removed features OpenShift Container Platform 4.10 removes the Jenkins Operator, which was a Technology Preview feature, from the OperatorHub page in the OpenShift Container Platform web console interface. Bug fixes and support are no longer available. Instead, you can continue to deploy Jenkins on OpenShift Container Platform by using the templates provided by the Samples Operator. Alternatively, you can install the Jenkins Helm Chart from the Developer Catalog by using the Helm page in the Developer perspective of the web console. 1.5.2.1. OpenShift CLI (oc) commands removed The following OpenShift CLI ( oc ) commands were removed with this release: oc adm completion oc adm config oc adm options 1.5.2.2. Scheduler policy removed Support for configuring a scheduler policy has been removed with this release. Use a scheduler profile instead to control how pods are scheduled onto nodes. 1.5.2.3. RHEL 7 support for compute machines removed Support for running Red Hat Enterprise Linux (RHEL) 7 compute machines in OpenShift Container Platform has been removed. 
If you prefer using RHEL compute machines, they must run on RHEL 8. You cannot upgrade RHEL 7 compute machines to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts must be removed. 1.5.2.4. Third-party monitoring component user interface access removed With this release, you can no longer access third-party web user interfaces (UIs) for the following monitoring stack components: Alertmanager Thanos Querier Thanos Ruler (if user workload monitoring is enabled) Instead, you can navigate to the Observe section of the OpenShift Container Platform web console to access metrics, alerting, and metrics targets UIs for platform components. 1.5.2.5. Support for minting credentials for Microsoft Azure removed Support for using the Cloud Credential Operator (CCO) in mint mode on Microsoft Azure clusters has been removed. This change is due to the planned retirement of the Azure AD Graph API by Microsoft on 30 June 2022 and is being backported to all supported versions of OpenShift Container Platform in z-stream updates. For previously installed Azure clusters that use mint mode, the CCO attempts to update existing secrets. If a secret contains the credentials of previously minted app registration service principals, it is updated with the contents of the secret in kube-system/azure-credentials . This behavior is similar to passthrough mode. For clusters with the credentials mode set to its default value of "" , the updated CCO automatically changes from operating in mint mode to operating in passthrough mode. If your cluster has the credentials mode explicitly set to mint mode ( "Mint" ), you must change the value to "" or "Passthrough" ; a sketch of making this change follows at the end of this section. Note In addition to the Contributor role that is required by mint mode, the modified app registration service principals now require the User Access Administrator role that is used for passthrough mode. While the Azure AD Graph API is still available, the CCO in upgraded versions of OpenShift Container Platform attempts to clean up previously minted app registration service principals. Upgrading your cluster before the Azure AD Graph API is retired might avoid the need to clean up resources manually. If the cluster is upgraded to a version of OpenShift Container Platform that no longer supports mint mode after the Azure AD Graph API is retired, the CCO sets an OrphanedCloudResource condition on the associated CredentialsRequest but does not treat the error as fatal. The condition includes a message similar to unable to clean up App Registration / Service Principal: <app_registration_name> . Cleanup after the Azure AD Graph API is retired requires manual intervention using the Azure CLI tool or the Azure web console to remove any remaining app registration service principals. To clean up resources manually, you must find and delete the affected resources. Using the Azure CLI tool, filter the app registration service principals that use the <app_registration_name> from an OrphanedCloudResource condition message by running the following command: $ az ad app list --filter "displayname eq '<app_registration_name>'" --query '[].objectId' Example output [ "038c2538-7c40-49f5-abe5-f59c59c29244" ] Delete the app registration service principal by running the following command: $ az ad app delete --id 038c2538-7c40-49f5-abe5-f59c59c29244 Note After cleaning up resources manually, the OrphanedCloudResource condition persists because the CCO cannot verify that the resources were cleaned up.
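The credentials mode for the CCO is set on the cluster-scoped CloudCredential resource named cluster . The following command is a minimal sketch, not an excerpt from the product documentation, of switching an explicitly configured "Mint" value to "Passthrough" ; verify the resource and field path against your cluster before applying it:
$ oc patch cloudcredential cluster --type=merge -p '{"spec":{"credentialsMode":"Passthrough"}}'
You can confirm the resulting mode by running $ oc get cloudcredential cluster -o jsonpath='{.spec.credentialsMode}' . Setting the value to "" instead lets the CCO select its mode based on the capabilities of the cloud credentials it finds.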
1.6. Bug fixes Bare Metal Hardware Provisioning Previously, using a MAC address to configure a provisioning network interface was unsupported when switching the provisioning network from Disabled to Managed . With this update, a provisioningMacAddresses field is added to the provisioning.metal3.io CRD. Use this field to identify the provisioning network interface using its MAC address rather than its name; a configuration sketch follows at the end of this group of entries. ( BZ#2000081 ) Previously, Ironic failed to attach virtual media images for provisioning SuperMicro X11/X12 servers because these models expect a non-standard device string, for example UsbCd , for CD-based virtual media. With this update, provisioning now overrides UsbCd on SuperMicro machines provisioned with CD-based virtual media. ( BZ#2009555 ) Previously, Ironic failed to attach virtual media images on SuperMicro X11/X12 servers due to overly restrictive URL validations on the BMCs of these machines. With this update, the filename parameter has now been removed from the URL if the virtual media image is backed by a local file. As a result, the parameter is still passed if the image is backed by an object store. ( BZ#2011626 ) Previously, the curl utility, used by the machine downloader image, did not support classless inter-domain routing (CIDR) with no_proxy . As a result, any CIDR in noProxy was ignored when downloading the Red Hat Enterprise Linux CoreOS (RHCOS) image. With this update, proxies are now removed from the environment before calling curl when appropriate. As a result, when downloading the machine image, any CIDR in no_proxy is no longer ignored. ( BZ#1990556 ) Previously, virtual media based deployments of OpenShift Container Platform were observed to intermittently fail on iDRAC hardware types. This occurred when outstanding Lifecycle Controller jobs clashed with virtual media configuration requests. With this update, virtual media deployment failure has been fixed by purging any Lifecycle Controller job while registering iDRAC hardware prior to deployment. ( BZ#1988879 ) Previously, users had to enter a long form of an IPv6 address in the installation configuration file, for example 2001:0db8:85a3:0000:0000:8a2e:0370:7334 . Ironic could not find an interface matching this IP address, causing the installation to fail. With this update, the IPv6 address supplied by the user is converted to a short form address, for example, 2001:db8:85a3::8a2e:370:7334 . As a result, installation is now successful. ( BZ#2010698 ) Before this update, when a Redfish system features a Settings URI, the Ironic provisioning service always attempts to use this URI to make changes to boot-related BIOS settings. However, bare-metal provisioning fails if the Baseboard Management Controller (BMC) features a Settings URI but does not support changing a particular BIOS setting by using this Settings URI. In OpenShift Container Platform 4.10 and later, if a system features a Settings URI, Ironic verifies that it can change a particular BIOS setting by using the Settings URI before proceeding. Otherwise, Ironic implements the change by using the System URI. This additional logic ensures that Ironic can apply boot-related BIOS setting changes and bare-metal provisioning can succeed. ( OCPBUGS-6886 )
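As an illustrative sketch of the provisioningMacAddresses field described in the first entry above (BZ#2000081), a Provisioning resource that identifies the provisioning interface by MAC address might look like the following. The MAC addresses and network values are placeholders, and the exact schema should be verified against the provisioning.metal3.io CRD on your cluster:
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Managed
  provisioningNetworkCIDR: 172.22.0.0/24
  # Identify the provisioning interface on each control plane host by its
  # MAC address instead of by its interface name.
  provisioningMacAddresses:
  - "52:54:00:aa:bb:01"
  - "52:54:00:aa:bb:02"
  - "52:54:00:aa:bb:03"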
Builds Before this update, if you created a build configuration containing an image change trigger in OpenShift Container Platform 4.7.x or earlier, the image change trigger might trigger builds continuously. This issue happened because, with the deprecation and removal of the lastTriggeredImageID field from the BuildConfig spec for Builds, the image change trigger controller stopped checking that field before instantiating builds. OpenShift Container Platform 4.8 introduced new fields in the status that the image change trigger controller needed to check, but did not. With this update, the image change trigger controller continuously checks the correct fields in the spec and status for the last triggered image ID. Now, it only triggers a build when necessary. ( BZ#2004203 ) Before this update, image references in Builds needed to specify the Red Hat registry name explicitly. With this update, if an image reference does not contain the registry, the Build searches the Red Hat registries and other well-known registries to locate the image. ( BZ#2011293 ) Jenkins Before this update, version 1.0.48 of the OpenShift Jenkins Sync Plugin introduced a NullPointerException error when Jenkins notified the plugin of new jobs that were not associated with an OpenShift Jenkins Pipeline Build Strategy Build Config. Ultimately, this error was benign because there was no BuildConfig object to associate with the incoming Jenkins Job. Core Jenkins ignored the exception in the plugin and moved on to the listener. However, a long stack trace showed up in the Jenkins log that distracted users. With this update, the plugin resolves the issue by making the proper checks to avoid this error and the subsequent stack trace. ( BZ#2030692 ) Before this update, performance improvements in version 1.0.48 of the OpenShift Sync Jenkins plugin incorrectly specified the labels accepted for ConfigMap and ImageStream objects intended to map into the Jenkins Kubernetes plugin pod templates. As a result, the plugin no longer imported pod templates from ConfigMap and ImageStream objects with a jenkins-agent label. This update corrects the accepted label specification so that the plugin imports pod templates from ConfigMap and ImageStream objects that have the jenkins-agent label. ( BZ#2034839 ) Cloud Compute Previously, editing a machine specification on Red Hat OpenStack Platform (RHOSP) would cause OpenShift Container Platform to attempt to delete and recreate the machine. As a result, this caused an unrecoverable loss of the node it was hosting. With this fix, any edits made to the machine specification after creation are ignored. ( BZ#1962066 ) Previously, on clusters that run on Red Hat OpenStack Platform (RHOSP), floating IP addresses were not reported for machine objects. As a result, certificate signing requests (CSRs) that the kubelet created were not accepted, preventing nodes from joining the cluster. All IP addresses are now reported for machine objects. ( BZ#2022627 ) Previously, the check to ensure that the AWS machine was not updated before requeueing was removed. Consequently, problems arose when the AWS machine's virtual machine had been removed, but its object was still available. If this happened, the AWS machine would requeue in an infinite loop and could not be deleted or updated. This update restores the check that was used to ensure that the AWS machine was not updated before requeueing. As a result, machines no longer requeue if they have been updated. ( BZ#2007802 ) Previously, modifying a selector changed the list of machines that a machine set observed. As a result, leaks could occur because the machine set lost track of machines it had already created.
This update ensures that the selector is immutable once created. As a result, machine sets now lists the correct machines. ( BZ#2005052 ) Previously, if a virtual machine template had snapshots, an incorrect disk size was picked due to an incorrect usage of the linkedClone operation. With this update, the default clone operation is changed to fullClone for all situations. linkedClone must now be specified by the user. ( BZ#2001008 ) Previously, the custom resource definition (CRD) schema requirements did not allow numeric values. Consequently, marshaling errors occurred during upgrades. This update corrects the schema requirements to allow both string and numeric values. As a result, marshaling errors are no longer reported by the API server conversion. ( BZ#1999425 ) Previously, if the Machine API Operator was moved, or the pods were deployed as a result of a name change, the MachineNotYetDeleted metric would reset for each monitored machine. This update changes the metric query to ignore the source pod label. As a result, the MachineNotYetDeleted metric now properly alerts in scenarios where the Machine API Operator pod has been renamed. ( BZ#1986237 ) Previously, egress IPs on vSphere were picked up by the vSphere cloud provider within the kubelet. These were unexpected by the certificate signing requests (CSR) approver. Consequently, nodes with egress IPs would not have their CSR renewals approved. This update allows the CSR approver to account for egress IPs. As a result, nodes with egress IPs on vSphere SDN clusters now continue to function and have valid CSR renewals. ( BZ#1860774 ) Previously, worker nodes failed to start, and the installation program failed to generate URL images due to the broken path defaulting for the disk image and incompatible changes in the Google Cloud Platform (GCP) SDK. As a result, the machine controller was unable to create machines. This fix repairs the URL images by changing the base path in the GCP SDK. ( BZ#2009111 ) Previously, the machine would freeze during the deletion process due to a lag in the vCenter's powerOff task. VMware showed the machine to be powered off, but OpenShift Container Platform reported it to be powered on, which resulted in the machine freezing during the deletion process. This update improves the powerOff task handling on vSphere to be checked before the task to delete from the database is created, which prevents the machine from freezing during the deletion process. ( BZ#2011668 ) After installing or updating OpenShift Container Platform, the value of the metrics showed one pending CSR after the last CSR was reconciled. This resulted in the metrics reporting one pending CSR when there should be no pending CSRs. This fix ensures the pending CSR count is always valid post-approval by updating the metrics at the end of each reconcile loop. ( BZ#2013528 ) Previously, AWS checked for credentials when the cloud-provider flag was set to empty string. The credentials were checked by making calls to the metadata service, even on non-AWS platforms. This caused latency in the ECR provider startup and AWS credential errors logged in all platforms, including non-AWS. This fix prevents the credentials check from making any requests to the metadata service to ensure that credential errors are no longer being logged. ( BZ#2015515 ) Previously, the Machine API sometimes reconciled a machine before AWS had communicated VM creation across its API. As a result, AWS reported the VM does not exist and the Machine API considered it failed. 
With this release, the Machine API waits until the AWS API has synched before trying to mark the machine as provisioned. ( BZ#2025767 ) Previously, a large volume of nodes created simultaneously on UPI clusters could lead to a large number of CSRs being generated. As a result, certificate renewals were not automated because the approver stops approving certificates when there are over 100 pending certificate requests. With this release, existing nodes are accounted for when calculating the approval cut-off and UPI clusters can now benefit from automated certificate renewal even with large scale refresh requests. ( BZ#2028019 ) Previously, the generated list of instance types embedded in Machine API controllers was out of date. Some of these instance types were unknown and could not be annotated for scale-from-zero requirements. With this release, the generated list is updated to include support for newer instance types. ( BZ#2040376 ) Previously, AWS Machine API controllers did not set the IOPS value for block devices other than the IO1 type, causing IOPS fields for GP3 block devices to be ignored. With this release, the IOPS is set on all supported block device types and users can set IOPS for block devices that are attached to the machine. ( BZ#2040504 ) Cloud Credential Operator Previously, when using the Cloud Credential Operator in manual mode on an Azure cluster, the Upgradeable status was not set to False . This behavior was different for other platforms. With this release, Azure clusters using the Cloud Credential Operator in manual mode have the Upgradeable status set to False . ( BZ#1976674 ) Previously, the now unnecessary controller-manager-service service resource that was created by the Cloud Credential Operator was still present. With this release, the Cluster Version Operator cleans it up. ( BZ#1977319 ) Previously, changes to the log level setting for the Cloud Credential Operator in the CredentialsRequest custom resource were ignored. With this release, logging verbosity can be controlled by editing the CredentialsRequest custom resource. ( BZ#1991770 ) Previously, the Cloud Credential Operator (CCO) pod restarted with a continuous error when AWS was the default secret annotator for Red Hat OpenStack Platform (RHOSP). This update fixes the default setting for the CCO pod and prevents the CCO pod from failing. ( BZ#1996624 ) Cluster Version Operator Previously, a pod might fail to start due to an invalid mount request that was not a part of the manifest. With this update, the Cluster Version Operator (CVO) removes any volumes and volume mounts from in-cluster resources that are not included in the manifest. This allows pods to start successfully. ( BZ#2002834 ) Previously, when monitoring certificates were rotated, the Cluster Version Operator (CVO) would log errors and monitoring would be unable to query metrics until the CVO pod was manually restarted. With this update, the CVO monitors the certificate files and automatically recreates the metrics connection whenever the certificate files change. ( BZ#2027342 ) Console Storage Plugin Previously, a loading prompt was not present while the persistent volumes (PVs) were being provisioned and the capacity was 0 TiB which created a confusing scenario. With this update, a loader is added for the loading state which provides details to the user if the PVs are still being provisioned or capacity is to be determined. It will also inform the user of any errors in the process. 
( BZ#1928285 ) Previously, the grammar was not correct in certain places and there were instances where translators were unable to interpret the context. This had a negative impact on readability. With this update, the grammar in various places is corrected, the storage classes for translators are itemized, and the overall readability is improved. ( BZ#1961391 ) Previously, when selecting a pool on the block pools page, the final Ready phase persisted after deletion. Consequently, the pool was shown in the Ready state even after deletion. This update redirects users to the Pools page and refreshes the pools after deletion. ( BZ#1981396 ) Domain Name System (DNS) Previously, the DNS Operator did not enable the prometheus plug-in in the server blocks for custom upstream resolvers. Consequently, CoreDNS did not report metrics for upstream resolvers and only reported metrics for the default server block. With this update, the DNS Operator was changed to enable the prometheus plugin in all server blocks. CoreDNS now reports Prometheus metrics for custom upstream resolvers. ( BZ#2020489 ) Previously, an upstream DNS that provided a response greater than 512 characters caused an application to fail. The application could not clone its repository from GitHub because the DNS name could not be resolved. With this update, the bufsize for KNI CoreDNS is set to 512 to avoid failed name resolutions, such as those for GitHub. ( BZ#1991067 ) When the DNS Operator reconciles its operands, the Operator gets the cluster DNS service object from the API to determine whether the Operator needs to create or update the service. If the service already exists, the Operator compares it with what the Operator expects to get to determine whether an update is needed. Kubernetes 1.22, on which OpenShift Container Platform 4.9 is based, introduced a new spec.internalTrafficPolicy API field for services. The Operator leaves this field empty when it creates the service, but the API sets a default value for this field. The Operator was observing this default value and trying to update the field back to the empty value. This caused the Operator's update logic to continuously try to revert the default value that the API set for the service's internal traffic policy. With this fix, when comparing services to determine whether an update is required, the Operator now treats the empty value and default value for spec.internalTrafficPolicy as equal. As a result, the Operator no longer spuriously tries to update the cluster DNS service when the API sets a default value for the service's spec.internalTrafficPolicy field. ( BZ#2002461 ) Previously, the DNS Operator did not enable the cache plugin for server blocks in the CoreDNS Corefile configuration map corresponding to entries in the spec.servers field of the dnses.operator.openshift.io/default object. As a result, CoreDNS did not cache responses from upstream resolvers that were configured using spec.servers . With this bug fix, the DNS Operator is changed to enable the cache plugin for all server blocks, using the same parameters that the Operator already configured for the default server block. CoreDNS now caches responses from all upstream resolvers. ( BZ#2006803 )
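The spec.servers entries referenced in these DNS fixes are configured on the default DNS operator object. As an illustrative sketch only, with placeholder zone names and upstream addresses, a custom upstream resolver that benefits from the caching and metrics changes described above might be defined as follows:
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: example-corp-resolver
    # Queries for this zone are forwarded to the listed upstream resolvers.
    zones:
    - example.corp
    forwardPlugin:
      upstreams:
      - 10.0.0.10
      - 10.0.0.11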
Image Registry Previously, the registry internally resolved docker.io references into registry-1.docker.io and used it to store credentials. As a result, credentials for docker.io images could not be located. With this update, the registry-1.docker.io hostname has been changed back to docker.io when searching for credentials. As a result, the registry can correctly find credentials for docker.io images. ( BZ#2024859 ) Previously, the image pruner job did not retry upon failure. As a result, a single failure could degrade the Image Registry Operator until the next time it ran. With this fix, temporary problems with the pruner do not degrade the Image Registry Operator. ( BZ#2051692 ) Previously, the Image Registry Operator was modifying objects from the informer. As a result, these objects could be concurrently modified by the informer and cause race conditions. With this fix, controllers and informers have different copies of the object and do not have race conditions. ( BZ#2028030 ) Previously, TestAWSFinalizerDeleteS3Bucket would fail because of an issue with the location of the configuration object in the Image Registry Operator. This update ensures that the configuration object is stored in the correct location. As a result, the Image Registry Operator no longer panics when running TestAWSFinalizerDeleteS3Bucket . ( BZ#2048443 ) Previously, error handling caused the access denied error to be output as authentication required . This bug resulted in incorrect error logs. Through Docker distribution error handling, the error output was changed from authentication required to access denied . Now the access denied error provides more precise error logs. ( BZ#1902456 ) Previously, the registry was immediately exiting on a shutdown request. As a result, the router did not have time to discover that the registry pod was gone and could send requests to it. With this fix, when the pod is deleted it stays active for a few extra seconds to give other components time to discover its deletion. Now, the router does not send requests to non-existent pods during upgrades, which no longer leads to disruptions. ( BZ#1972827 ) Previously, the registry proxied the response from the first available mirrored registry. When a mirror registry was available but did not have the requested data, pull-through did not try to use other mirrors even if they contained the required data. With this fix, pull-through tries other mirror registries if the first mirror replied with Not Found . Now, pull-through can discover data if it exists on any mirror registry. ( BZ#2008539 ) Image Streams Previously, the image policy admission plugin did not recognize deployment configurations, notably that stateful sets could be updated. As a result, image stream references stayed unresolved in deployment configurations when the resolve-names annotation was used. Now, the plugin is updated so that it resolves annotations in deployment configurations and stateful sets. As a result, image stream tags get resolved in created and edited deployment configurations. ( BZ#2000216 ) Previously, when global pull secrets were updated, existing API server pod pull secrets were not updated. Now, the mount point for the pull secret is changed from the /var/lib/kubelet/config.json file to the /var/lib/kubelet directory. As a result, the updated pull secret now appears in existing API server pods. ( BZ#1984592 ) Previously, the image admission plugin did not check annotations inside deployment configuration templates.
As a result, annotations inside deployment configuration templates could not be handled in replica controllers, and they were ignored. Now, the image admission plugin analyzes the template of deployment configurations. With this fix, the image admission plugin recognizes the annotations on the deployment configurations and on their templates. ( BZ#2032589 ) Installer The OpenShift Container Platform Baremetal IPI installer previously used the first nodes defined under hosts in install-config as control plane nodes rather than filtering for the hosts with the master role. The role of master and worker nodes is now recognized when defined. ( BZ#2003113 ) Before this update, it was possible to set host bits in the provisioning network CIDR. This could cause the provisioning IP to differ from what was expected leading to conflict with other IP addresses on the provisioning network. With this update, validation ensures that the provisioning network CIDR cannot contain host bits. If a provisioning Network CIDR includes host bits, the installation program stops and logs an error message. ( BZ#2006291 ) Previously, pre-flight checks did not account for Red Hat OpenStack Platform (RHOSP) resource utilization. As a result, those checks failed with an incorrect error message when utilization, rather than quota, impeded installation. Pre-flight checks now process both RHOSP quota and utilization. The checks fail with correct error messages if the quota is sufficient but resources are not. ( BZ#2001317 ) Before this update, the oVirt Driver could specify ReadOnlyMany (ROX) and ReadWriteMany (RWX) access modes when creating a PVC from a configuration file. This caused an error because the driver does not support shared disks and, as a result, could not support these access modes. With this update, the access mode has been limited to single node access. The system prevents any attempt to specify ROX or RWX when creating PVC and logs an error message. ( BZ#1882983 ) Previously, disk uploads in the Terraform provider were not handled properly. As a result, the OpenShift Container Platform installation program failed. With this update, disk upload handling has been fixed, and disk uploads succeed. ( BZ#1917893 ) Previously, when installing a Microsoft Azure cluster with a special size, the installation program would check if the total number of virtual CPUs (vCPU) met the minimum resource requirement to deploy the cluster. Consequently, this could cause an install error. This update changes the check the installation program makes from the total number of vCPUs to the number of vCPUs available. As a result, a concise error message is given that lets the Operator know that the virtual machine size does not meet the minimum resource requirements. ( BZ#2025788 ) Previously, RAM validation for Red Hat OpenStack Platform (RHOSP) checked for values using a wrong unit, and as a result the validation accepted flavors that did not meet minimum RAM requirements. With this fix, RAM validation now rejects flavors with insufficient RAM. ( BZ#2009699 ) Previously, OpenShift Container Platform control plane nodes were missing Ingress security group rules when they were schedulable and deployed on Red Hat OpenStack Platform (RHOSP). As a result, OpenShift Container Platform deployments on RHOSP failed for compact clusters with no dedicated workers. This fix adds Ingress security group rules on Red Hat OpenStack Platform (RHOSP) when control plane nodes are schedulable. 
Now, you can deploy compact three-node clusters on RHOSP. ( BZ#1955544 ) Previously, if you specified an invalid AWS region, the installation program continued to try to validate availability zones. This caused the installation program to become unresponsive for 60 minutes before timing out. The installation program now verifies the AWS region and service endpoints before availability zones, which reduces the amount of time the installation program takes to report the error. ( BZ#2019977 ) Previously, you could not install a cluster to VMware vSphere if the vCenter hostname began with a number. The installation program has been updated and no longer treats this type of hostname as invalid. Now, a cluster deploys successfully when the vCenter hostname begins with a number. ( BZ#2021607 ) Previously, if you defined custom IAM roles when deploying an AWS cluster, you might have to manually remove bootstrap instance profiles after uninstalling the cluster. Intermittently, the installation program did not remove bootstrap instance profiles. The installation program has been updated, and all machine instance profiles are removed when the cluster is uninstalled. ( BZ#2028695 ) Previously, the BMC driver IPMI was not supported for a secure UEFI boot. This resulted in an unsuccessful boot. This fix adds a validation check to ensure that UEFISecureBoot mode is not used with bare-metal drivers. As a result, a secure UEFI boot is successful. ( BZ#2011893 ) With this update, the 4.8 UPI template is updated from version 3.1.0 to 3.2.0 to match the Ignition version. ( BZ#1949672 ) Previously, when asked to mirror the contents of the base registry, the OpenShift Container Platform installation program would exit with a validation error, citing incorrect install-config file values for imageContentSources . With this update, the installation program now allows imageContentSources values to specify base registry names and the installation program no longer exits when specifying a base registry name. ( BZ#1960378 ) Previously, the UPI ARM templates were attaching an SSH key to the virtual machine (VM) instances created. As a result, the creation of the VMs failed when the SSH key provided by the user was the ed25519 type. With this update, the creation of the VMs succeeds regardless of the type of the SSH key provided by the user. ( BZ#1968364 ) After successfully creating an aws_vpc_dhcp_options_association resource, AWS might still report that the resource does not exist.
Consequently, the AWS Terraform provider fails the installation. With this update, you can retry the query of the aws_vpc_dhcp_options_association resource for a period of time after creation until AWS reports that the resource exists. As a result, installations succeed despite AWS reporting that the aws_vpc_dhcp_options_association resource does not exist. ( BZ#2032521 ) Previously, when installing OpenShift Container Platform on AWS with local zones enabled, the installation program could create some resources on a local zone rather than an availability zone. This caused the installation program to fail because load balancers cannot run on local zones. With this fix, the installation program ignores local zones and only considers availability zones when installing cluster components. ( BZ#1997059 ) Previously, terraform could attempt to upload the bootstrap ignition configuration file to Azure before it had finished creating the configuration file. If the upload started before the local file was created, the installation would fail. With this fix, terraform uploads the ignition configuration file directly to Azure rather than creating a local copy first. ( BZ#2004313 ) Previously, a race condition could occur if the cluster-bootstrap and Cluster Version Operator components attempted to write a manifest for the same resource to the Kubernetes API at the same time. This could result in the Authentication resource being overwritten by a default copy, which removed any customizations made to that resource. With this fix, the Cluster Version Operator has been blocked from overwriting the manifests that come from the installation program. This prevents any user-specified customizations to the Authentication resource from being overwritten. ( BZ#2008119 ) Previously, when installing OpenShift Container Platform on AWS, the installation program created the bootstrap machine using the m5.large instance type. This caused the installation to fail in regions where that instance type is not available. With this fix, the bootstrap machine uses the same instance type as the control plane machines. ( BZ#2016955 ) Previously, when installing OpenShift Container Platform on AWS, the installation program did not recognize EC2 G and Intel Virtualization Technology (VT) instances and defaulted them to X instances. This caused incorrect instance quotas to be applied to these instances. With this fix, the installation program recognizes EC2 G and VT instances and applies the correct instance quotas. ( BZ#2017874 ) Kubernetes API server Kubernetes Scheduler Before this update, upgrading to the current release did not set the correct weights for the TaintandToleration , NodeAffinity , and InterPodAffinity parameters. This update resolves the issue so that upgrading correctly sets the weights for TaintandToleration to 3 , NodeAffinity to 2 , and InterPodAffinity to 2 . ( BZ#2039414 ) In OpenShift Container Platform 4.10, code for serving insecure metrics is removed from the kube-scheduler code base. Now, metrics are served only through a secure server. Bug fixes and support are provided through the end of a future life cycle. After which, no new feature enhancements are made. ( BZ#1889488 ) Machine Config Operator Previously, the Machine Config Operator (MCO) stored pending configuration changes to the disk before operating system (OS) changes were applied. 
As a result, in situations such as power loss, the MCO assumed OS changes had already been applied on restart, and validation skipped over changes such as kargs and kernel-rt . With this update, configuration changes to disk are stored after OS changes are applied. Now, if power is lost during the configuration application, the MCO knows it must reattempt the configuration application on restart. ( BZ#1916169 ) Previously, an old version of the Kubernetes client library in the baremetal-runtimecfg project prevented the timely closing of client connections following a VIP failover. This could result in long delays for monitor containers that rely on the API. This update allows the timely closing of client connections following a VIP failover. ( BZ#1995021 ) Previously, when updating SSH keys, the Machine Config Operator (MCO) changed the owner and group of the authorized_keys file to root . This update ensures that the MCO preserves core as the owner and group when it updates the authorized_keys file. ( BZ#1956739 ) Previously, a warning message sent by the clone_slave_connection function was incorrectly stored in a new_uuid variable, which is intended to store only the connection's UUID. As a result, nmcli commands that include the new_uuid variable were failing due to the incorrect value being stored in the new_uuid variable. With this fix, the clone_slave_connection function warning message is redirected to stderr . Now, nmcli commands that reference the new_uuid variable do not fail. ( BZ#2022646 ) Previously, in clusters that use Stateless Address AutoConfiguration (SLAAC), the Ironic addr-gen-mode parameter was not being persisted to the OVNKubernetes bridge. As a result, the IPv6 addresses could change when the bridge was created. This broke the cluster because node IP changes are unsupported. This fix persists the addr-gen-mode parameter when creating the bridge. The IP address is now consistent throughout the deployment process.
( BZ#1990625 ) Previously, if a machine config included a compressed file with the spec.config.storage.files.contents.compression parameter set to gzip , the Machine Config Daemon (MCD) incorrectly wrote the compressed file to disk without extracting it. With this fix, the MCD now extracts a compressed file when the compression parameter is set to gzip . ( BZ#1970218 ) Previously, systemd units were cleaned up only when completely removed. As a result, systemd units could not be unmasked by using a machine config because the masks were not being removed unless the systemd unit was completely removed. With this fix, when you configure a systemd unit as mask: false in a machine config, any existing masks are removed. As a result, systemd units can now be unmasked. ( BZ#1966445 ) Management Console Previously, the OperatorHub category and card links did not include valid href attributes. Consequently, the OperatorHub category and card links could not be opened in a new tab. This update adds valid href attributes to the OperatorHub category and card links. As a result, the OperatorHub and its card links can be opened in new tabs. ( BZ#2013127 ) Previously, on the Operand Details page, a special case was created where the conditions table for the status.conditions property always rendered before all other tables. Consequently, the status.conditions table did not follow the ordering rules of descriptors, which caused unexpected behavior when users tried to change the order of the tables. This update removes the special case for status.conditions and only defaults to rendering it first if no descriptor is defined for that property. As a result, the table for status.condition is rendered according to descriptor ordering rules when a descriptor is defined on that property. ( BZ#2014488 ) Previously, the Resource Details page metrics tab was querying the cluster-scoped Thanos endpoint. Consequently, users without authorization for this endpoint would receive a 401 response for all queries. With this update, the metrics tab now uses the Thanos tenancy endpoints, and redundant namespace query arguments are removed. As a result, users with the correct role-based access control (RBAC) permissions can now see data in the metrics tab of the Resource Details page. ( BZ#2015806 ) Previously, when an Operator added an API to an existing API group, it did not trigger API discovery. Consequently, new APIs were not seen by the front end until the page was refreshed. This update makes APIs added by Operators viewable by the front end without a page refresh. ( BZ#1815189 ) Previously, in the Red Hat OpenShift Cluster Manager for Red Hat OpenStack Platform (RHOSP), the control plane was not translated into simplified Chinese. As a result, naming differed from OpenShift Container Platform documentation. This update fixes the translation issue in the Red Hat OpenShift Cluster Manager. ( BZ#1982063 ) Previously, filtering of virtual tables in the Red Hat OpenShift Cluster Manager was broken. Consequently, not all of the available nodes appeared following a search. This update includes new virtual table logic that fixes the filtering issue in the Red Hat OpenShift Cluster Manager. ( BZ#1990255 ) Monitoring Previously, during OpenShift Container Platform upgrades, the Prometheus service could become unavailable because either two Prometheus pods were located on the same node or the two nodes running the pods rebooted during the same interval.
This situation was possible because the Prometheus pods had soft anti-affinity rules regarding node placement and no PodDisruptionBudget resources provisioned. Consequently, metrics were not collected and rules were not evaluated over a period of time. To fix this issue, the Cluster Monitoring Operator (CMO) now configures hard anti-affinity rules to ensure that the two Prometheus pods are scheduled on different nodes. The CMO also provisions PodDisruptionBudget resources to ensure that at least one Prometheus pod is always running. As a result, during upgrades, the nodes now reboot in sequence to ensure that at least one Prometheus pod is always running. ( BZ#1933847 ) Previously, the Thanos Ruler service would become unavailable when the node that contains the two Thanos Ruler pods experienced an outage. This situation occurred because the Thanos Ruler pods had only soft anti-affinity rules regarding node placement. Consequently, user-defined rules would not be evaluated until the node came back online. With this release, the Cluster Monitoring Operator (CMO) now configures hard anti-affinity rules to ensure that the two Thanos Ruler pods are scheduled on different nodes. As a result, a single-node outage no longer creates a gap in user-defined rule evaluation. ( BZ#1955490 ) Previously, the Prometheus service would become unavailable when the two Prometheus pods were located on the same node and that node experienced an outage. This situation occurred because the Prometheus pods had only soft anti-affinity rules regarding node placement. Consequently, metrics would not be collected, and rules would not be evaluated until the node came back online. With this release, the Cluster Monitoring Operator configures hard anti-affinity rules to ensure that the two Prometheus pods are scheduled on different nodes. As a result, Prometheus pods are now scheduled on different nodes, and a single node outage no longer creates a gap in monitoring.( BZ#1949262 ) Previously, during OpenShift Container Platform patch upgrades, the Alertmanager service might become unavailable because either the three Alertmanager pods were located on the same node or the nodes running the Alertmanager pods happened to reboot at the same time. This situation was possible because the Alertmanager pods had soft anti-affinity rules regarding node placement and no PodDisruptionBudget provisioned. This release enables hard anti-affinity rules and PodDisruptionBudget resources to ensure no downtime during patch upgrades for the Alertmanager and other monitoring components. ( BZ#1955489 ) Previously, a false positive NodeFilesystemSpaceFillingUp alert was triggered when the file system space was occupied by many Docker images. For this release, the threshold to fire the NodeFilesystemSpaceFillingUp warning alert is now reduced to 20% space available, rather than 40%, which stops the false positive alert from firing. ( BZ#1987263 ) Previously, alerts for the Prometheus Operator component did not apply to the Prometheus Operator that runs the openshift-user-workload-monitoring namespace when user-defined monitoring is enabled. Consequently, no alerts fired when the Prometheus Operator that manages the openshift-user-workload-monitoring namespace encountered issues. With this release, alerts have been modified to monitor both the openshift-monitoring and openshift-user-workload-monitoring namespaces. 
As a result, cluster administrators receive alert notifications when the Prometheus Operator that manages user-defined monitoring encounters issues. ( BZ#2001566 ) Previously, if the number of DaemonSet pods for the node-exporter agent was not equal to the number of nodes in the cluster, the Cluster Monitoring Operator (CMO) would report a condition of degraded . This issue would occur when one of the nodes was not in the ready condition. This release now verifies that the number of DaemonSet pods for the node-exporter agent is not less than the number of ready nodes in the cluster. This process ensures that a node-exporter pod is running on every active node. As a result, the CMO will not report a degraded condition if one of the nodes is not in a ready state. ( BZ#2004051 ) This release fixes an issue in which some pods in the monitoring stack would start before TLS certificate-related resources were present, which resulted in failures and restarts. ( BZ#2016352 ) Previously, if reporting metrics failed due to reaching the configured sample limit, the metrics target would still appear with a status of Up in the web console UI even though the metrics were missing. With this release, Prometheus bypasses the sample limit setting for reporting metrics, and the metrics now appear regardless of the sample limit setting. ( BZ#2034192 ) Networking When using the OVN-Kubernetes network provider in OpenShift Container Platform versions prior to 4.8, the node routing table was used for routing decisions. In newer versions of OpenShift Container Platform, the host routing table is bypassed. In this release, you can now specify whether you want to use or bypass the host kernel networking stack for traffic routing decisions; a configuration sketch follows at the end of this group of fixes. ( BZ#1996108 ) Previously, when Kuryr was used in a restricted installation with proxy, the Cluster Network Operator was not enforcing usage of the proxy to allow communication with the Red Hat OpenStack Platform (RHOSP) API. Consequently, cluster installation did not progress. With this update, the Cluster Network Operator can communicate with the RHOSP API through the proxy. As a result, installation now succeeds. ( BZ#1985486 ) Before this update, the SRIOV webhook blocked the creation of network policies on OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) environments. With this update, the SRIOV webhook reads and validates the RHOSP metadata and can now be used to create network policies. ( BZ#2016334 ) Previously, the MachineConfig object could not be updated because the SRIOV Operator did not pause the MachineConfig pool object. With this update, the SRIOV Operator pauses the relevant machine config pool before running any configuration requiring reboot. ( BZ#2021151 ) Previously, there were timing issues with keepalived that resulted in its termination when it should have been running. This update prevents multiple keepalived commands from being sent in a short period of time. As a result, timing issues are no longer a problem and keepalived runs continuously. ( BZ#2022050 )
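The routing behavior described in the OVN-Kubernetes entry above (BZ#1996108) is controlled through the gateway configuration of the cluster Network operator. The following command is a rough sketch rather than an authoritative procedure, and the field path should be verified against the 4.10 networking documentation before use; it asks OVN-Kubernetes to route pod egress traffic through the host kernel networking stack:
$ oc patch networks.operator.openshift.io cluster --type=merge -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'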
Previously, pods that used secondary interfaces with IP addresses provided by the Whereabouts Container Network Interface (CNI) plugin might get stuck in the ContainerCreating state because of IP address exhaustion. Now, Whereabouts properly accounts for released IP addresses from cluster events, such as reboots, that previously were not tracked. ( BZ#1914053 ) Previously, when using the OpenShift SDN cluster network provider, idled services used an increasing amount of CPU to un-idle services. In this release, the idling code for kube-proxy is optimized to reduce CPU utilization for service idling. ( BZ#1966521 ) Previously, when using the OVN-Kubernetes cluster network provider, the presence of any unknown field in an internal configuration map could cause the OVN-Kubernetes pods to fail to start during a cluster upgrade. Now the presence of unknown fields causes a warning, rather than a failure. As a result, the OVN-Kubernetes pods now successfully start during a cluster upgrade. ( BZ#1988440 ) Previously, the CRI-O runtime engine passed pod UIDs by using the K8S_POD_UID variable. But when pods were deleted and recreated at the same time that Multus was setting up networking for the deleted pod's sandbox, this method resulted in additional metadata and unnecessary processing. In this update, Multus handles pod UIDs, and unnecessary metadata processing is avoided. ( BZ#2017882 ) Previously, in deployments of OpenShift on a single node, default settings for the SR-IOV Network Operator prevented users from making certain modifications to nodes. By default, after configuration changes are applied, affected nodes are drained and then restarted with the new configuration. This behavior does not work when there is only one node. In this update, when you install the SR-IOV Network Operator in a single-node deployment, the Operator changes its configuration so the .spec.disableDrain field is set to true . Users can now apply configuration changes successfully in single-node deployments. ( BZ#2021151 ) Client-go versions 1.20 and earlier did not provide a sufficient mechanism for retrying requests to the Kubernetes API. As a result, requests to the Kubernetes API were not retried adequately. This update fixes the problem by introducing client-go 1.22. ( BZ#2052062 ) Node Previously, network, IPC, and UTS namespace resources managed by CRI-O were only freed when the Kubelet removed stopped pods. With this update, the Kubelet frees these resources when the pods are stopped. ( BZ#2003193 ) Previously, when logging into a worker node, messages might appear indicating a systemd-coredump service failure. This was due to the unnecessary inclusion of the system-systemd namespace for containers. A filter now prevents this namespace from impacting the workflow. ( BZ#1978528 ) Previously, when clusters were restarted, the status of terminated pods might have been reset to Running , which would result in an error. This has been corrected and now all terminated pods remain terminated and active pods reflect their correct status. ( BZ#1997478 ) Previously, certain stop signals were ignored in OpenShift Container Platform, causing services in the container to continue running.
With an update to the signal parsing library, all stop signals are now respected. ( BZ#2000877 ) Previously, pod namespaces managed by CRI-O, for example network, IPC, and UTS, were not unmounted when the pod was removed. This resulted in leakage, driving the Open vSwitch CPU to 100%, which caused pod latency and other performance impacts. This has been resolved and pod namespaces are unmounted when removed. ( BZ#2003193 ) OpenShift CLI (oc) Previously, due to the increasing number of custom resource definitions (CRD) installed in the cluster, requests for API discovery were limited by client-side throttling restrictions. Now, both the request limit and QPS values have been increased, and client-side throttling should appear less frequently. ( BZ#2042059 ) Previously, some mirror requests did not have the user agent string set correctly, so the default Go user agent string was used instead for oc . The user agent string is now set correctly for all mirror requests, and the expected oc user agent string is now sent to registries. ( BZ#1987257 ) Previously, oc debug assumed that it was always targeting Linux-based containers by trying to run a Bash shell, and if Bash was not present in the container, it attempted to debug as a Windows container. The oc debug command now uses pod selectors to determine the operating system of the containers and now works properly on both Linux and Windows-based containers. ( BZ#1990014 ) Previously, the --dry-run flag was not working properly for several oc set subcommands, so --dry-run=server was performing updates to resources rather than performing a dry run. The --dry-run flags are now working properly to perform dry runs on the oc set subcommands. ( BZ#2035393 ) OpenShift containers Previously, a container using SELinux could not read /var/log/containers log files due to a missing policy. This update makes all log files in /var/log accessible, including those accessed through symlinks. ( BZ#2005997 ) OpenShift Controller Manager Previously, the openshift_apps_deploymentconfigs_last_failed_rollout_time metric improperly set the namespace label as the value of the exported_namespace label. The openshift_apps_deploymentconfigs_last_failed_rollout_time metric now has the correct namespace label set. ( BZ#2012770 ) Operator Lifecycle Manager (OLM) Before this update, default catalog sources for the marketplace-operator did not tolerate tainted nodes and the CatalogSource pod would only have the default settings for tolerations, nodeSelector , and priorityClassName . With this update, the CatalogSource specification now includes the optional spec.grpcPodConfig field that can override tolerations, nodeSelector , and priorityClassName for the pod; a configuration sketch follows below. ( BZ#1927478 ) Before this update, the csv_succeeded metric would be lost when the OLM Operator was restarted. With this update, the csv_succeeded metric is emitted at the beginning of the OLM Operator's startup logic. ( BZ#1927478 ) Before this update, the oc adm catalog mirror command did not set minimum and maximum values for the --max-icsp-size flag. As a result, the field accepted values that were less than zero or were too large. With this update, values are limited to sizes greater than zero and less than 250001. Values outside of this range fail validation. ( BZ#1972962 )
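For the spec.grpcPodConfig field described in the first OLM entry above (BZ#1927478), a catalog source that overrides scheduling for its registry pod might look like the following sketch. The catalog image, node selector label, toleration, and priority class are placeholders rather than values from the product documentation:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: example-catalog
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  image: quay.io/example/catalog-index:latest
  grpcPodConfig:
    # Pin the catalog pod to labeled nodes and allow it onto tainted nodes.
    nodeSelector:
      node-role.kubernetes.io/infra: ""
    tolerations:
    - key: node-role.kubernetes.io/infra
      operator: Exists
      effect: NoSchedule
    priorityClassName: system-cluster-critical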
As a result, images were not mirrored to disconnected clusters unless specified in the relatedImages field of the ClusterServiceVersion (CSV). With this update, the opm render command adds the CSV Operator images to the relatedImages file when the file-based catalog bundle image is rendered. The images necessary for Operator deployment are now mirrored to disconnected clusters even if they are not listed in the relatedImages field of the CSV. ( BZ#2002075 ) Before this update, it could take up to 15 minutes for Operators to perform skipRange updates. This was a known issue that could be resolved if cluster administrators deleted the catalog-operator pod in the openshift-operator-lifecycle-manager namespace. This caused the pod to be automatically recreated and triggered the skipRange upgrade. With this update, obsolete API calls have been fixed in Operator Lifecycle Manager (OLM), and skipRange updates trigger immediately. ( BZ#2002276 ) Occasionally, update events on clusters would happen at the same time that Operator Lifecycle Manager (OLM) modified an object from the lister cache. This caused concurrent map writes. This fix updates OLM so it no longer modifies objects retrieved from the lister cache. Instead, OLM modifies a copy of the object where applicable. As a result, OLM no longer experiences concurrent map writes. ( BZ#2003164 ) Previously, Operator Lifecycle Manager (OLM) could not establish gRPC connections to catalog source pods that were only reachable through a proxy. If a catalog source pod was behind a proxy, OLM could not connect to the proxy and the hosted Operator content was unavailable for installation. This bug fix introduces a GRPC_PROXY environment variable that defines a proxy that OLM uses to establish connections to gRPC catalog sources. As a result, OLM can now be configured to use a proxy when connecting to gRPC catalog sources. ( BZ#2011927 ) Previously, skipped bundles were not verified to be members of the same package. Bundles could skip across packages, which broke upgrade chains. This bug fix adds validation to ensure skipped bundles are in the same package. As a result, no bundle can skip bundles in another package, and upgrade graphs no longer break across packages. ( BZ#2017327 ) In the CatalogSource object, the RegistryServiceStatus field stores service information that is used to generate an address that Operator Lifecycle Manager (OLM) relies on to establish a connection with the associated pod. If the RegistryServiceStatus field is not nil and is missing the namespace, name, and port information for its service, OLM is unable to recover until the associated pod has an invalid image or spec. With this bug fix, when reconciling a catalog source, OLM now ensures that the RegistryServiceStatus field of the CatalogSource object is valid and updates its status to reflect the change. Additionally, this address is stored within the status of the catalog source in the status.GRPCConnectionState.Address field. If the address changes, OLM updates this field to reflect the new address. As a result, the .status.connectionState.address field of a catalog source should no longer be nil. ( BZ#2026343 ) OpenShift API server OpenShift Update Service Red Hat Enterprise Linux CoreOS (RHCOS) Previously, when the RHCOS live ISO added a UEFI boot entry for itself, it assumed the existing UEFI boot entry IDs were consecutive, thereby causing the live ISO to fail in the UEFI firmware when booting on systems with non-consecutive boot entry IDS. 
With this fix, the RHCOS live ISO no longer adds a UEFI boot entry for itself and the ISO boots successfully. ( BZ#2006690 ) To help you determine whether a user-provided image was already booted, information has been added on the terminal console describing when the machine was provisioned through Ignition and whether a user Ignition configuration was provided. This allows you to verify that Ignition ran when you expected it to. ( BZ#2016004 ) Previously, when reusing an existing statically keyed LUKS volume during provisioning, the encryption key was not correctly written to disk and Ignition would fail with a "missing persisted keyfile" error. With this fix, Ignition now correctly writes keys for reused LUKS volumes so that existing statically keyed LUKS volumes can be reused during provisioning. ( BZ#2043296 ) Previously, ostree-finalize-staged.service failed while upgrading a Red Hat Enterprise Linux CoreOS (RHCOS) node to 4.6.17. To fix this, the sysroot code now ignores any irregular or non-symlink files in /etc . ( BZ#1945274 ) Previously, initramfs files were missing udev rules for by-id symlinks of attached SCSI devices. Because of this, Ignition configuration files that referenced these symlinks would result in a failed boot of the installed system. With this update, the 63-scsi-sg3_symlink.rules for SCSI rules are added in dracut. ( BZ#1990506 ) Previously, on bare-metal machines, a race condition occurred between system-rfkill.service and ostree-remount.service . Consequently, the ostree-remount service failed and the node operating system froze during the boot process. With this update, the /sysroot/ directory is now read-only. As a result, the issue no longer occurs. ( BZ#1992618 ) Previously, Red Hat Enterprise Linux CoreOS (RHCOS) live ISO boots added a UEFI boot entry, prompting a reboot on systems with a TPM. With this update, the RHCOS live ISO no longer adds a UEFI boot entry so the ISO does not initiate a reboot after first boot. ( BZ#2004449 ) Performance Addon Operator The spec.cpu.reserved` flag might not be correctly set by default if spec.cpu.isolated is the only parameter defined in PerformanceProfile . You must set the settings for both spec.cpu.reserved and spec.cpu.isolated in the PerformanceProfile . The sets must not overlap and the sum of all CPUs mentioned must cover all CPUs expected by the workers in the target pool. ( BZ#1986681 ) Previously, the oc adm must-gather tool failed to collect node data if the gather-sysinfo binary was missing in the image. This was caused by a missing COPY statement in the Dockerfile. To avoid this issue, you must add the necessary COPY statements to the Dockerfile to generate and copy the binaries. ( BZ#2021036 ) Previously, the Performance Addon Operator downloaded its image from the registry without checking whether it was available on the CRI-O cache. Consequently, the Performance Addon Operator failed to start if it could not reach the registry, or if the download timed out. With this update, the Operator only downloads its image from the registry if it cannot pull the image from the CRI-O cache. ( BZ#2021202 ) When upgrading OpenShift Container Platform to version 4.10, any comment ( #comment ) in the tuned profile that does not start at the beginning of the line causes a parsing error. Performance Addon Operator issues can be solved by upgrading it to the same level (4.10) as OpenShift Container Platform. 
Comment-related errors can be worked around by putting all comments on a single line, with the # character at the start of the line. ( BZ#2059934 ) Routing Previously, if the cluster administrator provided a default ingress certificate that was missing the newline character for the last line, the OpenShift Container Platform router would write out a corrupt PEM file for HAProxy. Now, it writes out a valid PEM file even if the input is missing a newline character. ( BZ#1894431 ) Previously, a route created where the combined name and namespace for the DNS segment was greater than 63 characters long would be rejected. This expected behavior could cause problems integrating with upgraded versions of OpenShift Container Platform. Now, an annotation allows non-conformant DNS hostnames. With AllowNonDNSCompliantHostAnnotation set to true , the non-conformant DNS hostname, or one longer than 63 characters, is allowed. ( BZ#1964112 ) Previously, the Cluster Ingress Operator would not create wildcard DNS records for Ingress Controllers when the cluster's ControlPlaneTopology was set to External . In Hypershift clusters where the ControlPlaneTopology was set to External and the Platform was AWS, the Cluster Ingress Operator never became available. This update limits the disabling of DNS updates to cases where the ControlPlaneTopology is External and the platform is IBM Cloud. As a result, wildcard DNS entries are created for Hypershift clusters running on AWS. ( BZ#2011972 ) Previously, the cluster ingress router was blocked from working because the Ingress Operator failed to configure a wildcard DNS record for the cluster ingress router on Azure Stack Hub IPI. With this fix, the Ingress Operator now uses the configured ARM endpoint to configure DNS on Azure Stack Hub IPI. As a result, the cluster ingress router now works properly. ( BZ#2032566 ) Previously, the cluster-wide proxy configuration could not accept IPv6 addresses for the noProxy setting. Consequently, it was impossible to install a cluster whose configuration included IPv6 addresses in the noProxy setting. With this update, the Cluster Network Operator is now able to parse IPv6 addresses for the noProxy setting of the cluster-wide proxy resource. As a result, it is now possible to exclude IPv6 addresses for the noProxy setting. ( BZ#1939435 ) Before OpenShift Container Platform 4.8, the IngressController API did not have any subfields under the status.endpointPublishingStrategy.hostNetwork and status.endpointPublishingStrategy.nodePort fields. These fields could be null even if the spec.endpointPublishingStrategy.type was set to HostNetwork or NodePortService . In OpenShift Container Platform 4.8, the status.endpointPublishingStrategy.hostNetwork.protocol and status.endpointPublishingStrategy.nodePort.protocol subfields were added, and the Ingress Operator set default values for these subfields when the Operator admitted or re-admitted an IngressController that specified the "HostNetwork" or "NodePortService" strategy type. With this bug, however, the Operator ignored updates to these spec fields, and updating spec.endpointPublishingStrategy.hostNetwork.protocol or spec.endpointPublishingStrategy.nodePort.protocol to PROXY to enable proxy protocol on an existing IngressController had no effect. To work around this issue, it was necessary to delete and recreate the IngressController to enable PROXY protocol.
With this update, the Ingress Operator is changed so that it correctly updates the status fields when status.endpointPublishingStrategy.hostNetwork and status.endpointPublishingStrategy.nodePort are null and when the IngressController spec fields specify proxy protocol with the HostNetwork or NodePortService endpoint publishing strategy type. As a result, setting spec.endpointPublishingStrategy.hostNetwork.protocol or spec.endpointPublishingStrategy.nodePort.protocol to PROXY now takes proper effect on upgraded clusters. ( BZ#1997226 ) Samples Before this update, if the Cluster Samples Operator encountered an APIServerConflictError error, it reported sample-operator as having Degraded status until it recovered. Momentary errors of this type were not unusual during upgrades but caused undue concern for administrators monitoring the Operator status. With this update, if the Operator encounters a momentary error, it no longer indicates openshift-samples as having Degraded status and tries again to connect to the API server. Momentary shifts to Degraded status no longer occur. ( BZ#1993840 ) Before this update, various allowed and blocked registry configuration options in the cluster image configuration might prevent the Cluster Samples Operator from creating image streams. As a result, the samples operator might mark itself as degraded, which impacted the general OpenShift Container Platform install and upgrade status. In various circumstances, the management state of the Cluster Samples Operator can make the transition to Removed . With this update, these circumstances now include when the image controller configuration parameters prevent the creation of image streams by using either the default image registry or the image registry specified by the samplesRegistry setting. The Operator status now also indicates that the cluster image configuration is preventing the creation of the sample image streams. ( BZ#2002368 ) Storage Previously, the Local Storage Operator (LSO) took a long time to delete orphaned persistent volumes (PVs) due to the accumulation of a 10-second delay. With this update, the LSO does not use the 10-second delay, PVs are deleted promptly, and local disks are made available for new persistent volume claims sooner. ( BZ#2001605 ) Previously, Manila error handling would degrade the Manila Operator, and the cluster. Errors are now treated as non-fatal so that the Manila Operator is disabled, rather than degrading the cluster. ( BZ#2001620 ) In slower cloud environments, such as when using Cinder, the cluster might become degraded. Now, OpenShift Container Platform accommodates slower environments so that the cluster does not become degraded. ( BZ#2027685 ) Telco Edge If a generated policy has a complianceType of mustonlyhave , Operator Lifecycle Manager (OLM) updates to metadata are then reverted as the policy engine restores the 'expected' state of the CR. Consequently, OLM and the policy engine continuously overwrite the metadata of the CR under conflict. This results in high CPU usage. With this update, OLM and the policy engine no longer conflict, which reduces CPU usage. ( BZ#2009233 ) Previously, user-supplied fields in the PolicyGenTemplate overlay were not copied to generated manifests if the field did not exist in the base source CR. As a result, some user content was lost. The policyGen tool is now updated to support all user supplied fields. 
( BZ#2028881 ) Previously, DNS lookup failures might cause the Cluster Baremetal Operator to continually fail when installed on unsupported platforms. With this update, the Operator remains disabled when installed on an unsupported platform. ( BZ#2025458 ) Web console (Administrator perspective) Web console (Developer perspective) Before this update, resources in the Developer perspective of the web console had invalid links to details about that resource. This update resolves the issue. It creates valid links so that users can access resource details. ( BZ#2000651 ) Before this update, you could only specify a subject in the SinkBinding form by label, not by name. With this update, you can use a drop-down list to select whether to specify a subject by name or label. ( BZ#2002266 ) Before this update, the web terminal icon was available in the web console's banner head only if you installed the Web Terminal Operator in the openShift-operators namespace. With this update, the terminal icon is available regardless of the namespace where you install the Web Terminal Operator. ( 2006329 ) Before this update, the service binding connector did not appear in topology if you used a resource property rather than a kind property to define a ServiceBinding custom resource (CR). This update resolves the issue by reading the CR's resource property to display the connector on the topology. ( BZ#2013545 ) Before this update, the name input fields used a complex and recursive regular expression to validate user inputs. This regular expression made name detection very slow and often caused errors. This update resolves the issue by optimizing the regular expression and avoiding recursive matching. Now, name detection is fast and does not cause errors. ( BZ#2014497 ) Before this update, feature flag gating was missing from some extensions contributed by the knative plugin. Although this issue did not affect what was displayed, these extensions ran unnecessarily, even if the serverless operator was not installed. This update resolves the issue by adding feature flag gating to the extensions where it was missing. Now, the extensions do not run unnecessarily. ( BZ#2016438 ) Before this update, if you repeatedly clicked links to get details for resources such as custom resource definitions or pods and the application encountered multiple code reference errors, it failed and displayed a t is not a function error. This update resolves the issue. When an error occurs, the application resolves a code reference and stores the resolution state so that it can correctly handle additional errors. The application no longer fails when code reference errors occur. ( BZ#2017130 ) Before this update, users with restricted access could not access their config map in a shared namespace to save their user settings on a cluster and load them in another browser or machine. As a result, user preferences such as pinned navigation items were only saved in the local browser storage and not shared between multiple browsers. This update resolves the issue: The web console Operator automatically creates RBAC rules so that each user can save these settings to a config map in a shared namespace and more easily switch between browsers. ( BZ#2018234 ) Before this update, trying to create connections between virtual machines (VMs) in the Topology view failed with an "Error creating connection" message. This issue happened because this action relied on a method that did not support custom resource definition (CRDs). 
This update resolves the issue by adding support for CRDs. Now you can create connections between VMs. ( BZ#2020904 ) Before this update, the tooltip for tasks in the PipelineRun details showed misleading information. It showed the time elapsed since the task ran, not how long they ran. For example, it showed 122 hours for a task that ran for a couple of seconds 5 days ago. With this update, the tooltip shows the duration of the task. ( BZ#2011368 ) 1.7. Technology Preview features Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features: Technology Preview Features Support Scope In the table below, features are marked with the following statuses: TP : Technology Preview GA : General Availability - : Not Available DEP : Deprecated Table 1.2. Technology Preview tracker Feature OCP 4.8 OCP 4.9 OCP 4.10 Precision Time Protocol (PTP) hardware configured as ordinary clock TP GA GA PTP single NIC hardware configured as boundary clock - - TP PTP events with ordinary clock - TP GA oc CLI plugins GA GA GA CSI Volumes in OpenShift Builds - - TP Service Binding TP TP GA Raw Block with Cinder GA GA GA CSI volume expansion TP TP TP CSI AliCloud Disk Driver Operator - - GA CSI Azure Disk Driver Operator TP TP GA CSI Azure File Driver Operator - - TP CSI Azure Stack Hub Driver Operator - GA GA CSI GCP PD Driver Operator GA GA GA CSI IBM VPC Block Driver Operator - - GA CSI OpenStack Cinder Driver Operator GA GA GA CSI AWS EBS Driver Operator TP GA GA CSI AWS EFS Driver Operator - TP GA CSI automatic migration TP TP TP CSI inline ephemeral volumes TP TP TP CSI vSphere Driver Operator TP TP GA Shared Resource CSI Driver - - TP Automatic device discovery and provisioning with Local Storage Operator TP TP TP OpenShift Pipelines GA GA GA OpenShift GitOps GA GA GA OpenShift sandboxed containers TP TP GA Vertical Pod Autoscaler GA GA GA Cron jobs GA GA GA PodDisruptionBudget GA GA GA Adding kernel modules to nodes with kvc TP TP TP Egress router CNI plugin GA GA GA Scheduler profiles TP GA GA Non-preempting priority classes TP TP TP Kubernetes NMState Operator TP TP GA Assisted Installer TP TP GA AWS Security Token Service (STS) GA GA GA Kdump TP TP TP OpenShift Serverless GA GA GA OpenShift on ARM platforms - - GA Serverless functions TP TP TP Data Plane Development Kit (DPDK) support TP GA GA Memory Manager - GA GA CNI VRF plugin TP GA GA Cluster Cloud Controller Manager Operator - GA GA Cloud controller manager for Alibaba Cloud - - TP Cloud controller manager for Amazon Web Services - TP TP Cloud controller manager for Google Cloud Platform - - TP Cloud controller manager for IBM Cloud - - TP Cloud controller manager for Microsoft Azure - TP TP Cloud controller manager for Microsoft Azure Stack Hub - GA GA Cloud controller manager for Red Hat OpenStack Platform (RHOSP) - TP TP Cloud controller manager for VMware vSphere - - TP Driver Toolkit TP TP TP Special Resource Operator (SRO) - TP TP Simple Content Access - TP GA Node Health Check Operator - TP TP Network bound disk encryption (Requires Clevis, Tang) - GA GA MetalLB Operator - GA GA CPU manager GA GA GA Pod-level bonding for secondary networks - - GA IPv6 dual stack - GA GA Selectable Cluster Inventory - - TP Hyperthreading-aware CPU manager policy - - TP Dynamic Plugins - - TP Hybrid Helm Operator - - TP Alert routing for user-defined projects monitoring - - TP Disconnected mirroring with the 
oc-mirror CLI plugin - - TP Mount shared entitlements in BuildConfigs in RHEL - - TP Support for RHOSP DCN - - TP Support for external cloud providers for clusters on RHOSP - - TP OVS hardware offloading for clusters on RHOSP - - TP External DNS Operator - - TP Web Terminal Operator TP TP GA Topology Aware Lifecycle Manager - - TP NUMA-aware scheduling with NUMA Resources Operator - - TP 1.8. Known issues In OpenShift Container Platform 4.1, anonymous users could access discovery endpoints. Later releases revoked this access to reduce the possible attack surface for security exploits because some discovery endpoints are forwarded to aggregated API servers. However, unauthenticated access is preserved in upgraded clusters so that existing use cases are not broken. If you are a cluster administrator for a cluster that has been upgraded from OpenShift Container Platform 4.1 to 4.10, you can either revoke or continue to allow unauthenticated access. It is recommended to revoke unauthenticated access unless there is a specific need for it. If you do continue to allow unauthenticated access, be aware of the increased risks. Warning If you have applications that rely on unauthenticated access, they might receive HTTP 403 errors if you revoke unauthenticated access. Use the following script to revoke unauthenticated access to discovery endpoints: ## Snippet to remove unauthenticated group from all the cluster role bindings USD for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ; do ### Find the index of unauthenticated group in list of subjects index=USD(oc get clusterrolebinding USD{clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name=="system:unauthenticated") | index(true)'); ### Remove the element at index from subjects array oc patch clusterrolebinding USD{clusterrolebinding} --type=json --patch "[{'op': 'remove','path': '/subjects/USDindex'}]"; done This script removes unauthenticated subjects from the following cluster role bindings: cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ( BZ#1821771 ) The oc annotate command does not work for LDAP group names that contain an equal sign ( = ), because the command uses the equal sign as a delimiter between the annotation name and value. As a workaround, use oc patch or oc edit to add the annotation. ( BZ#1917280 ) Currently, containers start with non-empty inheritable Linux process capabilities. To work around this issue, modify the entry point of a container using a utility such as capsh(1) to drop inheritable capabilities before the primary process starts. ( BZ#2076265 ) When upgrading to OpenShift Container Platform 4.10, the Cluster Version Operator blocks the upgrade for approximately five minutes while failing precondition checks. The error text, which says It may not be safe to apply this update , might be misleading. This error occurs when one or multiple precondition checks fail. In some situations, these precondition checks might only fail for a short period of time, for example, during an etcd backup. In these situations, the Cluster Version Operator and corresponding Operators will, by design, automatically resolve the failing precondition checks and the CVO successfully starts the upgrade. Users should check the status and conditions of their Cluster Operators. 
If the It may not be safe to apply this update error is displayed by the Cluster Version Operator, these statuses and conditions will provide more information about the severity of the message. For more information, see BZ#1999777 , BZ#2061444 , BZ#2006611 . The assignment of egress IP addresses to control plane nodes with the egress IP feature is not supported on a cluster provisioned on Amazon Web Services (AWS). ( BZ#2039656 ) Previously, there was a race condition between Red Hat OpenStack Platform (RHOSP) credentials secret creation and kube-controller-manager startup. As a result, Red Hat OpenStack Platform (RHOSP) cloud provider would not be configured with RHOSP credentials and would break support when creating Octavia load balancers for LoadBalancer services. To work around this, you must restart the kube-controller-manager pods by deleting the pods manually from the manifests. When you use the workaround, the kube-controller-manager pods restart and RHOSP credentials are properly configured. ( BZ#2004542 ) The ability to delete operands from the web console using the delete all operands option is currently disabled. It will be re-enabled in a future version of OpenShift Container Platform. For more information, see BZ#2012120 and BZ#2012971 . This release contains a known issue with Jenkins. If you customize the hostname and certificate of the OpenShift OAuth route, Jenkins no longer trusts the OAuth server endpoint. As a result, users cannot log in to the Jenkins console if they rely on the OpenShift OAuth integration to manage identity and access. Workaround: See the Red Hat Knowledge base solution, Deploy Jenkins on OpenShift with Custom OAuth Server URL . ( BZ#1991448 ) This release contains a known issue with Jenkins. The xmlstarlet command line toolkit, which is required to validate or query XML files, is missing from this RHEL-based image. This issue impacts deployments that do not use OpenShift OAuth for authentication. Although OpenShift OAuth is enabled by default, users can disable it. Workaround: Use OpenShift OAuth for authentication. ( BZ#2055653 ) Google Cloud Platform (GCP) UPI installation fails when the instance group name is longer than the maximum size of 64 characters. You are restricted in the naming process after adding the "-instance-group" suffix. Shorten the suffix to "-ig" to reduce the number of characters. ( BZ#1921627 ) For clusters that run on RHOSP and use Kuryr, a bug in the OVN Provider driver for Octavia can cause load balancer listeners to be stuck in a PENDING_UPDATE state while the load balancer that they are attached to remains in an ACTIVE state. As a result, the kuryr-controller pod can crash. To resolve this problem, update RHOSP to version 16.1.9 ( BZ#2019980 ) or version 16.2.4 ( BZ#2045088 ). If an incorrect network is specified in the vSphere install-config.yaml file, then an error message from Terraform is generated after a while. Add a check during the creation of manifests to notify the user if the network is invalid. ( BZ#1956776 ) The Special Resource Operator (SRO) might fail to install on Google Cloud Platform due to a software-defined network policy. As a result, the simple-kmod pod is not created. ( BZ#1996916 ) Currently, idling a stateful set is unsupported when you run oc idle for a service that is mapped to a stateful set. There is no known workaround at this time. 
( BZ#1976894 ) The China (Nanjing) and UAE (Dubai) regions of Alibaba Cloud International Portal accounts do not support installer-provisioned infrastructure (IPI) installations. The China (Guangzhou) and China (Ulanqab) regions do not support a Server Load Balancer (SLB) if using Alibaba Cloud International Portal accounts and, therefore, also do not support IPI installations. ( BZ#2048062 ) The Korea (Seoul) ap-northeast-2 region of Alibaba Cloud does not support installer-provisioned infrastructure (IPI) installations. The Korea (Seoul) region does not support a Server Load Balancer (SLB) and, therefore, also does not support IPI installations. If you want to use OpenShift Container Platform in this region, contact Alibaba Cloud . ( BZ#2062525 ) Currently, the Knative Serving - Revision CPU, Memory, and Network usage and Knative Serving - Revision Queue proxy Metrics dashboards are visible to all the namespaces, including those that do not have Knative services. ( BZ#2056682 ) Currently, in the Developer perspective, the Observe dashboard opens for the most recently viewed workload rather than the one you selected in the Topology view. This issue happens because the session uses the Redux store rather than the query parameters in the URL. ( BZ#2052953 ) Currently, the ProjectHelmChartRepository custom resource (CR) does not show up in the cluster because the API schema for this CR has not been initialized in the cluster yet. ( BZ#2054197 ) Currently, while running high-volume pipeline logs, the auto-scroll functionality does not work and logs are stuck showing older messages. This issue happens because running high-volume pipeline logs generates a large number of calls to the scrollIntoView method. ( BZ#2014161 ) Currently, when you use the Import from Git form to import a private Git repository, the correct import type and a builder image are not identified. This issue happens because the secret to fetch the private repository details is not decoded. ( BZ#2053501 ) During an upgrade of the monitoring stack, Prometheus and Alertmanager might become briefly unavailable. No workaround for this issue is necessary because the components will be available after a short time has passed. No user intervention is required. ( BZ#203059 ) For this release, monitoring stack components have been updated to use TLS authentication for metrics collection. However, sometimes Prometheus tries to keep HTTP connections to metrics targets open using expired TLS credentials even after new ones have been provided. Authentication errors then occur, and some metrics targets become unavailable. When this issue occurs, a TargetDown alert will fire. To work around this issue, restart the pods that are reported as down. ( BZ#2033575 ) For this release, the number of Alertmanager replicas in the monitoring stack was reduced from three to two. However, the persistent volume claim (PVC) for the removed third replica is not automatically removed as part of the upgrade process. After the upgrade, an administrator can remove this PVC manually from the Cluster Monitoring Operator. ( BZ#2040131 ) Previously, the oc adm must-gather tool did not collect performance specific data when more than one --image argument was supplied. Files, including node and performance related files, were missing when the operation finished. The issue affects OpenShift Container Platform versions between 4.7 and 4.10. This issue can be resolved by executing the oc adm must-gather operation twice, once for each image. 
As a result, all expected files can be collected. ( BZ#2018159 ) When using the Technology Preview oc-mirror CLI plugin, there is a known issue that can occur when updating your cluster after mirroring an updated image set to the mirror registry. If a new version of an Operator is published to a channel by deleting the version of that Operator and then replacing it with a new version, an error can occur when applying the generated CatalogSource file from the oc-mirror plugin, because the catalog is seen as invalid. As a workaround, delete the catalog image from the mirror registry, generate and publish a new differential image set, and then apply the CatalogSource file to the cluster. You must follow this workaround each time you publish a new differential image set, until this issue is resolved. ( BZ#2060837 ) The processing of the StoragePVC custom resource during the GitOps ZTP flow does not exclude the volume.beta.kubernetes.io/storage-class annotation when a user does not include a value for it. This annotation causes the spec.storageClassName field to be ignored. To avoid this, set the desired StorageClass name in the volume.beta.kubernetes.io/storage-class annotation within your PolicyGenTemplate when using a StoragePVC custom resource. ( BZ#2060554 ) Removing a Bidirectional Forwarding Detection (BFD) custom profile enabled on a border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. ( BZ#2050824 ) For clusters that run on RHOSP and use Mellanox NICs as part of a single-root I/O virtualization configuration (SR-IOV), you may not be able to create a pod after you start one, restart the SR-IOV device plugin, and then stop the pod. No workaround is available for this issue. OpenShift Container Platform supports deploying an installer-provisioned cluster without a DHCP server. However, without a DHCP server, the bootstrap VM does not receive an external IP address for the baremetal network. To assign an IP address to the bootstrap VM, see Assigning a bootstrap VM an IP address on the baremetal network without a DHCP server . ( BZ#2048600 ) OpenShift Container Platform supports deploying an installer-provisioned cluster with static IP addresses on the baremetal network for environments without a DHCP server. If a DHCP server is present, nodes might retrieve an IP address from the DHCP server on reboot. To prevent DHCP from assigning an IP address to a node on reboot, see Preventing DHCP from assigning an IP address on node reboot . ( BZ#2036677 ) The RHCOS kernel experiences a soft lockup and eventually panics due to a bug in the Netfilter module. A fix is planned to resolve this issue in a future z-stream release of OpenShift Container Platform. ( BZ#2061445 ) Due to the inclusion of old images in some image indexes, running oc adm catalog mirror and oc image mirror might result in the following error: error: unable to retrieve source image . As a temporary workaround, you can use the --skip-missing option to bypass the error and continue downloading the image index. For more information, see Service Mesh Operator mirroring failed . It is not possible to create a macvlan on the physical function (PF) when a virtual function (VF) already exists. This issue affects the Intel E810 NIC. 
( BZ#2120585 ) If a cluster that was deployed through ZTP has policies that do not become compliant, and no ClusterGroupUpdates object is present, you must restart the TALM pods. Restarting TALM creates the proper ClusterGroupUpdates object, which enforces the policy compliance. ( OCPBUGS-4065 ) Currently, when using a persistent volume (PV) that contains a very large number of files, the pod might not start or can take an excessive amount of time to start. For more information, see this knowledge base article . ( BZ1987112 ) 1.9. Asynchronous errata updates Security, bug fix, and enhancement updates for OpenShift Container Platform 4.10 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.10 errata are available on the Red Hat Customer Portal . See the OpenShift Container Platform Life Cycle for more information about asynchronous errata. Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released. Note Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate. This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.10. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.10.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow. Important For any OpenShift Container Platform release, always review the instructions on updating your cluster properly. 1.9.1. RHSA-2022:0056 - OpenShift Container Platform 4.10.3 image release, bug fix, and security update advisory Issued: 2022-03-10 OpenShift Container Platform release 4.10.3, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the RHSA-2022:0056 advisory. A secondary set of bug fixes can be found in the RHEA-2022:0748 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:0055 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.3 --pullspecs 1.9.1.1. Bug fixes Previously, OpenShift Container Platform, with OVN-Kubernetes, managed ingress access to services via ExternalIP. When upgrading from 4.10.2 to 4.10.3, access ExternalIP stops work with issues like "No Route to Host". With this update, administrators will now have to direct traffic from externalIPs to the cluster. For guidance, see ( KCS* ) and ( Kubernetes External IPs ) ( BZ#2076662 ) 1.9.2. RHBA-2022:0811 - OpenShift Container Platform 4.10.4 bug fix and security update Issued: 2022-03-15 OpenShift Container Platform release 4.10.4, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:0811 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:0810 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.4 --pullspecs 1.9.2.1. 
Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.3. RHBA-2022:0928 - OpenShift Container Platform 4.10.5 bug fix and security update Issued: 2022-03-21 OpenShift Container Platform release 4.10.5, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:0928 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:0927 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.5 --pullspecs 1.9.3.1. Known issues There is an issue adding a cluster through zero touch provisioning (ZTP) with the name ztp* . Adding ztp as the name of a cluster causes a situation where ArgoCD deletes policies that ACM copies into the cluster namespace. Naming the cluster with ztp leads to a reconciliation loop, and the policies will not be compliant. As a workaround, do not name clusters with ztp at the beginning of the name. Renaming the cluster avoids the collision, stops the reconciliation loop, and allows the policies to become compliant. ( BZ#2049154 ) 1.9.3.2. Bug fixes Previously, the Observe dashboard from the Topology view in the Developer console opened to the last viewed workload rather than the selected one. With this update, the Observe dashboard in the Developer console always opens to the selected workload from the Topology view. ( BZ#2059805 ) 1.9.3.3. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.4. RHBA-2022:1026 - OpenShift Container Platform 4.10.6 bug fix and security update Issued: 2022-03-28 OpenShift Container Platform release 4.10.6, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:1026 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:1025 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.6 --pullspecs 1.9.4.1. Features 1.9.4.2. Updates from Kubernetes 1.23.5 This update contains changes from Kubernetes 1.23.3 up to 1.23.5. More information can be found in the following changelogs: 1.23.4 and 1.23.5 1.9.4.3. Bug fixes Previously, the query for subnets in Cisco ACI's neutron implementation, which is available in Red Hat OpenStack Platform (RHOSP) 16, returned unexpected results for a given network. Consequently, the RHOSP cluster-api-provider could potentially try to provision instances with duplicated ports on the same subnet, which caused provisioning to fail. With this update, an additional filter is added in the RHOSP cluster-api-provider to ensure there is only one port per subnet. As a result, it is now possible to deploy OpenShift Container Platform on RHOSP 16 with Cisco ACI. ( BZ#2050064 ) Previously, oc adm must-gather fell back to the oc adm inspect command when the specified image could not run. Consequently, it was difficult to understand information from the logs when the fall back happened. With this update, the logs are improved to make it explicit when a fall back inspection is performed. As a result, the output of oc adm must-gather is easier to understand. ( BZ#2049427 ) Previously, the oc debug node command did not have a timeout specified on idle.
Consequently, the users were never logged out of the cluster. With this update, a TMOUT environment variable for debug pod has been added to counter inactivity timeout. As a result, the session will be automatically terminated after TMOUT inactivity. ( BZ#2060888 ) Previously, the Ingress Operator performed health checks against the Ingress canary route. When the health check completed, the Ingress Operator did not close the TCP connection to the LoadBalancer because keepalive packets were enabled on the connection. While performing the health check, a new connection was established to the LoadBalancer instead of using the existing connection. Consequently, this caused connections to accumulate on the LoadBalancer . Over time, this exhausted the number of connections on the LoadBalancer . With this update, Keepalive is disabled when connecting to the Ingress canary route. As a result, a new connection is made and closed each time the canary probe is run. While Keepalive is disabled, there is no longer an accumulation of established connections. ( BZ#2063283 ) Previously, the sink for event sources in the Trigger/Subscription modal in the Topology UI showed all resources, irrespective of whether they were created as a standalone or an underlying resource included with back KSVC, Broker, or KameletBinding. Consequently, users could sink to the underlying addressable resources as they showed up in the sink drop-down menu. With this update, a resource filter has been added to show only standalone resource sink events. ( BZ#2059807 ) 1.9.4.4. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.5. RHSA-2022:1162 - OpenShift Container Platform 4.10.8 bug fix and security update Issued: 2022-04-07 OpenShift Container Platform release 4.10.8, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2022:1162 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:1161 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.8 --pullspecs 1.9.5.1. Removed features Starting with OpenShift Container Platform 4.10.8, support for Google Cloud Platform Workload Identity has been removed from OpenShift Container Platform 4.10 for the image registry. This change is due to the discovery of an adverse impact to the image registry . With OpenShift Container Platform 4.10.21, support for using GCP Workload Identity with the image registry is restored. For more information about the status of this feature between OpenShift Container Platform 4.10.8 and 4.10.20, see the related Knowledgebase article . 1.9.5.2. Known issues Currently, the web console does not display virtual machine templates that are deployed to a custom namespace. Only templates deployed to the default namespace are displayed in the web console. As a workaround, avoid deploying templates to a custom namespace. ( BZ#2054650 ) 1.9.5.3. Bug fixes Previously, the Infrastructure Operator could not provision X11- and X12-based systems due to validation errors created by the bare metal controller (BMC) when special characters such as question marks or equal signs were used in the filename parameters of URLs. With this update, the filename parameter is removed from the URL if the virtual media image is backed by a local file. 
( BZ#2011626 ) Previously, when cloning a virtual machine from a template, Operator-made changes reverted after dismissing the dialog box if the boot disk was edited and the storage class was changed. With this update, changes made to storage class remain set after closing the dialogue box. ( BZ#2049762 ) Previously, the startupProbe field was added to a container's definition. As a result, startupProbe causes problems when creating a debug pod. With this update, startupProbe is removed by default from the debug pod by the Expose --keep-startup flag parameter, which is now set to false by default. ( BZ#2068474 ) Previously, the Local Storage Operator (LSO) added an OwnerReference object to the persistent volumes (PV) it created, which sometimes caused an issue where a delete request for a PV could leave the PV in the terminating state while still attached to the pod. With this update, the LSO no longer creates an OwnerReference object and cluster administrators are now able to manually delete any unused PVs after a node is removed from the cluster. ( BZ#2065714 ) Before this update, import strategy detection did not occur when a secret was provided for a private Git repository. Consequently, the secrets value was not decoded before it was used. With this update, the secret value is now decoded before use. ( BZ#2057507 ) 1.9.5.4. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.6. RHBA-2022:1241 - OpenShift Container Platform 4.10.9 bug fix update Issued: 2022-04-12 OpenShift Container Platform release 4.10.9 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:1241 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:1240 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.9 --pullspecs 1.9.6.1. Known issues When updating to OpenShift Container Platform 4.10.9, the etcd pod fails to start and the etcd Operator falls into a degraded state. A future version of OpenShift Container Platform will resolve this issue. For more information, see etcd pod is failing to start after updating OpenShift Container Platform 4.9.28 or 4.10.9 and Potential etcd data inconsistency issue in OCP 4.9 and 4.10 . 1.9.6.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.7. RHSA-2022:1357 - OpenShift Container Platform 4.10.10 bug fix and security update Issued: 2022-04-20 OpenShift Container Platform release 4.10.10 is now available. The bug fixes that are included in the update are listed in the RHSA-2022:1357 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:1355 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.10 --pullspecs 1.9.7.1. Bug fixes Previously, the cluster storage Operator credentials request for Amazon Web Services (AWS) did not include KMS statements. Consequently, persistent volumes (PVs) failed to deploy due to the inability to provide a key. With this update, the default credentials request for AWS now allows the mounting of encrypted volumes using customer-managed keys from KMS. 
Administrators who create credentials requests in manual mode with Cloud Credential Operator (CCO) must apply those changes manually. Other administrators should not be impacted by this change. ( BZ#2072191 ) 1.9.7.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.8. RHBA-2022:1431 - OpenShift Container Platform 4.10.11 bug fix update Issued: 2022-04-25 OpenShift Container Platform release 4.10.11 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:1431 advisory. There are no RPM packages for this release. You can view the container images in this release by running the following command: USD oc adm release info 4.10.11 --pullspecs 1.9.8.1. Bug fixes Previously, when cloning a virtual machine from a template, Operator-made changes reverted after dismissing the dialog box if the boot disk was edited and the storage class was changed. With this update, changes made to the storage class remain set after closing the dialog box. ( BZ#2049762 ) 1.9.8.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.9. RHBA-2022:1601 - OpenShift Container Platform 4.10.12 bug fix and security update Issued: 2022-05-02 OpenShift Container Platform release 4.10.12, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:1601 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:1600 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.12 --pullspecs 1.9.9.1. Bug fixes Previously, the Infrastructure Operator could not provision X11- and X12-based systems. This was due to validation errors created by the bare metal controller (BMC) when special characters such as question marks or equal signs were used in the filename parameters of URLs. With this update, the filename parameter is removed from the URL if the virtual media image is backed by a local file. ( BZ#2011626 ) 1.9.9.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.10. RHBA-2022:1690 - OpenShift Container Platform 4.10.13 bug fix update Issued: 2022-05-11 OpenShift Container Platform release 4.10.13 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:1690 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:1689 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.13 --pullspecs 1.9.10.1. Bug fixes Previously, when creating an Ingress Object in OpenShift Container Platform 4.8, the API restrictions prevented users from defining routes with hostnames and installing numeric clusters. This fix removes the API's number restriction, allowing users to create clusters with numeric names and define routes using hostnames. ( BZ#2072739 ) Previously, pods related to jobs would get stuck in the Terminating state in OpenShift Container Platform 4.10 due to the JobTrackingWithFinalizers feature. This fix disables the JobTrackingWithFinalizers feature, resulting in all pods running as intended. ( BZ#2075831 ) 1.9.10.2.
Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.11. RHBA-2022:2178 - OpenShift Container Platform 4.10.14 bug fix update Issued: 2022-05-18 OpenShift Container Platform release 4.10.14 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:2178 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:2177 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.14 --pullspecs 1.9.11.1. Features 1.9.11.1.1. Update the control plane independently of other worker nodes With this update, you can now perform a partial cluster update within the Update Cluster modal. You are able to update the worker or custom pool nodes to accommodate the time it takes for maintenance. You can also pause and resume within the progress bar of each pool. If one or more worker or custom pools are paused, an alert is displayed at the top of the Cluster Settings page. ( BZ#2076777 ) For more information, see Preparing to perform an EUS-to-EUS update and Updating a cluster using the web console . 1.9.11.1.2. General availability of the Web Terminal Operator With this update, the Web Terminal Operator is now generally available. 1.9.11.1.3. Support for the AWS premium_LRS and standardSSD_LRS disk types With this update, you can deploy control plane and compute nodes with the premium_LRS , standardSSD_LRS , or standard_LRS disk type. By default, the installation program deploys control plane and compute nodes with the premium_LRS disk type. In earlier 4.10 releases, only the standard_LRS disk type was supported. ( BZ#2079589 ) 1.9.11.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.12. RHBA-2022:2258 - OpenShift Container Platform 4.10.15 bug fix update Issued: 2022-05-23 OpenShift Container Platform release 4.10.15 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:2258 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:2257 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.15 --pullspecs 1.9.12.1. Bug fixes Previously, the Image Registry Operator blocked installer-provisioned infrastructure (IPI) installations on IBM Cloud. With this update, clusters that mint credentials manually will now require the administrator role. ( BZ#2083559 ) 1.9.12.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.13. RHBA-2022:4754 - OpenShift Container Platform 4.10.16 bug fix update Issued: 2022-05-31 OpenShift Container Platform release 4.10.16 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:4754 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:4753 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.16 --pullspecs 1.9.13.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.14. 
RHBA-2022:4882 - OpenShift Container Platform 4.10.17 bug fix update Issued: 2022-06-07 OpenShift Container Platform release 4.10.17 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:4882 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:4881 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.17 --pullspecs 1.9.14.1. Bug fixes Previously, the federation endpoint for Prometheus that stored user-defined metrics was not exposed. Therefore, you could not access it to scrape these metrics from a network location outside the cluster. With this update, you can now use the federation endpoint to scrape user-defined metrics from a network location outside the cluster. ( BZ#2090602 ) Previously, for user-defined projects, you could not change the default data retention time period value of 24 hours for the Thanos Ruler monitoring component. With this update, you can now change how long Thanos Ruler metrics data is retained for user-defined projects. ( BZ#2090422 ) 1.9.14.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.15. RHBA-2022:4944 - OpenShift Container Platform 4.10.18 bug fix and security update Issued: 2022-06-13 OpenShift Container Platform release 4.10.18, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:4944 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:4943 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.18 --pullspecs 1.9.15.1. Bug fixes Previously, Alibaba Cloud was only supported for disk volumes larger than 20 GiB. Consequently, attempts to dynamically provision a new volume for a persistent volume claim (PVC) smaller than 20 GiB failed. With this update, OpenShift Container Platform automatically increases the requested volume size for such a PVC and provisions volumes of at least 20 GiB. ( BZ#2076671 ) Previously, the Ingress Operator had unnecessary logic to remove a finalizer on LoadBalancer-type services in earlier versions of OpenShift Container Platform. With this update, the Ingress Operator no longer includes this logic. ( BZ#2082161 ) 1.9.15.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.16. RHBA-2022:5172 - OpenShift Container Platform 4.10.20 bug fix update Issued: 2022-06-28 OpenShift Container Platform release 4.10.20 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:5172 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:5171 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.20 --pullspecs 1.9.16.1. Bug fixes Previously, Bond Container Network Interface (CNI) version 1.0 was not compatible with the Multus Container Network Interface (CNI) plugin. Consequently, the Bond-CNI IP address management (IPAM) improperly populated the network-status annotation. With this update, IPAM and Bond-CNI now support Bond-CNI 1.0.
( BZ#2084289 ) Before this update, the Start Pipeline dialog box displayed gp2 as the storage class, regardless of the actual storage class used. With this update, the Start Pipeline dialog box displays the actual storage class name. 1.9.16.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.17. RHBA-2022:5428 - OpenShift Container Platform 4.10.21 bug fix update Issued: 2022-07-06 OpenShift Container Platform release 4.10.21 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:5428 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:5427 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.21 --pullspecs 1.9.17.1. New features The feature change in OpenShift Container Platform 4.10.8 to remove support for Google Cloud Platform (GCP) Workload Identity for the image registry has been resolved in OpenShift Container Platform 4.10.21. 1.9.17.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.18. RHBA-2022:5513 - OpenShift Container Platform 4.10.22 bug fix update Issued: 2022-07-11 OpenShift Container Platform release 4.10.22 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:5513 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:5512 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.22 --pullspecs 1.9.18.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.19. RHBA-2022:5568 - OpenShift Container Platform 4.10.23 bug fix update Issued: 2022-07-20 OpenShift Container Platform release 4.10.23 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:5568 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:5567 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.23 --pullspecs 1.9.19.1. Features 1.9.19.2. Updating managed clusters with Topology Aware Lifecycle Manager (Technology Preview) You can now use the upstream Topology Aware Lifecycle Manager to perform updates on multiple single-node Openshift clusters by using Red Hat Advanced Cluster Management (RHACM) policies. For more information, see About the Topology Aware Lifecycle Manager configuration . 1.9.19.3. Low-latency Redfish hardware event delivery (Technology Preview) OpenShift Container Platform now provides a hardware event proxy that enables applications running on bare-metal clusters to respond quickly to Redfish hardware events, such as hardware changes and failures. The hardware event proxy supports a publish-subscribe service that allows relevant applications to consume hardware events detected by Redfish. The proxy must be running on hardware that supports Redfish v1.8 and later. An Operator manages the lifecycle of the hw-event-proxy container. 
You can use a REST API to develop applications to consume and respond to events such as breaches of temperature thresholds, fan failure, disk loss, power outages, and memory failure. Reliable end-to-end messaging without persistent stores is based on the Advanced Message Queuing Protocol (AMQP). The latency of the messaging service is in the 10 millisecond range. Note This feature is supported for single node OpenShift clusters only. 1.9.19.4. Zero touch provisioning is generally available Use zero touch provisioning (ZTP) to provision distributed units at new edge sites in a disconnected environment. This feature was previously introduced as a Technology Preview feature in OpenShift Container Platform 4.9 and is now generally available and enabled by default in OpenShift Container Platform 4.11. For more information, see Preparing the hub cluster for ZTP . 1.9.19.5. Indication of done for ZTP A new tool is available that simplifies the process of checking for a completed zero touch provisioning (ZTP) installation using the Red Hat Advanced Cluster Management (RHACM) static validator inform policy. It provides an indication of done for ZTP installations by capturing the criteria for a completed installation and validating that it moves to a compliant state only when ZTP provisioning of the spoke cluster is complete. This policy can be used for deployments of single node clusters, three-node clusters, and standard clusters. To learn more about the validator inform policy, see Indication of done for ZTP installations . 1.9.19.6. Enhancements to ZTP For OpenShift Container Platform 4.10, there are a number of updates that make it easier to configure the hub cluster and generate source CRs. New PTP and UEFI secure boot features for spoke clusters are also available. The following is a summary of these features: You can add or modify existing source CRs in the ztp-site-generate container, rebuild it, and make it available to the hub cluster, typically from the disconnected registry associated with the hub cluster. You can configure PTP fast events for vRAN clusters that are deployed using the GitOps zero touch provisioning (ZTP) pipeline. You can configure UEFI secure boot for vRAN clusters that are deployed using the GitOps ZTP pipeline. You can use Topology Aware Lifecycle Manager to orchestrate the application of the configuration CRs to the hub cluster. 1.9.19.7. ZTP support for multicluster deployment Zero touch provisioning (ZTP) provides support for multicluster deployment, including single node clusters, three-node clusters, and standard OpenShift clusters. This includes the installation of OpenShift and deployment of the distributed units (DUs) at scale. This gives you the ability to deploy nodes with master, worker, and master and worker roles. ZTP multinode support is implemented through the use of SiteConfig and PolicyGenTemplate custom resources (CRs). The overall flow is identical to the ZTP support for single node clusters, with some differentiation in configuration depending on the type of cluster: In the SiteConfig file: Single node clusters must have exactly one entry in the nodes section. Three-node clusters must have exactly three entries defined in the nodes section. Standard OpenShift clusters must have exactly three entries in the nodes section with role: master and one or more additional entries with role: worker. The PolicyGenTemplate file tells the Policy Generator where to categorize the generated policies. 
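To illustrate the rule above, a SiteConfig for a standard cluster might declare its nodes section along the following lines. This is a sketch under assumptions: the nodes and role fields follow the SiteConfig conventions described here, but the apiVersion, cluster name, and host names are placeholders rather than values taken from a shipped example, and the exact nesting can differ between ztp-site-generate versions.

# Hypothetical SiteConfig fragment for a standard cluster.
apiVersion: ran.openshift.io/v1
kind: SiteConfig
metadata:
  name: example-standard-site
spec:
  clusters:
  - clusterName: example-standard-cluster
    nodes:
    # Exactly three control plane entries for a standard cluster.
    - hostName: master-0.example.com
      role: master
    - hostName: master-1.example.com
      role: master
    - hostName: master-2.example.com
      role: master
    # One or more additional worker entries.
    - hostName: worker-0.example.com
      role: worker

A single-node cluster would keep exactly one entry with role: master, and a three-node compact cluster exactly three, with no worker entries.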
Example PolicyGenTemplate files provide you with example files to simplify your deployments: The example common PolicyGenTemplate file is common across all types of clusters. Example group PolicyGenTemplate files for single node, three-node, and standard clusters are provided. Site-specific PolicyGenTemplate files specific to each site are provided. To learn more about multicluster deployment, see Deploying a managed cluster with SiteConfig and ZTP . 1.9.19.8. Support for unsecured OS images with Assisted Installer This release includes the following warning when enabling TLS for the HTTPD server using the Assisted Installer in IPI or ZTP disconnected environments. When enabling TLS for the HTTPD server in these environments, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and spoke clusters and the HTTPD server. Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. 1.9.19.9. Known issues The Kubelet service monitor scrape interval is currently set to a hard-coded value. This means that there is less available CPU resources for workloads. ( BZ#2035046 ) Deploying a single node OpenShift cluster for vRAN Distributed Units can take up to 4 hours. ( BZ#2035036 ) Currently, if an RHACM policy was enforced to a target cluster, you can create a ClusterGroupUpgrade CR including that enforced policy again as a managedPolicy to the same target cluster. This should not be possible. ( BZ#2044304 ) If a ClusterGroupUpgrade CR has a blockingCR specified, and that blockingCR fails silently, for example if there is a typo in the list of clusters, the ClusterGroupUpgrade CR is applied even though the blockingCR is not applied in the cluster. ( BZ#2042601 ) If a ClusterGroupUpgrade CR validation fails for any reason, for example, because of an invalid spoke name, no status is available for the ClusterGroupUpgrade CR, as if the CR is disabled. ( BZ#2040828 ) Currently, for ClusterGroupUpgrade CR state changes, there is only a single condition type available - Ready . The Ready condition can have a status of True or False only. This does not reflect the range of states the ClusterGroupUpgrade CR can be in. ( BZ#2042596 ) When deploying a multi-node cluster on bare-metal nodes, the machine config pool (MCP) adds an additional CRI-O drop-in that circumvents the container mount namespace drop-in . This results in CRI-O being in the base namespace while the kubelet is in the hidden namespace. All containers fail to get any kubelet-mounted filesystems, such as secrets and tokens. ( BZ#2028590 ) Installing RHACM 2.5.0 in the customized namespace causes the infrastructure-operator pod to fail due to insufficient privileges. ( BZ#2046554 ) OpenShift Container Platform limits object names to 63 characters. If a policy name defined in a PolicyGenTemplate CR approaches this limit, the Topology Aware Lifecycle Manager cannot create child policies. When this occurs, the parent policy remains in a NonCompliant state. ( BZ#2057209 ) In the default ZTP Argo CD configuration, cluster names cannot begin with ztp . Using names starting with ztp for clusters deployed with Zero Touch Provisioning (ZTP) results in provisioning not completing. 
As a workaround, ensure that either cluster names do not start with ztp , or adjust the Argo CD policy application namespace to a pattern that excludes the names of your clusters but still matches your policy namespace. For example, if your cluster names start with ztp , change the pattern in the Argo CD policy app configuration to something different, like ztp- . ( BZ#2049154 ) During a spoke cluster upgrade, one or more reconcile errors is recorded in the container log. The number of errors corresponds to the number of child policies. The error causes no noticeable impact to the cluster. The following is an example of the reconcile error: 2022-01-21T00:14:44.697Z INFO controllers.ClusterGroupUpgrade Upgrade is completed 2022-01-21T00:14:44.892Z ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {"reconciler group": "ran.openshift.io", "reconciler kind": "ClusterGroupUpgrade", "name": "timeout", "namespace": "default", "error": "Operation cannot be fulfilled on clustergroupupgrades.ran.openshift.io \"timeout\": the object has been modified; please apply your changes to the latest version and try again"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2 /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:214 ( BZ#2043301 ) During a spoke cluster upgrade from 4.9 to 4.10, with heavy workload running, the kube-apiserver pod can take longer than the expected time to start. As a result, the upgrade does not complete and the kube-apiserver rolls back to the version. ( BZ#2064024 ) If you deploy the AMQ Interconnect Operator, pods run on IPv4 nodes only. The AMQ Interconnect Operator is not supported on IPv6 nodes. ( ENTMQIC-3297 ) 1.9.19.10. Bug fixes Previously, the Ingress Operator detected changes made through the Ingress Controller and set the Upgradeable status condition of the Ingress Operator to False . The False status condition blocked upgrades. With this update, the Ingress Operator no longer blocks upgrades. ( BZ#2097735 ) 1.9.19.11. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.20. RHSA-2022:5664 - OpenShift Container Platform 4.10.24 bug fix and security update Issued: 2022-07-25 OpenShift Container Platform release 4.10.24 is now available. The bug fixes that are included in the update are listed in the RHSA-2022:5664 advisory. There are no RPM packages for this release. You can view the container images in this release by running the following command: USD oc adm release info 4.10.24 --pullspecs 1.9.20.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.21. RHSA-2022:5730 - OpenShift Container Platform 4.10.25 bug fix and security update Issued: 2022-08-01 OpenShift Container Platform release 4.10.25, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2022:5730 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:5729 advisory. 
You can view the container images in this release by running the following command: USD oc adm release info 4.10.25 --pullspecs 1.9.21.1. Bug fixes Previously, where clusters had a Security Context Constraint (SCC), the default IngressController Deployment could cause pods to fail to start. This was due to the default container name router being created without requesting sufficient permissions in the securityContext of the container. With this update, router pods will be admitted to the correct SCC and created without error. ( BZ#2079034 ) Previously, routers in the terminating state delay the oc cp command, which delayed the must-gather . With this update, a timeout for each oc cp command has been set eliminating the delay of the must-gathers . ( BZ#2106842 ) 1.9.21.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.22. RHSA-2022:5875 - OpenShift Container Platform 4.10.26 bug fix and security update Issued: 2022-08-08 OpenShift Container Platform release 4.10.26, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2022:5875 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:5874 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.26 --pullspecs 1.9.22.1. Bug fixes Previously, new regions were not recognized by the AWS SDK and the machine controller could not use them. This problem occurred because the AWS SDK only recognized regions from the time AWS SDK was vendored. With this update, administrators can use DescribeRegions to check the specified region for a machine and create new machines in regions unknown to SDK. ( BZ#2109124 ) Note This is a new AWS permission and you must update credentials for manual mode clusters with the new permission. 1.9.22.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.23. RHBA-2022:6095 - OpenShift Container Platform 4.10.28 bug fix and security update Issued: 2022-08-23 OpenShift Container Platform release 4.10.28, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:6095 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:6094 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.28 --pullspecs 1.9.23.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.24. RHSA-2022:6133 - OpenShift Container Platform 4.10.30 bug fix and security update Issued: 2022-08-31 OpenShift Container Platform release 4.10.30, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2022:6133 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:6132 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.30 --pullspecs 1.9.24.1. Features 1.9.24.1.1. 
General availability of pod-level bonding for secondary networks With this update, Using pod-level bonding is now generally available. 1.9.24.2. Bug fixes Previously, the functionality of Bond-CNI was limited to only active-backup mode. With this update, the bonding modes supported are: balance-rr -0 active-backup -1 balance-xor -2 ( BZ#2102047 ) 1.9.24.3. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.25. RHSA-2022:6258 - OpenShift Container Platform 4.10.31 bug fix and security update Issued: 2022-09-07 OpenShift Container Platform release 4.10.31, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2022:6258 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:6257 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.31 --pullspecs 1.9.25.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.26. RHBA-2022:6372 - OpenShift Container Platform 4.10.32 bug fix Issued: 2022-09-13 OpenShift Container Platform release 4.10.32 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:6372 advisory. There are no RPM packages for this release. You can view the container images in this release by running the following command: USD oc adm release info 4.10.32 --pullspecs 1.9.26.1. Bug fixes Previously, dual-stack clusters using the PROXY protocol only enabled it on IPv6 and not IPv4. With this update, OpenShift Container Platform now enables the PROXY protocol for both IPv6 and IPv4 on dual-stack clusters. ( BZ#2096362 ) 1.9.26.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.27. RHBA-2022:6532 - OpenShift Container Platform 4.10.33 bug fix and security update Issued: 2022-09-20 OpenShift Container Platform release 4.10.33, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:6532 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:6531 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.33 --pullspecs 1.9.27.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.28. RHBA-2022:6663 - OpenShift Container Platform 4.10.34 bug fix and security update Issued: 2022-09-27 OpenShift Container Platform release 4.10.34, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:6663 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:6661 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.34 --pullspecs 1.9.28.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.29. 
RHBA-2022:6728 - OpenShift Container Platform 4.10.35 bug fix update Issued: 2022-10-04 OpenShift Container Platform release 4.10.35, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:6728 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:6727 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.35 --pullspecs 1.9.29.1. Bug fixes Previously, the logic in the Ingress Operator did not validate whether a kubernetes service object in the openshift-ingress namespace was created by the Ingress Controller it was attempting to reconcile with. Consequently, the Operator would modify or remove kubernetes services with the same name and namespace regardless of ownership. With this update, the Ingress Operator now checks for the ownership of existing kubernetes services it attempts to create or remove. If the ownership does not match, the Ingress Operator provides an error. ( OCPBUGS-1623 ) 1.9.29.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.30. RHSA-2022:6805 - OpenShift Container Platform 4.10.36 bug fix update Issued: 2022-10-12 OpenShift Container Platform release 4.10.36, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2022:6805 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:6803 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.36 --pullspecs 1.9.30.1. Bug fixes Previously, the router process was ignoring the SIGTERM shutdown signal during initialization. This resulted in container shutdown times of one hour. With this update, the router now responds to SIGTERM signals during initialization. ( BZ#2098230 ) 1.9.30.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.31. RHBA-2022:6901 - OpenShift Container Platform 4.10.37 bug fix update Issued: 2022-10-18 OpenShift Container Platform release 4.10.37, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:6901 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:6899 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.37 --pullspecs 1.9.31.1. Bug fixes Previously, adding an IP address to one or more control plane nodes caused the etcd cluster Operator to fail to regenerate etcd serving certificates for the node. With this update, the etcd cluster Operator regenerates serving certificates for changes to an existing node. ( OCPBUGS-1758 ) 1.9.31.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.32. RHBA-2022:7035 - OpenShift Container Platform 4.10.38 bug fix update Issued: 2022-10-25 OpenShift Container Platform release 4.10.38, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:7035 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:7033 advisory. 
You can view the container images in this release by running the following command: USD oc adm release info 4.10.38 --pullspecs 1.9.32.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.33. RHSA-2022:7211 - OpenShift Container Platform 4.10.39 bug fix and security update Issued: 2022-11-01 OpenShift Container Platform release 4.10.39, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2022:7211 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:7210 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.39 --pullspecs 1.9.33.1. Notable technical changes With this release, when the service account issuer is changed to a custom one, existing bound service tokens are no longer invalidated immediately. Instead, when the service account issuer is changed, the service account issuer continues to be trusted for 24 hours. For more information, see Configuring bound service account tokens using volume projection . 1.9.33.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.34. RHBA-2022:7298 - OpenShift Container Platform 4.10.40 bug fix update Issued: 2022-11-09 OpenShift Container Platform release 4.10.40, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:7298 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:7297 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.40 --pullspecs 1.9.34.1. Bug fixes Before this update, the noAllowedAddressPairs setting applied to all subnets on the same network. With this update, the noAllowedAddressPairs setting now only applies to its matching subnet. ( OCPBUGS-1951 ) 1.9.34.2. Notable technical changes The Cloud Credential Operator utility ( ccoctl ) now creates secrets that use regional endpoints for the AWS Security Token Service (AWS STS) . This approach aligns with AWS recommended best practices. 1.9.34.3. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.35. RHBA-2022:7866 - OpenShift Container Platform 4.10.41 bug fix and security update Issued: 2022-11-18 OpenShift Container Platform release 4.10.41, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:7866 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:7865 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.41 --pullspecs 1.9.35.1. Notable technical changes With this release, when you delete GCP resources with the Cloud Credential Operator utility , you must specify the directory containing the files for the component CredentialsRequest objects. 1.9.35.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.36. 
RHBA-2022:8496 - OpenShift Container Platform 4.10.42 bug fix update Issued: 2022-11-22 OpenShift Container Platform release 4.10.42 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:8496 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:8495 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.42 --pullspecs 1.9.36.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.37. RHBA-2022:8623 - OpenShift Container Platform 4.10.43 bug fix update Issued: 2022-11-29 OpenShift Container Platform release 4.10.43 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:8623 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:8622 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.43 --pullspecs 1.9.37.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.38. RHBA-2022:8882 - OpenShift Container Platform 4.10.45 bug fix update Issued: 2022-12-14 OpenShift Container Platform release 4.10.45 is now available. The bug fixes that are included in the update are listed in the RHBA-2022:8882 advisory. The RPM packages that are included in the update are provided by the RHBA-2022:8881 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.45 --pullspecs 1.9.38.1. Bug fixes Previously, some object storage instances responded with 204 No Content when no content displayed. The Red Hat OpenStack Platform (RHOSP) SDK used in OpenShift Container Platform did not handle 204s correctly. With this update, the installation program works around the issue when there are zero items to list. ( OCPBUGS-4160 ) 1.9.38.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.39. RHBA-2022:9099 - OpenShift Container Platform 4.10.46 bug fix and security update Issued: 2023-01-04 OpenShift Container Platform release 4.10.46, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHBA-2022:9099 advisory. The RPM packages that are included in the update are provided by the RHSA-2022:9098 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.46 --pullspecs 1.9.39.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.40. RHSA-2023:0032 - OpenShift Container Platform 4.10.47 bug fix and security update Issued: 2023-01-04 OpenShift Container Platform release 4.10.47, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2023:0032 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:0031 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.47 --pullspecs 1.9.40.1. 
Enhancements IPv6 unsolicited neighbor advertisements and IPv4 gratuitous address resolution protocol now default on the SR-IOV CNI plugin. Pods created with the Single Root I/O Virtualization (SR-IOV) CNI plugin, where the IP address management CNI plugin has assigned IPs, now send IPv6 unsolicited neighbor advertisements and/or IPv4 gratuitous address resolution protocol by default onto the network. This enhancement notifies hosts of the new pod's MAC address for a particular IP to refresh ARP/NDP caches with the correct information. For more information, see Supported devices . 1.9.40.2. Bug fixes Previously, in CoreDNS v1.7.1, all upstream cache refreshes used DNSSEC. Bufsize was hardcoded to 2048 bytes for the upstream query, causing some DNS upstream queries to break when there were UDP Payload limits within the networking infrastructure. With this update, OpenShift Container Platform always uses bufsize 512 for upstream cache requests as that is the bufsize specified in the Corefile. Customers might be impacted if they rely on the incorrect functionality of bufsize 2048 for upstream DNS requests. ( OCPBUGS-2902 ) Previously, OpenShift Container Platform did not handle object storage instances that responded with 204 No Content . This caused problems for the Red Hat OpenStack Platform (RHOSP) SDK. With this update, the installation program works around the issue when there are zero objects to list in a Swift container. ( OCPBUGS-5112 ) 1.9.40.3. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.41. RHSA-2023:0241 - OpenShift Container Platform 4.10.50 bug fix and security update Issued: 2023-01-24 OpenShift Container Platform release 4.10.50, which includes security updates, is now available. The bug fixes that are included in the update are listed in the RHSA-2023:0241 advisory. The RPM packages that are included in the update are provided by the RHBA-2023:0240 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.50 --pullspecs 1.9.41.1. Bug fixes Previously, when rotating Red Hat OpenStack Platform (RHOSP) credentials, the Cinder Container Storage Interface driver would continue to use old credentials. Using any old credentials that were invalid would cause all volume operations to fail. With this update, the Cinder Container Storage Interface driver is updated automatically when the Red Hat OpenStack Platform (RHOSP) credentials are updated. ( OCPBUGS-4717 ) * Previously, pod failures were artificially extending the validity period of certificates causing them to incorrectly rotate. With this update, the certificate validity period is accurately determined, which helps certificates to rotate correctly. ( BZ#2020484 ) 1.9.41.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.42. RHSA-2023:0561 - OpenShift Container Platform 4.10.51 bug fix and security update Issued: 2023-02-08 OpenShift Container Platform release 4.10.51, which includes security updates, is now available. Bug fixes included in the update are listed in the RHSA-2023:0561 advisory. RPM packages included in the update are provided by the RHSA-2023:0560 advisory. 
You can view the container images in this release by running the following command: USD oc adm release info 4.10.51 --pullspecs 1.9.42.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.43. RHSA-2023:0698 - OpenShift Container Platform 4.10.52 bug fix and security update Issued: 2023-02-15 OpenShift Container Platform release 4.10.52, which includes security updates, is now available. Bug fixes included in the update are listed in the RHSA-2023:0698 advisory. RPM packages included in the update are provided by the RHSA-2023:0697 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.52 --pullspecs 1.9.43.1. Bug fixes Previously, when a Redfish system featured a Settings URI, the Ironic provisioning service always attempted to use this URI to make changes to boot-related BIOS settings. However, bare-metal provisioning failed if the Baseboard Management Controller (BMC) featured a Settings URI but did not support changing a particular BIOS setting by using this Settings URI. In OpenShift Container Platform 4.10 and later, if a system features a Settings URI, Ironic verifies that it can change a particular BIOS setting by using the Settings URI before proceeding. Otherwise, Ironic implements the change by using the System URI. This additional logic ensures that Ironic can apply boot-related BIOS setting changes and bare-metal provisioning can succeed. ( OCPBUGS-6886 ) Previously, due to a missing definition for spec.provider , the Operator details page failed when trying to show the ClusterServiceVersion . With this update, the user interface works without spec.provider and the Operator details page does not fail. ( OCPBUGS-6690 ) 1.9.43.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.44. RHSA-2023:0899 - OpenShift Container Platform 4.10.53 bug fix and security update Issued: 2023-03-01 OpenShift Container Platform release 4.10.53, which includes security updates, is now available. Bug fixes included in the update are listed in the RHSA-2023:0899 advisory. RPM packages included in the update are provided by the RHBA-2023:0898 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.53 --pullspecs 1.9.44.1. Bug fixes In order to be compatible with OpenStack clouds that do not have Swift installed, the Cluster Image Registry Operator (CIRO) has a mechanism for automatically choosing the storage back-end during the first boot. If Swift is available, Swift is used. Otherwise, a persistent volume claim (PVC) is issued and block storage is used. Previously, the CIRO would fall back to using a PVC when it failed to reach Swift. In particular, a lack of connectivity during the first boot would make CIRO fall back to using a PVC. With this change, a failure to reach the OpenStack API, or other incidental failures, cause CIRO to retry the probe. The fallback to PVC occurs only if the OpenStack catalog is correctly found, and it does not contain object storage, or if the current user does not have permission to list containers. ( OCPBUGS-5974 ) Previously, the User Provisioned Infrastructure (UPI) did not create a server group for compute machines.
OpenShift Container Platform 4.10 updates the UPI script, so that the script creates a server group for compute machines. The UPI script installation method now aligns with the installer-provisioned installation (IPI) method. ( OCPBUGS-2731 ) 1.9.44.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.45. RHSA-2023:1154 - OpenShift Container Platform 4.10.54 bug fix and security update Issued: 2023-03-15 OpenShift Container Platform release 4.10.54, which includes security updates, is now available. Bug fixes included in the update are listed in the RHSA-2023:1154 advisory. RPM packages included in the update are provided by the RHBA-2023:1153 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.54 --pullspecs 1.9.45.1. Bug fixes Previously, when editing any pipeline in the OpenShift Container Platform console, the correct data was not rendered in the Pipeline builder and YAML view configuration options, which prevented you from editing the pipeline in the Pipeline builder . With this update, data is parsed correctly and you can edit the pipeline using the builder. ( OCPBUGS-7657 ) 1.9.45.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.46. RHSA-2023:1392 - OpenShift Container Platform 4.10.55 bug fix and security update Issued: 2023-03-29 OpenShift Container Platform release 4.10.55, which includes security updates, is now available. Bug fixes included in the update are listed in the RHSA-2023:1392 advisory. RPM packages included in the update are provided by the RHBA-2023:1391 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.55 --pullspecs 1.9.46.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.47. RHSA-2023:1656 - OpenShift Container Platform 4.10.56 bug fix and security update Issued: 2023-04-12 OpenShift Container Platform release 4.10.56, which includes security updates, is now available. Bug fixes included in the update are listed in the RHSA-2023:1656 advisory. RPM packages included in the update are provided by the RHSA-2023:1655 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.56 --pullspecs 1.9.47.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.48. RHBA-2023:1782 - OpenShift Container Platform 4.10.57 bug fix update Issued: 2023-04-19 OpenShift Container Platform release 4.10.57 is now available. Bug fixes included in the update are listed in the RHBA-2023:1782 advisory. There are no RPM packages for this update. You can view the container images in this release by running the following command: USD oc adm release info 4.10.57 --pullspecs 1.9.48.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.49. 
RHBA-2023:1867 - OpenShift Container Platform 4.10.58 bug fix and security update Issued: 2023-04-26 OpenShift Container Platform release 4.10.58 is now available. Bug fixes included in the update are listed in the RHBA-2023:1867 advisory. RPM packages included in the update are provided by the RHSA-2023:1866 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.58 --pullspecs 1.9.49.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.50. RHBA-2023:2018 - OpenShift Container Platform 4.10.59 bug fix update Issued: 2023-05-03 OpenShift Container Platform release 4.10.59 is now available. Bug fixes included in the update are listed in the RHBA-2023:2018 advisory. RPM packages included in the update are provided by the RHSA-2023:2017 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.59 --pullspecs 1.9.50.1. Bug fixes Previously, the topology sidebar did not display updated information. When you updated the resources directly from the topology sidebar, you had to reopen the sidebar to see the changes. With this fix, the updated resources are displayed correctly. As a result, you can see the latest changes directly in the topology sidebar. ( OCPBUGS-12438 ) Previously, when creating a Secret , the Start Pipeline model created an invalid JSON value. As a result, the Secret was unusable and the PipelineRun could fail. With this fix, the Start Pipeline model creates a valid JSON value for the secret. Now, you can create valid secrets while starting a pipeline. ( OCPBUGS-7961 ) 1.9.50.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.51. RHBA-2023:3217 - OpenShift Container Platform 4.10.60 bug fix and security update Issued: 2023-05-24 OpenShift Container Platform release 4.10.60, which includes security updates, is now available. Bug fixes included in the update are listed in the RHBA-2023:3217 advisory. RPM packages included in the update are provided by the RHSA-2023:3216 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.60 --pullspecs 1.9.51.1. Features 1.9.51.1.1. Controls for the verbosity of MetalLB logs With this release, you can control the verbosity of MetalLB logs. You can control logging levels by using the following values for the logLevel specification in the MetalLB custom resource (CR): all debug info warn error none For example, you can specify the debug value to include diagnostic logging information that is helpful for troubleshooting. For more information about logging levels for MetalLB, see Setting the MetalLB logging levels ( OCPBUGS-11861 ) 1.9.51.2. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster within a minor version by using the CLI for instructions. 1.9.52. RHSA-2023:3363 - OpenShift Container Platform 4.10.61 bug fix and security update Issued: 2023-06-07 OpenShift Container Platform release 4.10.61, which includes security updates, is now available. Bug fixes included in the update are listed in the RHSA-2023:3363 advisory. RPM packages included in the update are provided by the RHSA-2023:3362 advisory. 
You can view the container images in this release by running the following command: USD oc adm release info 4.10.61 --pullspecs 1.9.52.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.53. RHSA-2023:3626 - OpenShift Container Platform 4.10.62 bug fix and security update Issued: 2023-06-23 OpenShift Container Platform release 4.10.62, which includes security updates, is now available. Bug fixes included in the update are listed in the RHBA-2023:3626 advisory. RPM packages included in the update are provided by the RHSA-2023:3625 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.62 --pullspecs 1.9.53.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.54. RHSA-2023:3911 - OpenShift Container Platform 4.10.63 bug fix and security update Issued: 2023-07-06 OpenShift Container Platform release 4.10.63, which includes security updates, is now available. This update includes a Red Hat security bulletin for customers who run OpenShift Container Platform in FIPS mode. For more information, see RHSB-2023:001 . Bug fixes included in the update are listed in the RHSA-2023:3911 advisory. RPM packages included in the update are provided by the RHSA-2023:3910 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.63 --pullspecs 1.9.54.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.55. RHBA-2023:4217 - OpenShift Container Platform 4.10.64 bug fix update Issued: 2023-07-26 OpenShift Container Platform release 4.10.64 is now available. Bug fixes included in the update are listed in the RHBA-2023:4217 advisory. RPM packages included in the update are provided by the RHBA-2023:4219 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.64 --pullspecs 1.9.55.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.56. RHBA-2023:4445 - OpenShift Container Platform 4.10.65 bug fix update Issued: 2023-08-09 OpenShift Container Platform release 4.10.65 is now available. Bug fixes included in the update are listed in the RHBA-2023:4445 advisory. RPM packages included in the update are provided by the RHBA-2023:4447 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.65 --pullspecs 1.9.56.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.57. RHBA-2023:4667 - OpenShift Container Platform 4.10.66 bug fix update Issued: 2023-08-23 OpenShift Container Platform release 4.10.66 is now available. Bug fixes included in the update are listed in the RHBA-2023:4667 advisory. RPM packages included in the update are provided by the RHBA-2023:4669 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.66 --pullspecs 1.9.57.1. 
Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. 1.9.58. RHBA-2023:4896 - OpenShift Container Platform 4.10.67 bug fix and security update Issued: 2023-09-06 OpenShift Container Platform release 4.10.67, which includes security updates, is now available. Bug fixes included in the update are listed in the RHBA-2023:4896 advisory. RPM packages included in the update are provided by the RHSA-2023:4898 advisory. You can view the container images in this release by running the following command: USD oc adm release info 4.10.67 --pullspecs 1.9.58.1. Updating To update an existing OpenShift Container Platform 4.10 cluster to this latest release, see Updating a cluster using the CLI for instructions. | [
"hub: params: - name: enable-devconsole-integration value: 'false'",
"[service] service.chronyd=stop,disable",
"az ad app list --filter \"displayname eq '<app_registration_name>'\" --query '[].objectId'",
"[ \"038c2538-7c40-49f5-abe5-f59c59c29244\" ]",
"az ad app delete --id 038c2538-7c40-49f5-abe5-f59c59c29244",
"## Snippet to remove unauthenticated group from all the cluster role bindings for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ; do ### Find the index of unauthenticated group in list of subjects index=USD(oc get clusterrolebinding USD{clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name==\"system:unauthenticated\") | index(true)'); ### Remove the element at index from subjects array patch clusterrolebinding USD{clusterrolebinding} --type=json --patch \"[{'op': 'remove','path': '/subjects/USDindex'}]\"; done",
"oc adm release info 4.10.3 --pullspecs",
"oc adm release info 4.10.4 --pullspecs",
"oc adm release info 4.10.5 --pullspecs",
"oc adm release info 4.10.6 --pullspecs",
"oc adm release info 4.10.8 --pullspecs",
"oc adm release info 4.10.9 --pullspecs",
"oc adm release info 4.10.10 --pullspecs",
"oc adm release info 4.10.11 --pullspecs",
"oc adm release info 4.10.12 --pullspecs",
"oc adm release info 4.10.13 --pullspecs",
"oc adm release info 4.10.14 --pullspecs",
"oc adm release info 4.10.15 --pullspecs",
"oc adm release info 4.10.16 --pullspecs",
"oc adm release info 4.10.17 --pullspecs",
"oc adm release info 4.10.18 --pullspecs",
"oc adm release info 4.10.20 --pullspecs",
"oc adm release info 4.10.21 --pullspecs",
"oc adm release info 4.10.22 --pullspecs",
"oc adm release info 4.10.23 --pullspecs",
"2022-01-21T00:14:44.697Z INFO controllers.ClusterGroupUpgrade Upgrade is completed 2022-01-21T00:14:44.892Z ERROR controller-runtime.manager.controller.clustergroupupgrade Reconciler error {\"reconciler group\": \"ran.openshift.io\", \"reconciler kind\": \"ClusterGroupUpgrade\", \"name\": \"timeout\", \"namespace\": \"default\", \"error\": \"Operation cannot be fulfilled on clustergroupupgrades.ran.openshift.io \\\"timeout\\\": the object has been modified; please apply your changes to the latest version and try again\"} sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:253 sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2 /go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:214",
"oc adm release info 4.10.24 --pullspecs",
"oc adm release info 4.10.25 --pullspecs",
"oc adm release info 4.10.26 --pullspecs",
"oc adm release info 4.10.28 --pullspecs",
"oc adm release info 4.10.30 --pullspecs",
"oc adm release info 4.10.31 --pullspecs",
"oc adm release info 4.10.32 --pullspecs",
"oc adm release info 4.10.33 --pullspecs",
"oc adm release info 4.10.34 --pullspecs",
"oc adm release info 4.10.35 --pullspecs",
"oc adm release info 4.10.36 --pullspecs",
"oc adm release info 4.10.37 --pullspecs",
"oc adm release info 4.10.38 --pullspecs",
"oc adm release info 4.10.39 --pullspecs",
"oc adm release info 4.10.40 --pullspecs",
"oc adm release info 4.10.41 --pullspecs",
"oc adm release info 4.10.42 --pullspecs",
"oc adm release info 4.10.43 --pullspecs",
"oc adm release info 4.10.45 --pullspecs",
"oc adm release info 4.10.46 --pullspecs",
"oc adm release info 4.10.47 --pullspecs",
"oc adm release info 4.10.50 --pullspecs",
"oc adm release info 4.10.51 --pullspecs",
"oc adm release info 4.10.52 --pullspecs",
"oc adm release info 4.10.53 --pullspecs",
"oc adm release info 4.10.54 --pullspecs",
"oc adm release info 4.10.55 --pullspecs",
"oc adm release info 4.10.56 --pullspecs",
"oc adm release info 4.10.57 --pullspecs",
"oc adm release info 4.10.58 --pullspecs",
"oc adm release info 4.10.59 --pullspecs",
"oc adm release info 4.10.60 --pullspecs",
"oc adm release info 4.10.61 --pullspecs",
"oc adm release info 4.10.62 --pullspecs",
"oc adm release info 4.10.63 --pullspecs",
"oc adm release info 4.10.64 --pullspecs",
"oc adm release info 4.10.65 --pullspecs",
"oc adm release info 4.10.66 --pullspecs",
"oc adm release info 4.10.67 --pullspecs"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/release_notes/ocp-4-10-release-notes |
31.2. Configure Cross-Datacenter Replication | 31.2. Configure Cross-Datacenter Replication 31.2.1. Configure Cross-Datacenter Replication (Remote Client-Server Mode) In Red Hat JBoss Data Grid's Remote Client-Server mode, cross-datacenter replication is set up as follows: Procedure 31.1. Set Up Cross-Datacenter Replication Set Up RELAY Add the following configuration to the standalone.xml file to set up RELAY : The RELAY protocol creates an additional stack (running parallel to the existing TCP stack) to communicate with the remote site. If a TCP based stack is used for the local cluster, two TCP based stack configurations are required: one for local communication and one to connect to the remote site. For an illustration, see Section 31.1, "Cross-Datacenter Replication Operations" Set Up Sites Use the following configuration in the standalone.xml file to set up sites for each distributed cache in the cluster: Configure Local Site Transport Add the name of the local site in the transport element to configure transport: Report a bug 31.2.2. Configure Cross-Data Replication (Library Mode) 31.2.2.1. Configure Cross-Datacenter Replication Declaratively When configuring Cross-Datacenter Replication, the relay.RELAY2 protocol creates an additional stack (running parallel to the existing TCP stack) to communicate with the remote site. If a TCP -based stack is used for the local cluster, two TCP based stack configurations are required: one for local communication and one to connect to the remote site. In JBoss Data Grid's Library mode, cross-datacenter replication is set up as follows: Procedure 31.2. Setting Up Cross-Datacenter Replication Configure the Local Site Add the site element to the global element to add the local site (in this example, the local site is named LON ). Cross-site replication requires a non-default JGroups configuration. Add the transport element and set up the path to the configuration file as the configurationFile property. In this example, the JGroups configuration file is named jgroups-with-relay.xml . Configure the cache in site LON to back up to the sites NYC and SFO . Configure the back up caches: Configure the cache in site NYC to receive back up data from LON . Configure the cache in site SFO to receive back up data from LON . Add the Contents of the Configuration File As a default, Red Hat JBoss Data Grid includes JGroups configuration files such as default-configs/default-jgroups-tcp.xml and default-configs/default-jgroups-udp.xml in the infinispan-embedded- {VERSION} .jar package. Copy the JGroups configuration to a new file (in this example, it is named jgroups-with-relay.xml ) and add the provided configuration information to this file. Note that the relay.RELAY2 protocol configuration must be the last protocol in the configuration stack. Configure the relay.xml File Set up the relay.RELAY2 configuration in the relay.xml file. This file describes the global cluster configuration. Configure the Global Cluster The file jgroups-global.xml referenced in relay.xml contains another JGroups configuration which is used for the global cluster: communication between sites. The global cluster configuration is usually TCP -based and uses the TCPPING protocol (instead of PING or MPING ) to discover members. Copy the contents of default-configs/default-jgroups-tcp.xml into jgroups-global.xml and add the following configuration in order to configure TCPPING : Replace the hostnames (or IP addresses) in TCPPING.initial_hosts with those used for your site masters. 
The ports ( 7800 in this example) must match the TCP.bind_port . For more information about the TCPPING protocol, see Section 26.2.1.3, "Using the TCPPing Protocol" Report a bug 31.2.2.2. Configure Cross-Datacenter Replication Programmatically The programmatic method to configure cross-datacenter replication in Red Hat JBoss Data Grid is as follows: Procedure 31.3. Configure Cross-Datacenter Replication Programmatically Identify the Node Location Declare the site the node resides in: Configure JGroups Configure JGroups to use the RELAY protocol: Set Up the Remote Site Set up JBoss Data Grid caches to replicate to the remote site: Optional: Configure the Backup Caches JBoss Data Grid implicitly replicates data to a cache with same name as the remote site. If a backup cache on the remote site has a different name, users must specify a backupFor cache to ensure data is replicated to the correct cache. Note This step is optional and only required if the remote site's caches are named differently from the original caches. Configure the cache in site NYC to receive backup data from LON : Configure the cache in site SFO to receive backup data from LON : Add the Contents of the Configuration File As a default, Red Hat JBoss Data Grid includes JGroups configuration files such as default-configs/default-jgroups-tcp.xml and default-configs/default-jgroups-udp.xml in the infinispan-embedded- {VERSION} .jar package. Copy the JGroups configuration to a new file (in this example, it is named jgroups-with-relay.xml ) and add the provided configuration information to this file. Note that the relay.RELAY2 protocol configuration must be the last protocol in the configuration stack. Configure the relay.xml File Set up the relay.RELAY2 configuration in the relay.xml file. This file describes the global cluster configuration. Configure the Global Cluster The file jgroups-global.xml referenced in relay.xml contains another JGroups configuration which is used for the global cluster: communication between sites. The global cluster configuration is usually TCP -based and uses the TCPPING protocol (instead of PING or MPING ) to discover members. Copy the contents of default-configs/default-jgroups-tcp.xml into jgroups-global.xml and add the following configuration in order to configure TCPPING : Replace the hostnames (or IP addresses) in TCPPING.initial_hosts with those used for your site masters. The ports ( 7800 in this example) must match the TCP.bind_port . For more information about the TCPPING protocol, see Section 26.2.1.3, "Using the TCPPing Protocol" Report a bug | [
"<subsystem xmlns=\"urn:jboss:domain:jgroups:1.2\" default-stack=\"udp\"> <stack name=\"udp\"> <transport type=\"UDP\" socket-binding=\"jgroups-udp\"/> <!-- Additional configuration elements here --> <relay site=\"LON\"> <remote-site name=\"NYC\" stack=\"tcp\" cluster=\"global\"/> <remote-site name=\"SFO\" stack=\"tcp\" cluster=\"global\"/> <property name=\"relay_multicasts\">false</property> </relay> </stack> </subsystem>",
"<distributed-cache> <!-- Additional configuration elements here --> <backups> <backup site=\"{FIRSTSITENAME}\" strategy=\"{SYNC/ASYNC}\" /> <backup site=\"{SECONDSITENAME}\" strategy=\"{SYNC/ASYNC}\" /> </backups> </distributed-cache>",
"<transport executor=\"infinispan-transport\" lock-timeout=\"60000\" cluster=\"LON\" stack=\"udp\"/>",
"<infinispan> <global> <site local=\"SFO\" /> <transport clusterName=\"default\"> <properties> <property name=\"configurationFile\" value=\"jgroups-with-relay.xml\"/> </properties> </transport> <!-- Additional configuration information here --> </global> <!-- Additional configuration information here --> <namedCache name=\"lonBackup\"> <sites> <backupFor remoteSite=\"LON\" remoteCache=\"lon\" /> </sites> </namedCache> </infinispan>",
"<config> <relay.RELAY2 site=\"LON\" config=\"relay.xml\" relay_multicasts=\"false\" /> </config>",
"<RelayConfiguration> <sites> <site name=\"LON\" id=\"0\"> <bridges> <bridge config=\"jgroups-global.xml\" name=\"global\"/> </bridges> </site> <site name=\"NYC\" id=\"1\"> <bridges> <bridge config=\"jgroups-global.xml\" name=\"global\"/> </bridges> </site> <site name=\"SFO\" id=\"2\"> <bridges> <bridge config=\"jgroups-global.xml\" name=\"global\"/> </bridges> </site> </sites> </RelayConfiguration>",
"<config> <TCP bind_port=\"7800\" ... /> <TCPPING initial_hosts=\"lon.hostname[7800],nyc.hostname[7800],sfo.hostname[7800]\" ergonomics=\"false\" /> <!-- Rest of the protocols --> </config>",
"globalConfiguration.site().localSite(\"LON\");",
"globalConfiguration.transport().addProperty(\"configurationFile\", jgroups-with-relay.xml);",
"ConfigurationBuilder lon = new ConfigurationBuilder(); lon.sites().addBackup() .site(\"NYC\") .backupFailurePolicy(BackupFailurePolicy.WARN) .strategy(BackupConfiguration.BackupStrategy.SYNC) .replicationTimeout(12000) .sites().addInUseBackupSite(\"NYC\") .sites().addBackup() .site(\"SFO\") .backupFailurePolicy(BackupFailurePolicy.IGNORE) .strategy(BackupConfiguration.BackupStrategy.ASYNC) .sites().addInUseBackupSite(\"SFO\")",
"ConfigurationBuilder NYCbackupOfLon = new ConfigurationBuilder(); NYCbackupOfLon.sites().backupFor().remoteCache(\"lon\").remoteSite(\"LON\");",
"ConfigurationBuilder SFObackupOfLon = new ConfigurationBuilder(); SFObackupOfLon.sites().backupFor().remoteCache(\"lon\").remoteSite(\"LON\");",
"<config> <!-- Additional configuration information here --> <relay.RELAY2 site=\"LON\" config=\"relay.xml\" relay_multicasts=\"false\" /> </config>",
"<RelayConfiguration> <sites> <site name=\"LON\" id=\"0\"> <bridges> <bridge config=\"jgroups-global.xml\" name=\"global\"/> </bridges> </site> <site name=\"NYC\" id=\"1\"> <bridges> <bridge config=\"jgroups-global.xml\" name=\"global\"/> </bridges> </site> <site name=\"SFO\" id=\"2\"> <bridges> <bridge config=\"jgroups-global.xml\" name=\"global\"/> </bridges> </site> </sites> </RelayConfiguration>",
"<config> <TCP bind_port=\"7800\" <!-- Additional configuration information here --> /> <TCPPING initial_hosts=\"lon.hostname[7800],nyc.hostname[7800],sfo.hostname[7800]\" ergonomics=\"false\" /> <!-- Rest of the protocols --> </config>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-Configure_Cross-Datacenter_Replication |
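The declarative and programmatic setups above both depend on the global (cross-site) JGroups stack being able to reach every site master listed in TCPPING.initial_hosts . A quick reachability check can save debugging time before starting the nodes. The following is a minimal sketch in shell, using bash's built-in /dev/tcp redirection; the hostnames lon.hostname , nyc.hostname , and sfo.hostname and port 7800 are taken from the example configuration above and must be replaced with the values used for your site masters.

# Check that each site master accepts TCP connections on the bridge port.
# The port must match TCP.bind_port in jgroups-global.xml.
PORT=7800
for host in lon.hostname nyc.hostname sfo.hostname; do
    if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${PORT}" 2>/dev/null; then
        echo "OK:   ${host}:${PORT} is reachable"
    else
        echo "FAIL: ${host}:${PORT} is not reachable - check firewalls and bind_port"
    fi
done

If any site master is unreachable, the RELAY2 bridge cannot form the global cluster, even though each local site may appear healthy on its own.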
Chapter 1. Introduction | Chapter 1. Introduction Red Hat Virtualization Manager provides a Representational State Transfer (REST) API. The API provides software developers and system administrators with control over their Red Hat Virtualization environment outside of the standard web interface. The REST API is useful for developers and administrators who aim to integrate the functionality of a Red Hat Virtualization environment with custom scripts or external applications that access the API via the standard Hypertext Transfer Protocol (HTTP). The benefits of the REST API are: Broad client support - Any programming language, framework, or system with support for the HTTP protocol can use the API; Self-descriptive - Client applications require minimal knowledge of the virtualization infrastructure because many details are discovered at runtime; Resource-based model - The resource-based REST model provides a natural way to manage a virtualization platform. This provides developers and administrators with the ability to: Integrate with enterprise IT systems. Integrate with third-party virtualization software. Perform automated maintenance or error-checking tasks. Automate repetitive tasks in a Red Hat Virtualization environment with scripts. This documentation acts as a reference to the Red Hat Virtualization Manager REST API. It aims to provide developers and administrators with instructions and examples to help harness the functionality of their Red Hat Virtualization environment through the REST API, either directly or by using the provided Python libraries. 1.1. Representational State Transfer Representational State Transfer (REST) is a design architecture that focuses on resources for a specific service and their representations. A resource representation is a key abstraction of information that corresponds to one specific managed element on a server. A client sends a request to a server element located at a Uniform Resource Identifier (URI) and performs operations with standard HTTP methods, such as GET , POST , PUT , and DELETE . This provides stateless communication between the client and server, where each request acts independently of any other request and contains all necessary information to complete the request. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/chap-introduction
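As a rough illustration of the resource-based model described above, the following sketch retrieves the API entry point with curl and a standard HTTP method. The Manager hostname rhvm.example.com , the admin@internal user, the certificate path, and the /api entry point are assumptions for version 3 of the API; verify them against your own deployment before use.

# List the top-level collections exposed by the Manager (sketch only).
MANAGER=rhvm.example.com
curl --request GET \
     --cacert /etc/pki/ovirt-ca.crt \
     --header "Accept: application/xml" \
     --user "admin@internal:password" \
     "https://${MANAGER}/api"

The same pattern extends to the other methods: POST to a collection URI creates a resource, PUT to a resource URI updates it, and DELETE removes it.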
Chapter 10. Configuring printing | Chapter 10. Configuring printing The Common UNIX Printing System (CUPS) manages printing on Red Hat Enterprise Linux. Users configure printers in CUPS on their host to print. Additionally, you can share printers in CUPS to use the host as a print server. CUPS supports printing to: AirPrintTM and IPP EverywhereTM printers Network and local USB printers with legacy PostScript Printer Description (PPD)-based drivers 10.1. Installing and configuring CUPS You can use CUPS to print from a local host. You can also use this host to share printers in the network and act as a print server. Procedure Install the cups package: If you configure a CUPS as a print server, edit the /etc/cups/cupsd.conf file, and make the following changes: If you want to remotely configure CUPS or use this host as a print server, configure on which IP addresses and ports the service listens: By default, CUPS listens only on localhost interfaces ( 127.0.0.1 and ::1 ). Specify IPv6 addresses in square brackets. Important Do not configure CUPS to listen on interfaces that allow access from untrustworthy networks, such as the internet. Configure which IP ranges can access the service by allowing the respective IP ranges in the <Location /> directive: In the <Location /admin> directive, configure which IP addresses and ranges can access the CUPS administration services: With these settings, only the hosts with the IP addresses 192.0.2.15 and 2001:db8:1::22 can access the administration services. Optional: Configure IP addresses and ranges that are allowed to access the configuration and log files in the web interface: If you run the firewalld service and want to configure remote access to CUPS, open the CUPS port in firewalld : If you run CUPS on a host with multiple interfaces, consider limiting the access to the required networks. Enable and start the cups service: Verification Use a browser, and access http:// <hostname> :631 . If you can connect to the web interface, CUPS works. Note that certain features, such as the Administration tab, require authentication and an HTTPS connection. By default, CUPS uses a self-signed certificate for HTTPS access and, consequently, the connection is not secure when you authenticate. steps Configuring TLS encryption on a CUPS server Optional: Granting administration permissions to manage a CUPS server in the web interface Adding a printer to CUPS by using the web interface Using and configuring firewalld 10.2. Configuring TLS encryption on a CUPS server CUPS supports TLS-encrypted connections and, by default, the service enforces encrypted connections for all requests that require authentication. If no certificates are configured, CUPS creates a private key and a self-signed certificate. This is only sufficient if you access CUPS from the local host itself. For a secure connection over the network, use a server certificate that is signed by a certificate authority (CA). Warning Without encryption or with a self-signed certificates, a man-in-the-middle (MITM) attack can disclose, for example: Credentials of administrators when configuring CUPS using the web interface Confidential data when sending print jobs over the network Prerequisites CUPS is configured . You created a private key , and a CA issued a server certificate for it. If an intermediate certificate is required to validate the server certificate, attach the intermediate certificate to the server certificate. 
The private key is not protected by a password because CUPS provides no option to enter the password when the service reads the key. The Canonical Name ( CN ) or Subject Alternative Name (SAN) field in the certificate matches one of the following: The fully-qualified domain name (FQDN) of the CUPS server An alias that the DNS resolves to the server's IP address The private key and server certificate files use the Privacy Enhanced Mail (PEM) format. Clients trust the CA certificate. Procedure Edit the /etc/cups/cups-files.conf file, and add the following setting to disable the automatic creation of self-signed certificates: Remove the self-signed certificate and private key: Optional: Display the FQDN of the server: Optional: Display the CN and SAN fields of the certificate: If the CN or SAN fields in the server certificate contains an alias that is different from the server's FQDN, add the ServerAlias parameter to the /etc/cups/cupsd.conf file: In this case, use the alternative name instead of the FQDN in the rest of the procedure. Store the private key and server certificate in the /etc/cups/ssl/ directory, for example: Important CUPS requires that you name the private key <fqdn> .key and the server certificate file <fqdn> .crt . If you use an alias, you must name the files <alias> .key and <alias>.crt . Set secure permissions on the private key that enable only the root user to read this file: Because certificates are part of the communication between a client and the server before they establish a secure connection, any client can retrieve the certificates without authentication. Therefore, you do not need to set strict permissions on the server certificate file. Restore the SELinux context: By default, CUPS enforces encrypted connections only if a task requires authentication, for example when performing administrative tasks on the /admin page in the web interface. To enforce encryption for the entire CUPS server, add Encryption Required to all <Location> directives in the /etc/cups/cupsd.conf file, for example: Restart CUPS: Verification Use a browser, and access https:// <hostname> :631/admin/ . If the connection succeeds, you configured TLS encryption in CUPS correctly. If you configured that encryption is required for the entire server, access http:// <hostname> :631/ . CUPS returns an Upgrade Required error in this case. Troubleshooting Display the systemd journal entries of the cups service: If the journal contains an Unable to encrypt connection: Error while reading file error after you failed to connect to the web interface by using the HTTPS protocol, verify the name of the private key and server certificate file. Additional resources How to configure CUPS to use a CA-signed TLS certificate in RHEL (Red Hat Knowledgebase) 10.3. Granting administration permissions to manage a CUPS server in the web interface By default, members of the sys , root , and wheel groups can perform administration tasks in the web interface. However, certain other services use these groups as well. For example, members of the wheel groups can, by default, execute commands with root permissions by using sudo . To avoid that CUPS administrators gain unexpected permissions in other services, use a dedicated group for CUPS administrators. Prerequisites CUPS is configured . The IP address of the client you want to use has permissions to access the administration area in the web interface. 
Procedure Create a group for CUPS administrators: Add the users who should manage the service in the web interface to the cups-admins group: Update the value of the SystemGroup parameter in the /etc/cups/cups-files.conf file, and append the cups-admin group: If only the cups-admin group should have administrative access, remove the other group names from the parameter. Restart CUPS: Verification Use a browser, and access https:// <hostname_or_ip_address> :631/admin/ . Note You can access the administration area in the web UI only if you use the HTTPS protocol. Start performing an administrative task. For example, click Add printer . The web interface prompts for a username and password. To proceed, authenticate by using credentials of a user who is a member of the cups-admins group. If authentication succeeds, this user can perform administrative tasks. 10.4. Overview of packages with printer drivers Red Hat Enterprise Linux (RHEL) provides different packages with printer drivers for CUPS. The following is a general overview of these packages and for which vendors they contain drivers: Table 10.1. Driver package list Package name Drivers for printers cups Zebra, Dymo c2esp Kodak foomatic Brother, Canon, Epson, Gestetner, HP, Infotec, Kyocera, Lanier, Lexmark, NRG, Ricoh, Samsung, Savin, Sharp, Toshiba, Xerox, and others gutenprint-cups Brother, Canon, Epson, Fujitsu, HP, Infotec, Kyocera, Lanier, NRG, Oki, Minolta, Ricoh, Samsung, Savin, Xerox, and others hplip HP pnm2ppa HP splix Samsung, Xerox, and others Note that some packages can contain drivers for the same printer vendor or model but with different functionality. After installing the required package, you can display the list of drivers in the CUPS web interface or by using the lpinfo -m command. 10.5. Determining whether a printer supports driverless printing CUPS supports driverless printing, which means that you can print without providing any hardware-specific software for the printer model. For this, the printer must inform the client about its capabilities and use one of the following standards: AirPrintTM IPP EverywhereTM Mopria(R) Wi-Fi Direct Print Services You can use the ipptool utility to find out whether a printer supports driverless printing. Prerequisites The printer or remote print server supports the Internet Printing Protocol (IPP). The host can connect to the IPP port of the printer or remote print server. The default IPP port is 631. Procedure Query the ipp-versions-supported and document-format-supported attributes, and ensure that get-printer-attributes test passes: For a remote printer, enter: For a queue on a remote print server, enter: To ensure that driverless printing works, verify in the output: The get-printer-attributes test returns PASS . The IPP version that the printer supports is 2.0 or higher. The list of formats contains one of the following: application/pdf image/urf image/pwg-raster For color printers, the output contains one of the mentioned formats and, additionally, image/jpeg . steps: Add a printer to CUPS by using the web interface Add a printer to CUPS by using the lpadmin utility 10.6. Adding a printer to CUPS by using the web interface Before users can print through CUPS, you must add printers. You can use both network printers and printers that are directly attached to the CUPS host, for example over USB. You can add printers by using the CUPS driverless feature or by using a PostScript Printer Description (PPD) file. 
Note CUPS prefers driverless printing, and using drivers is deprecated. Red Hat Enterprise Linux (RHEL) does not provide the name service switch multicast DNS plug-in ( nss-mdns ), which resolves requests by querying an mDNS responder. Consequently, automatic discovery and installation for local driverless printers by using mDNS is not available in RHEL. To work around this problem, install single printers manually or use cups-browsed to automatically install a high amount of print queues that are available on a remote print server. Prerequisites CUPS is configured . You have permissions in CUPS to manage printers . If you use CUPS as a print server, you configured TLS encryption to securely transmit data over the network. The printer supports driverless printing , if you want to use this feature. Procedure Use a browser, and access https:// <hostname> :631/admin/ . You must connect to the web interface by using the HTTPS protocol. Otherwise, CUPS prevents you from authenticating in a later step for security reasons. Click Add printer . If you are not already authenticated, CUPS prompts for credentials of an administrative user. Enter the username and password of an authorized user. If you decide to not use driverless printing and the printer you want to add is detected automatically, select it, and click Continue . If the printer was not detected: Select the protocol that the printer supports. If your printer supports driverless printing and you want to use this feature, select the ipp or ipps protocol. Click Continue . Enter the URL to the printer or to the queue on a remote print server. Click Continue . Enter a name and, optionally, a description and location. If you use CUPS as a print server, and other clients should be able to print through CUPS on this printer, select also Share this printer . Select the printer manufacturer in the Make list. If the printer manufacturer is not on the list, select Generic or upload a PPD file for the printer. Click Continue . Select the printer model: If the printer supports driverless printing, select IPP Everywhere . Note that, if you previously installed printer-specific drivers locally, it is possible that the list also contains entries such as <printer_name> - IPP Everywhere . If the printer does not support driverless printing, select the model or upload the PPD file for the printer. Click Add Printer The settings and tabs on the Set printer options page depend on the driver and the features the printer supports. Use this page to set default options, such as for the paper size. Click Set default options . Verification Open the Printers tab in the web interface. Click on the printer's name. In the Maintenance list, select Print test page . Troubleshooting If you use driverless printing, and printing does not work, use the lpadmin utility to add the printer on the command line. For details, see Adding a printer to CUPS by using the lpadmin utility . 10.7. Adding a printer to CUPS by using the lpadmin utility Before users can print through CUPS, you must add printers. You can use both network printers and printers that are directly attached to the CUPS host, for example over USB. You can add printers by using the CUPS driverless feature or by using a PostScript Printer Description (PPD) file. Note CUPS prefers driverless printing, and using drivers is deprecated. Red Hat Enterprise Linux (RHEL) does not provide the name service switch multicast DNS plug-in ( nss-mdns ), which resolves requests by querying an mDNS responder. 
Consequently, automatic discovery and installation for local driverless printers by using mDNS is not available in RHEL. To work around this problem, install single printers manually or use cups-browsed to automatically install a high amount of print queues that are available on a remote print server. Prerequisites CUPS is configured . The printer supports driverless printing , if you want to use this feature. The printer accepts data on port 631 (IPP), 9100 (socket), or 515 (LPD). The port depends on the method you use to connect to the printer. Procedure Add the printer to CUPS: To add a printer with driverless support, enter: If the -m everywhere option does not work for your printer, try -m driverless: <uri> , for example: -m driverless: ipp://192.0.2.200/ipp/print . To add a queue from a remote print server with driverless support, enter: If the -m everywhere option does not work for your printer, try -m driverless: <uri> , for example: -m driverless: ipp://192.0.2.200/printers/example-queue . To add a printer with a driver in file, enter: To add a queue from a remote print server with a driver in a file, enter: To add a printer with a driver in the local driver database: List the drivers in the database: Add the printer with the URI to the driver in the database: These commands uses the following options: -p <printer_name> : Sets the name of the printer in CUPS. -E : Enables the printer and CUPS accepts jobs for it. Note that you must specify this option after -p . See the option's description in the man page for further details. -v <uri> : Sets the URI to the printer or remote print server queue. -m <driver_uri> : Sets the PPD file based on the provided driver URI obtained from the local driver database. -P <PPD_file> : Sets the path to the PPD file. Verification Display the available printers: Print a test page: 10.8. Performing maintenance and administration tasks on CUPS printers by using the web interface Printer administrators sometimes need to perform different tasks on a print server. For example: Maintenance tasks, such as temporary pausing a printer while a technician repairs a printer Administrative tasks, such as changing a printer's default settings You can perform these tasks by using the CUPS web interface. Prerequisites CUPS is configured . You have permissions in CUPS to manage printers . If you use CUPS as a print server, you configured TLS encryption to not send credentials in plain text over the network. The printer already exists in CUPS . Procedure Use a browser, and access https:// <hostname> :631/printers/ . You must connect to the web interface by using the HTTPS protocol. Otherwise, CUPS prevents you from authenticating in a later step for security reasons. Click on the name of the printer that you want to configure. Depending on whether you want to perform a maintenance or administration task, select the required action from the corresponding list: If you are not already authenticated, CUPS prompts for credentials of an administrative user. Enter the username and password of an authorized user. Perform the task. 10.9. Using Samba to print to a Windows print server with Kerberos authentication With the samba-krb5-printing wrapper, Active Directory (AD) users who are logged in to Red Hat Enterprise Linux (RHEL) can authenticate to Active Directory (AD) by using Kerberos and then print to a local CUPS print server that forwards the print job to a Windows print server. 
The benefit of this configuration is that the administrator of CUPS on RHEL does not need to store a fixed user name and password in the configuration. CUPS authenticates to AD with the Kerberos ticket of the user that sends the print job. Note Red Hat supports only submitting print jobs to CUPS from your local system, and not to re-share a printer on a Samba print server. Prerequisites The printer that you want to add to the local CUPS instance is shared on an AD print server. You joined the RHEL host as a member to the AD. CUPS is installed on RHEL, and the cups service is running. The PostScript Printer Description (PPD) file for the printer is stored in the /usr/share/cups/model/ directory. Procedure Install the samba-krb5-printing , samba-client , and krb5-workstation packages: Optional: Authenticate as a domain administrator and display the list of printers that are shared on the Windows print server: Optional: Display the list of CUPS models to identify the PPD name of your printer: You require the name of the PPD file when you add the printer in the step. Add the printer to CUPS: The command uses the following options: -p printer_name sets the name of the printer in CUPS. -v URI_to_Windows_printer sets the URI to the Windows printer. Use the following format: smb:// host_name / printer_share_name . -m PPD_file sets the PPD file the printer uses. -o auth-info-required=negotiate configures CUPS to use Kerberos authentication when it forwards print jobs to the remote server. -E enables the printer and CUPS accepts jobs for the printer. Verification Log into the RHEL host as an AD domain user. Authenticate as an AD domain user: Print a file to the printer you added to the local CUPS print server: 10.10. Using cups-browsed to locally integrate printers from a remote print server The cups-browsed service uses DNS service discovery (DNS-SD) and CUPS browsing to make all or a filtered subset of shared remote printers automatically available in a local CUPS service. For example, administrators can use this feature on workstations to make only printers from a trusted print server available in a print dialog of applications. It is also possible to configure cups-browsed to filter the browsed printers by certain criteria to reduce the number of listed printers if a print server shares a large number of printers. Note If the print dialog in an application uses other mechanisms than, for example DNS-SD, to list remote printers, cups-browsed has no influence. The cups-browsed service also does not prevent users from manually accessing non-listed printers. Prerequisites The CUPS service is configured on the local host . A remote CUPS print server exists, and the following conditions apply to this server: The server listens on an interface that is accessible from the client. The Allow from parameter in the server's <Location /> directive in the /etc/cups/cups.conf file allows access from the client's IP address. The server shares printers. Firewall rules allow access from the client to the CUPS port on the server. Procedure Edit the /etc/cups/cups-browsed.conf file, and make the following changes: Add BrowsePoll parameters for each remote CUPS server you want to poll: Append : <port> to the hostname or IP address if the remote CUPS server listens on a port different from 631. Optional: Configure a filter to limit which printers are shown in the local CUPS service. 
For example, to filter for queues whose name contain sales_ , add: You can filter by different field names, negate the filter, and match the exact values. For further details, see the parameter description and examples in the cups-browsed.conf(5) man page on your system. Optional: Change the polling interval and timeout to limit the number of browsing cycles: Increase both BrowseInterval and BrowseTimeout in the same ratio to avoid situations in which printers disappear from the browsing list. This mean, multiply the value of BrowseInterval by 5 or a higher integer, and use this result value for BrowseTimeout . By default, cups-browsed polls remote servers every 60 seconds and the timeout is 300 seconds. However, on print servers with many queues, these default values can cost many resources. Enable and start the cups-browsed service: Verification List the available printers: If the output for a printer contains implicitclass , cups-browsed manages the printer in CUPS. Additional resources cups-browsed.conf(5) man page on your system 10.11. Accessing the CUPS logs in the systemd journal By default, CUPS stores log messages in the systemd journal. This includes: Error messages Access log entries Page log entries Prerequisites CUPS is installed . Procedure Display the log entries: To display all log entries, enter: To display the log entries for a specific print job, enter: To display log entries within a specific time frame, enter: Replace YYYY with the year, MM with the month, and DD with the day. Additional resources journalctl(1) man page on your system 10.12. Configuring CUPS to store logs in files instead of the systemd journal By default, CUPS stores log messages in the systemd journal. Alternatively, you can configure CUPS to store log messages in files. Prerequisites CUPS is installed . Procedure Edit the /etc/cups/cups-files.conf file, and set the AccessLog , ErrorLog , and PageLog parameters to the paths where you want to store these log files: If you configure CUPS to store the logs in a directory other than /var/log/cups/ , set the cupsd_log_t SELinux context on this directory, for example: Restart the cups service: Verification Display the log files: If you configured CUPS to store the logs in a directory other than /var/log/cups/ , verify that the SELinux context on the log directory is cupsd_log_t : 10.13. Setting up a high-availability CUPS print server environment If your clients require access to printers without interruption, you can set up CUPS on multiple hosts and use the print queue browsing feature to provide high availability. Print clients then automatically configure print queues shared by the different print servers. If a client sends a print job to its local print queue, CUPS on the client routes the job to one of the print servers which processes the job and sends it to the printer. Procedure Set up CUPS on two or more servers: Install and configure CUPS . Enable TLS encryption . Add print queues to all CUPS instances by using the lpadmin utility or the web interface . If you use the web interface, ensure that you select the Share this printer option while you add the printer. The lpadmin utility enables this setting by default. Important For the high-availability scenario, each queue on one print server requires a queue with exactly the same queue name on the other servers. You can display the queue names on each server by using the lpstat -e command. Optional: You can configure the queues on each server to refer to different printers. 
On print clients: Edit the /etc/cups/cups-browsed.conf file, and add BrowsePoll directives for each CUPS print server: Enable and start both the cups and cups-browsed service: Verification Display the available printers on a client: The example output shows that the Demo-printer queue uses the implicitclass back end. As a result, cups-browsed routes print jobs for this queue to the hosts specified in the BrowsePoll directives on this client. Additional resources High-availability printing in Red Hat Enterprise Linux (Red Hat Knowledgebase) 10.14. Accessing the CUPS documentation CUPS provides browser-based access to the service's documentation that is installed on the CUPS server. This documentation includes: Administration documentation, such as for command-line printer administration and accounting Man pages Programming documentation, such as the administration API References Specifications Prerequisites CUPS is installed and running . The IP address of the client you want to use has permissions to access the web interface. Procedure Use a browser, and access http:// <hostname_or_ip_address> :631/help/ : Expand the entries in Online Help Documents , and select the documentation you want to read. | [
"yum install cups",
"Listen 192.0.2.1:631 Listen [2001:db8:1::1]:631",
"<Location /> Allow from 192.0.2.0/24 Allow from [2001:db8:1::1]/32 Order allow,deny </Location>",
"<Location /admin> Allow from 192.0.2.15/32 Allow from [2001:db8:1::22]/128 Order allow,deny </Location>",
"<Location /admin/conf> Allow from 192.0.2.15/32 Allow from [2001:db8:1::22]/128 </Location> <Location /admin/log> Allow from 192.0.2.15/32 Allow from [2001:db8:1::22]/128 </Location>",
"firewall-cmd --permanent --add-port=631/tcp firewall-cmd --reload",
"systemctl enable --now cups",
"CreateSelfSignedCerts no",
"rm /etc/cups/ssl/ <hostname> .crt /etc/cups/ssl/ <hostname> .key",
"hostname -f server.example.com",
"openssl x509 -text -in /etc/cups/ssl/ server.example.com.crt Certificate: Data: Subject: CN = server.example.com X509v3 extensions: X509v3 Subject Alternative Name: DNS:server.example.com",
"ServerAlias alternative_name.example.com",
"mv /root/server.key /etc/cups/ssl/ server.example.com.key mv /root/server.crt /etc/cups/ssl/ server.example.com.crt",
"chown root:root /etc/cups/ssl/ server.example.com.key chmod 600 /etc/cups/ssl/ server.example.com.key",
"restorecon -Rv /etc/cups/ssl/",
"<Location /> Encryption Required </Location>",
"systemctl restart cups",
"journalctl -u cups",
"groupadd cups-admins",
"usermod -a -G cups-admins <username>",
"SystemGroup sys root wheel cups-admins",
"systemctl restart cups",
"ipptool -tv ipp:// <ip_address_or_hostname> :631/ipp/print get-printer-attributes.test | grep -E \"ipp-versions-supported|document-format-supported|get-printer-attributes\" Get printer attributes using get-printer-attributes [PASS] ipp-versions-supported (1setOf keyword) = document-format-supported (1setOf mimeMediaType) =",
"ipptool -tv ipp:// <ip_address_or_hostname> :631/printers/ <queue_name> get-printer-attributes.test | grep -E \"ipp-versions-supported|document-format-supported|get-printer-attributes\" Get printer attributes using get-printer-attributes [PASS] ipp-versions-supported (1setOf keyword) = document-format-supported (1setOf mimeMediaType) =",
"lpadmin -p Demo-printer -E -v ipp://192.0.2.200/ipp/print -m everywhere",
"lpadmin -p Demo-printer -E -v ipp://192.0.2.201/printers/example-queue -m everywhere",
"lpadmin -p Demo-printer -E -v socket://192.0.2.200/ -P /root/example.ppd",
"lpadmin -p Demo-printer -E -v ipp://192.0.2.201/printers/example-queue -P /root/example.ppd",
"lpinfo -m drv:///sample.drv/generpcl.ppd Generic PCL Laser Printer",
"lpadmin -p Demo-printer -E -v socket://192.0.2.200/ -m drv:///sample.drv/generpcl.ppd",
"lpstat -p printer Demo-printer is idle. enabled since Fri 23 Jun 2023 09:36:40 AM CEST",
"lp -d Demo-printer /usr/share/cups/data/default-testpage.pdf",
"yum install samba-krb5-printing samba-client krb5-workstation",
"smbclient -L win_print_srv.ad.example.com -U administrator @ AD_KERBEROS_REALM --use-kerberos=required Sharename Type Comment --------- ---- ------- Example Printer Example",
"lpinfo -m samsung.ppd Samsung M267x 287x Series PXL",
"lpadmin -p \" example_printer \" -v smb:// win_print_srv.ad.example.com / Example -m samsung.ppd -o auth-info-required=negotiate -E",
"kinit domain_user_name @ AD_KERBEROS_REALM",
"lp -d example_printer file",
"BrowsePoll remote_cups_server.example.com BrowsePoll 192.0.2.100:1631",
"BrowseFilter name sales_",
"BrowseInterval 1200 BrowseTimeout 6000",
"systemctl enable --now cups-browsed",
"lpstat -v device for Demo-printer : implicitclass:// Demo-printer /",
"journalctl -u cups",
"journalctl -u cups JID= <print_job_id>",
"journalectl -u cups --since= <YYYY-MM-DD> --until= <YYYY-MM-DD>",
"AccessLog /var/log/cups/access_log ErrorLog /var/log/cups/error_log PageLog /var/log/cups/page_log",
"semanage fcontext -a -t cupsd_log_t \" /var/log/printing (/.*)?\" restorecon -Rv /var/log/printing/",
"systemctl restart cups",
"cat /var/log/cups/access_log cat /var/log/cups/error_log cat /var/log/cups/page_log",
"ls -ldZ /var/log/printing/ drwxr-xr-x. 2 lp sys unconfined_u:object_r: cupsd_log_t :s0 6 Jun 20 15:55 /var/log/printing/",
"BrowsePoll print_server_1.example.com:631 BrowsePoll print_server_2.example.com:631",
"systemctl enable --now cups cups-browsed",
"lpstat -t device for Demo-printer: implicitclass://Demo-printer/ Demo-printer accepting requests since Fri 22 Nov 2024 11:54:59 AM CET printer Demo-printer is idle. enabled since Fri 22 Nov 2024 11:54:59 AM CET"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/configuring-printing_deploying-different-types-of-servers |
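For the high-availability scenario described in the printing chapter above, every print server must expose a shared queue with exactly the same name. The following is a minimal sketch of scripting that requirement from an administration host; it assumes root SSH access to the print servers and reuses the example driverless printer URI from the chapter, so adjust the server names, queue name, and device URI for your environment.

# Create an identically named, driverless queue on each print server,
# then confirm that the queue exists and is enabled.
QUEUE=Demo-printer
PRINTER_URI=ipp://192.0.2.200/ipp/print
for server in print_server_1.example.com print_server_2.example.com; do
    ssh root@"${server}" lpadmin -p "${QUEUE}" -E -v "${PRINTER_URI}" -m everywhere
    ssh root@"${server}" lpstat -p "${QUEUE}"
done

Clients that list both servers in BrowsePoll directives then see a single implicitclass queue, and cups-browsed routes print jobs to whichever server is available.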
5.3. Securing NIS | 5.3. Securing NIS NIS stands for Network Information Service . It is an RPC service, called ypserv , which is used in conjunction with portmap and other related services to distribute maps of usernames, passwords, and other sensitive information to any computer claiming to be within its domain. An NIS server consists of several applications, including the following: /usr/sbin/rpc.yppasswdd - Also called the yppasswdd service, this daemon allows users to change their NIS passwords. /usr/sbin/rpc.ypxfrd - Also called the ypxfrd service, this daemon is responsible for NIS map transfers over the network. /usr/sbin/yppush - This application propagates changed NIS databases to multiple NIS servers. /usr/sbin/ypserv - This is the NIS server daemon. NIS is rather insecure by today's standards. It has no host authentication mechanisms and passes all of its information over the network unencrypted, including password hashes. As a result, extreme care must be taken when setting up a network that uses NIS. Further complicating the situation, the default configuration of NIS is inherently insecure. It is recommended that anyone planning to implement an NIS server first secure the portmap service as outlined in Section 5.2, "Securing Portmap" , and then address the following issues, beginning with network planning. 5.3.1. Carefully Plan the Network Because NIS passes sensitive information unencrypted over the network, it is important that the service be run behind a firewall and on a segmented and secure network. Any time NIS information is passed over an insecure network, it risks being intercepted. Careful network design in this regard can help prevent severe security breaches. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-server-nis
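Beyond careful network design, ypserv itself can be limited to the networks it should answer. A common first step is a /var/yp/securenets file that restricts the NIS server to a trusted subnet, combined with packet filter rules in front of the portmap port. The sketch below assumes a trusted subnet of 192.168.0.0/24 and uses iptables; treat it as an outline rather than a complete firewall policy.

# Allow NIS map requests only from localhost and the trusted subnet.
cat > /var/yp/securenets << 'EOF'
host 127.0.0.1
255.255.255.0   192.168.0.0
EOF

# Permit portmap (TCP/UDP port 111) from the trusted subnet and drop the rest.
iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 111 -j ACCEPT
iptables -A INPUT -p udp -s 192.168.0.0/24 --dport 111 -j ACCEPT
iptables -A INPUT -p tcp --dport 111 -j DROP
iptables -A INPUT -p udp --dport 111 -j DROP

# Restart the NIS server so that it re-reads securenets.
service ypserv restart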
Chapter 5. Running Red Hat JBoss Data Grid with Apache Camel | Chapter 5. Running Red Hat JBoss Data Grid with Apache Camel Apache Camel is an open source integration and routing system that allows transference of messages from various sources to different destinations, providing an integration framework that allows interaction with various systems using the same API, regardless of the protocol or data type. Using Camel with Red Hat JBoss Data Grid and Red Hat JBoss Fuse simplifies integration in large enterprise applications by providing a wide variety of transports and APIs that add connectivity. JBoss Data Grid provides support for caching on Camel routes in JBoss Fuse, partially replacing Ehcache. JBoss Data Grid is supported as an embedded cache (local or clustered) or as a remote cache in a Camel route. Report a bug 5.1. The camel-jbossdatagrid Component Red Hat JBoss Data Grid's camel-jbossdatagrid component includes the following features: Local Camel Consumer Receives cache change notifications and sends them to be processed. This can be done synchronously or asynchronously, and is also supported with a replicated or distributed cache. Local Camel Producer A producer creates and sends messages to an endpoint. The camel-jbossdatagrid producer uses GET , PUT , REMOVE , and CLEAR operations. The local producer is also supported with a replicated or distributed cache. Remote Camel Producer In Remote Client-Server mode, the Camel producer can send messages using Hot Rod. Remote Camel Consumer In Client-Server mode, receives cache change notifications and sends them to be processed. The events are processed asynchronously. The following camel-jbossdatagrid dependency must be added to the pom.xml file to run JBoss Data Grid with Camel: Note The camel-jbossdatagrid component ships with JBoss Data Grid, and is not included in the JBoss Fuse 6.1 or JBoss Fuse Service Works 6.0 distributions. Camel components are the main extension point in Camel, and are associated with the name used in a URI, and act as a factory of endpoints. For example, a FileComponent is referred to in a URI as file , which creates FileEndpoints . URI Format The following URI format is used for camel-jbossdatagrid : URI Options The producer can create and send messages to a local or remote JBoss Data Grid cache configured in the registry. If a cacheContainer is present, the cache will be either local or remote, depending on whether the cacheContainer instance is a DefaultCacheManager or RemoteCacheManager . If it is not present, the cache will try to connect to remote cache using the supplied hostname/port. A consumer listens for events from the local JBoss Data Grid cache accessible from the registry. Table 5.1. URI Options Name Default Value Type Context Description cacheContainer null CacheContainer Shared Reference to a org.infinispan.manager.CacheContainer in the Registry. cacheName null String Shared The cache name to use. If not specified, the default cache is used. command PUT String Producer The operation to perform. Only the PUT, GET, REMOVE, and CLEAR values are currently supported. eventTypes null Set<String> Consumer A comma separated list of the event types to register. By default, this listens for all event types. Possible values are defined in org.infinispan.notifications.cachelistener.event.Event.Type . Example: sync true Boolean Consumer By default the consumer will receive notifications synchronously by the same thread that process the cache operation. 
Remote HotRod listeners support only asynchronous notification. clustered false Boolean Consumer By default the consumer will only receive local events. By using this option, the consumer also listens to events originated on other nodes in the cluster. The only events available for clustered listeners are CACHE_ENTRY_CREATED , CACHE_ENTRY_REMOVED , and CACHE_ENTRY_MODIFIED . Camel Operations A list of all available operations, along with their header information, is found below: Table 5.2. Put Operations Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationPut Embedded / Remote Puts a key/value pair in the cache, optionally with expiration CamelInfinispanKey , CamelInfinispanValue CamelInfinispanLifespanTime , CamelInfinispanLifespanTimeUnit , CamelInfinispanMaxIdleTime , CamelInfinispanMaxIdleTimeUnit , CamelInfinispanIgnoreReturnValues CamelInfinispanOperationResult CamelInfinispanOperationPutAsync Asynchronously puts a key/value pair in the cache, optionally with expiration CamelInfinispanOperationPutIfAbsent Puts a key/value pair in the cache if it did not exist, optionally with expiration CamelInfinispanOperationPutIfAbsentAsync Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration Table 5.3. Put All Operations Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationPutAll Embedded / Remote Adds multiple entries to a cache, optionally with expiration CamelInfinispanMap CamelInfinispanLifespanTime , CamelInfinispanLifespanTimeUnit , CamelInfinispanMaxIdleTime , CamelInfinispanMaxIdleTimeUnit CamelInfinispanOperationPutAllAsync Asynchronously adds multiple entries to a cache, optionally with expiration Table 5.4. Get Operation Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationGet Embedded / Remote Retrieves the value associated with a specific key from the cache CamelInfinispanKey Table 5.5. Contains Key Operation Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationContainsKey Embedded / Remote Determines whether a cache contains a specific key CamelInfinispanKey CamelInfinispanOperationResult Table 5.6. Contains Value Operation Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationContainsValue Embedded / Remote Determines whether a cache contains a specific value CamelInfinispanKey Table 5.7. Remove Operations Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationRemove Embedded / Remote Removes an entry from a cache, optionally only if the value matches a given one CamelInfinispanKey CamelInfinispanValue CamelInfinispanOperationResult CamelInfinispanOperationRemoveAsync Asynchronously removes an entry from a cache, optionally only if the value matches a given one Table 5.8. 
Replace Operations Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationReplace Embedded / Remote Conditionally replaces an entry in the cache, optionally with expiration CamelInfinispanKey , CamelInfinispanValue , CamelInfinispanOldValue CamelInfinispanLifespanTime , CamelInfinispanLifespanTimeUnit , CamelInfinispanMaxIdleTime , CamelInfinispanMaxIdleTimeUnit , CamelInfinispanIgnoreReturnValues CamelInfinispanOperationResult CamelInfinispanOperationReplaceAsync Asynchronously conditionally replaces an entry in the cache, optionally with expiration Table 5.9. Clear Operation Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationClear Embedded / Remote Clears the cache Table 5.10. Size Operation Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationSize Embedded / Remote Returns the number of entries in the cache CamelInfinispanOperationResult Table 5.11. Query Operation Operation Name Context Description Required Headers Optional Headers Result Header CamelInfinispanOperationQuery Remote Executes a query on the cache CamelInfinispanQueryBuilder CamelInfinispanOperationResult Note Any operations that take CamelInfinispanIgnoreReturnValues will receive a null result. Table 5.12. Message Headers Name Default Value Type Context Description CamelInfinispanCacheName null String Shared The cache participating in the operation or event. CamelInfinispanMap null Map Producer A Map to use in case of the CamelInfinispanOperationPutAll operation. CamelInfinispanKey null Object Shared The key to perform the operation to or the key generating the event. CamelInfinispanValue null Object Producer The value to use for the operation. CamelInfinispanOperationResult null Object Producer The result of the operation. CamelInfinispanEventType null String Consumer For local cache listeners (non-clustered), one of the following values: CACHE_ENTRY_ACTIVATED , CACHE_ENTRY_PASSIVATED , CACHE_ENTRY_VISITED , CACHE_ENTRY_LOADED , CACHE_ENTRY_EVICTED , CACHE_ENTRY_CREATED , CACHE_ENTRY_REMOVED , CACHE_ENTRY_MODIFIED For remote HotRod listeners, one of the following values: CLIENT_CACHE_ENTRY_CREATED , CLIENT_CACHE_ENTRY_MODIFIED , CLIENT_CACHE_ENTRY_REMOVED , CLIENT_CACHE_FAILOVER . CamelInfinispanIsPre null Boolean Consumer Infinispan fires two events for each operation when local non-clustered listener is used: one before and one after the operation. For clustered listeners and remote HotRod listeners, Infinispan fires only one event after the operation. CamelInfinispanQueryBuilder null InfinispanQueryBuilder Producer An instance of InfinispanQueryBuilder that, in its build() , defines the query to be executed on the cache. CamelInfinispanLifespanTime null long Producer The Lifespan time of a value inside the cache. Negative values are interpreted as infinity. CamelInfinispanTimeUnit null String Producer The Time Unit of an entry Lifespan Time. CamelInfinispanMaxIdleTime null long Producer The maximum amount of time an entry is allowed to be idle for before it is considered as expired. CamelInfinispanMaxIdleTimeUnit null String Producer The Time Unit of an entry Max Idle Time. Report a bug | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jbossdatagrid</artifactId> <version>6.6.1.Final-redhat-1</version> <!-- use the same version as your JBoss Data Grid version --> </dependency>",
"infinispan://hostname?[options]",
"...?eventTypes=CACHE_ENTRY_EXPIRED,CACHE_ENTRY_EVICTED,"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-Running_Red_Hat_JBoss_Data_Grid_with_Apache_Camel |
Chapter 9. Configuring and maintaining a Dovecot IMAP and POP3 server | Chapter 9. Configuring and maintaining a Dovecot IMAP and POP3 server Dovecot is a high-performance mail delivery agent (MDA) with a focus on security. You can use IMAP or POP3-compatible email clients to connect to a Dovecot server and read or download emails. Key features of Dovecot: The design and implementation focuses on security Two-way replication support for high availability to improve the performance in large environments Supports the high-performance dbox mailbox format, but also mbox and Maildir for compatibility reasons Self-healing features, such as fixing broken index files Compliance with the IMAP standards Workaround support to bypass bugs in IMAP and POP3 clients 9.1. Setting up a Dovecot server with PAM authentication Dovecot supports the Name Service Switch (NSS) interface as a user database and the Pluggable Authentication Modules (PAM) framework as an authentication backend. With this configuration, Dovecot can provide services to users who are available locally on the server through NSS. Use PAM authentication if accounts: Are defined locally in the /etc/passwd file Are stored in a remote database but they are available locally through the System Security Services Daemon (SSSD) or other NSS plugins. 9.1.1. Installing Dovecot The dovecot package provides: The dovecot service and the utilities to maintain it Services that Dovecot starts on demand, such as for authentication Plugins, such as server-side mail filtering Configuration files in the /etc/dovecot/ directory Documentation in the /usr/share/doc/dovecot/ directory Procedure Install the dovecot package: Note If Dovecot is already installed and you require clean configuration files, rename or remove the /etc/dovecot/ directory. Afterwards, reinstall the package. Without removing the configuration files, the yum reinstall dovecot command does not reset the configuration files in /etc/dovecot/ . step Configuring TLS encryption on a Dovecot server . 9.1.2. Configuring TLS encryption on a Dovecot server Dovecot provides a secure default configuration. For example, TLS is enabled by default to transmit credentials and data encrypted over networks. To configure TLS on a Dovecot server, you only need to set the paths to the certificate and private key files. Additionally, you can increase the security of TLS connections by generating and using Diffie-Hellman parameters to provide perfect forward secrecy (PFS). Prerequisites Dovecot is installed. The following files have been copied to the listed locations on the server: The server certificate: /etc/pki/dovecot/certs/server.example.com.crt The private key: /etc/pki/dovecot/private/server.example.com.key The Certificate Authority (CA) certificate: /etc/pki/dovecot/certs/ca.crt The hostname in the Subject DN field of the server certificate matches the server's Fully-qualified Domain Name (FQDN). Procedure Set secure permissions on the private key file: Generate a file with Diffie-Hellman parameters: Depending on the hardware and entropy on the server, generating Diffie-Hellman parameters with 4096 bits can take several minutes. 
Set the paths to the certificate and private key files in the /etc/dovecot/conf.d/10-ssl.conf file: Update the ssl_cert and ssl_key parameters, and set them to use the paths of the server's certificate and private key: Uncomment the ssl_ca parameter, and set it to use the path to the CA certificate: Uncomment the ssl_dh parameter, and set it to use the path to the Diffie-Hellman parameters file: Important To ensure that Dovecot reads the value of a parameter from a file, the path must start with a leading < character. step Preparing Dovecot to use virtual users Additional resources /usr/share/doc/dovecot/wiki/SSL.DovecotConfiguration.txt 9.1.3. Preparing Dovecot to use virtual users By default, Dovecot performs many actions on the file system as the user who uses the service. However, configuring the Dovecot back end to use one local user to perform these actions has several benefits: Dovecot performs file system actions as a specific local user instead of using the user's ID (UID). Users do not need to be available locally on the server. You can store all mailboxes and user-specific files in one root directory. Users do not require a UID and group ID (GID), which reduces administration efforts. Users who have access to the file system on the server cannot compromise their mailboxes or indexes because they cannot access these files. Setting up replication is easier. Prerequisites Dovecot is installed. Procedure Create the vmail user: Dovecot will later use this user to manage the mailboxes. For security reasons, do not use the dovecot or dovenull system users for this purpose. If you use a different path than /var/mail/ , set the mail_spool_t SELinux context on it, for example: Grant write permissions on /var/mail/ only to the vmail user: Uncomment the mail_location parameter in the /etc/dovecot/conf.d/10-mail.conf file, and set it to the mailbox format and location: With this setting: Dovecot uses the high-performant dbox mailbox format in single mode. In this mode, the service stores each mail in a separate file, similar to the maildir format. Dovecot resolves the %n variable in the path to the username. This is required to ensure that each user has a separate directory for its mailbox. step Using PAM as the Dovecot authentication backend . Additional resources /usr/share/doc/dovecot/wiki/VirtualUsers.txt /usr/share/doc/dovecot/wiki/MailLocation.txt /usr/share/doc/dovecot/wiki/MailboxFormat.dbox.txt /usr/share/doc/dovecot/wiki/Variables.txt 9.1.4. Using PAM as the Dovecot authentication backend By default, Dovecot uses the Name Service Switch (NSS) interface as the user database and the Pluggable Authentication Modules (PAM) framework as the authentication backend. Customize the settings to adapt Dovecot to your environment and to simplify administration by using the virtual users feature. Prerequisites Dovecot is installed. The virtual users feature is configured. Procedure Update the first_valid_uid parameter in the /etc/dovecot/conf.d/10-mail.conf file to define the lowest user ID (UID) that can authenticate to Dovecot: By default, users with a UID greater than or equal to 1000 can authenticate. If required, you can also set the last_valid_uid parameter to define the highest UID that Dovecot allows to log in. In the /etc/dovecot/conf.d/auth-system.conf.ext file, add the override_fields parameter to the userdb section as follows: Due to the fixed values, Dovecot does not query these settings from the /etc/passwd file. 
As a result, the home directory defined in /etc/passwd does not need to exist. step Complete the Dovecot configuration . Additional resources /usr/share/doc/dovecot/wiki/PasswordDatabase.PAM.txt /usr/share/doc/dovecot/wiki/VirtualUsers.Home.txt 9.1.5. Completing the Dovecot configuration Once you have installed and configured Dovecot, open the required ports in the firewalld service, and enable and start the service. Afterwards, you can test the server. Prerequisites The following has been configured in Dovecot: TLS encryption An authentication backend Clients trust the Certificate Authority (CA) certificate. Procedure If you want to provide only an IMAP or POP3 service to users, uncomment the protocols parameter in the /etc/dovecot/dovecot.conf file, and set it to the required protocols. For example, if you do not require POP3, set: By default, the imap , pop3 , and lmtp protocols are enabled. Open the ports in the local firewall. For example, to open the ports for the IMAPS, IMAP, POP3S, and POP3 protocols, enter: Enable and start the dovecot service: Verification Use a mail client, such as Mozilla Thunderbird, to connect to Dovecot and read emails. The settings for the mail client depend on the protocol you want to use: Table 9.1. Connection settings to the Dovecot server Protocol Port Connection security Authentication method IMAP 143 STARTTLS PLAIN [a] IMAPS 993 SSL/TLS PLAIN [a] POP3 110 STARTTLS PLAIN [a] POP3S 995 SSL/TLS PLAIN [a] [a] The client transmits data encrypted through the TLS connection. Consequently, credentials are not disclosed. Note that this table does not list settings for unencrypted connections because, by default, Dovecot does not accept plain text authentication on connections without TLS. Display configuration settings with non-default values: Additional resources firewall-cmd(1) man page on your system 9.2. Setting up a Dovecot server with LDAP authentication If your infrastructure uses an LDAP server to store accounts, you can authenticate Dovecot users against it. In this case, you manage accounts centrally in the directory, and users do not require local access to the file system on the Dovecot server. Centrally managed accounts are also a benefit if you plan to set up multiple Dovecot servers with replication to make your mailboxes highly available. 9.2.1. Installing Dovecot The dovecot package provides: The dovecot service and the utilities to maintain it Services that Dovecot starts on demand, such as for authentication Plugins, such as server-side mail filtering Configuration files in the /etc/dovecot/ directory Documentation in the /usr/share/doc/dovecot/ directory Procedure Install the dovecot package: Note If Dovecot is already installed and you require clean configuration files, rename or remove the /etc/dovecot/ directory. Afterwards, reinstall the package. Without removing the configuration files, the yum reinstall dovecot command does not reset the configuration files in /etc/dovecot/ . step Configuring TLS encryption on a Dovecot server . 9.2.2. Configuring TLS encryption on a Dovecot server Dovecot provides a secure default configuration. For example, TLS is enabled by default to transmit credentials and data encrypted over networks. To configure TLS on a Dovecot server, you only need to set the paths to the certificate and private key files. Additionally, you can increase the security of TLS connections by generating and using Diffie-Hellman parameters to provide perfect forward secrecy (PFS). Prerequisites Dovecot is installed. 
The following files have been copied to the listed locations on the server: The server certificate: /etc/pki/dovecot/certs/server.example.com.crt The private key: /etc/pki/dovecot/private/server.example.com.key The Certificate Authority (CA) certificate: /etc/pki/dovecot/certs/ca.crt The hostname in the Subject DN field of the server certificate matches the server's Fully-qualified Domain Name (FQDN). Procedure Set secure permissions on the private key file: Generate a file with Diffie-Hellman parameters: Depending on the hardware and entropy on the server, generating Diffie-Hellman parameters with 4096 bits can take several minutes. Set the paths to the certificate and private key files in the /etc/dovecot/conf.d/10-ssl.conf file: Update the ssl_cert and ssl_key parameters, and set them to use the paths of the server's certificate and private key: Uncomment the ssl_ca parameter, and set it to use the path to the CA certificate: Uncomment the ssl_dh parameter, and set it to use the path to the Diffie-Hellman parameters file: Important To ensure that Dovecot reads the value of a parameter from a file, the path must start with a leading < character. step Preparing Dovecot to use virtual users Additional resources /usr/share/doc/dovecot/wiki/SSL.DovecotConfiguration.txt 9.2.3. Preparing Dovecot to use virtual users By default, Dovecot performs many actions on the file system as the user who uses the service. However, configuring the Dovecot back end to use one local user to perform these actions has several benefits: Dovecot performs file system actions as a specific local user instead of using the user's ID (UID). Users do not need to be available locally on the server. You can store all mailboxes and user-specific files in one root directory. Users do not require a UID and group ID (GID), which reduces administration efforts. Users who have access to the file system on the server cannot compromise their mailboxes or indexes because they cannot access these files. Setting up replication is easier. Prerequisites Dovecot is installed. Procedure Create the vmail user: Dovecot will later use this user to manage the mailboxes. For security reasons, do not use the dovecot or dovenull system users for this purpose. If you use a different path than /var/mail/ , set the mail_spool_t SELinux context on it, for example: Grant write permissions on /var/mail/ only to the vmail user: Uncomment the mail_location parameter in the /etc/dovecot/conf.d/10-mail.conf file, and set it to the mailbox format and location: With this setting: Dovecot uses the high-performant dbox mailbox format in single mode. In this mode, the service stores each mail in a separate file, similar to the maildir format. Dovecot resolves the %n variable in the path to the username. This is required to ensure that each user has a separate directory for its mailbox. step Using LDAP as the Dovecot authentication backend . Additional resources /usr/share/doc/dovecot/wiki/VirtualUsers.txt /usr/share/doc/dovecot/wiki/MailLocation.txt /usr/share/doc/dovecot/wiki/MailboxFormat.dbox.txt /usr/share/doc/dovecot/wiki/Variables.txt 9.2.4. Using LDAP as the Dovecot authentication backend Users in an LDAP directory can usually authenticate themselves to the directory service. Dovecot can use this to authenticate users when they log in to the IMAP and POP3 services. This authentication method has a number of benefits, such as: Administrators can manage users centrally in the directory. The LDAP accounts do not require any special attributes. 
They only need to be able to authenticate to the LDAP server. Consequently, this method is independent from the password storage scheme used on the LDAP server. Users do not need to be available locally on the server through the Name Service Switch (NSS) interface and the Pluggable Authentication Modules (PAM) framework. Prerequisites Dovecot is installed. The virtual users feature is configured. Connections to the LDAP server support TLS encryption. RHEL on the Dovecot server trusts the Certificate Authority (CA) certificate of the LDAP server. If users are stored in different trees in the LDAP directory, a dedicated LDAP account for Dovecot exists to search the directory. This account requires permissions to search for Distinguished Names (DNs) of other users. Procedure Configure the authentication backends in the /etc/dovecot/conf.d/10-auth.conf file: Comment out include statements for auth-*.conf.ext authentication backend configuration files that you do not require, for example: Enable LDAP authentication by uncommenting the following line: Edit the /etc/dovecot/conf.d/auth-ldap.conf.ext file, and add the override_fields parameter as follows to the userdb section: Due to the fixed values, Dovecot does not query these settings from the LDAP server. Consequently, these attributes also do not have to be present. Create the /etc/dovecot/dovecot-ldap.conf.ext file with the following settings: Depending on the LDAP structure, configure one of the following: If users are stored in different trees in the LDAP directory, configure dynamic DN lookups: Dovecot uses the specified DN, password, and filter to search the DN of the authenticating user in the directory. In this search, Dovecot replaces %n in the filter with the username. Note that the LDAP search must return only one result. If all users are stored under a specific entry, configure a DN template: Enable authentication binds to the LDAP server to verify Dovecot users: Set the URL to the LDAP server: For security reasons, only use encrypted connections using LDAPS or the STARTTLS command over the LDAP protocol. For the latter, additionally add tls = yes to the settings. For a working certificate validation, the hostname of the LDAP server must match the hostname used in its TLS certificate. Enable the verification of the LDAP server's TLS certificate: Set the base DN to the DN where to start searching for users: Set the search scope: Dovecot searches with the onelevel scope only in the specified base DN and with the subtree scope also in subtrees. Set secure permissions on the /etc/dovecot/dovecot-ldap.conf.ext file: step Complete the Dovecot configuration . Additional resources /usr/share/doc/dovecot/example-config/dovecot-ldap.conf.ext /usr/share/doc/dovecot/wiki/UserDatabase.Static.txt /usr/share/doc/dovecot/wiki/AuthDatabase.LDAP.txt /usr/share/doc/dovecot/wiki/AuthDatabase.LDAP.AuthBinds.txt /usr/share/doc/dovecot/wiki/AuthDatabase.LDAP.PasswordLookups.txt 9.2.5. Completing the Dovecot configuration Once you have installed and configured Dovecot, open the required ports in the firewalld service, and enable and start the service. Afterwards, you can test the server. Prerequisites The following has been configured in Dovecot: TLS encryption An authentication backend Clients trust the Certificate Authority (CA) certificate. Procedure If you want to provide only an IMAP or POP3 service to users, uncomment the protocols parameter in the /etc/dovecot/dovecot.conf file, and set it to the required protocols. 
For example, if you do not require POP3, set: By default, the imap , pop3 , and lmtp protocols are enabled. Open the ports in the local firewall. For example, to open the ports for the IMAPS, IMAP, POP3S, and POP3 protocols, enter: Enable and start the dovecot service: Verification Use a mail client, such as Mozilla Thunderbird, to connect to Dovecot and read emails. The settings for the mail client depend on the protocol you want to use: Table 9.2. Connection settings to the Dovecot server Protocol Port Connection security Authentication method IMAP 143 STARTTLS PLAIN [a] IMAPS 993 SSL/TLS PLAIN [a] POP3 110 STARTTLS PLAIN [a] POP3S 995 SSL/TLS PLAIN [a] [a] The client transmits data encrypted through the TLS connection. Consequently, credentials are not disclosed. Note that this table does not list settings for unencrypted connections because, by default, Dovecot does not accept plain text authentication on connections without TLS. Display configuration settings with non-default values: Additional resources firewall-cmd(1) man page on your system 9.3. Setting up a Dovecot server with MariaDB SQL authentication If you store users and passwords in a MariaDB SQL server, you can configure Dovecot to use it as the user database and authentication backend. With this configuration, you manage accounts centrally in a database, and users have no local access to the file system on the Dovecot server. Centrally managed accounts are also a benefit if you plan to set up multiple Dovecot servers with replication to make your mailboxes highly available. 9.3.1. Installing Dovecot The dovecot package provides: The dovecot service and the utilities to maintain it Services that Dovecot starts on demand, such as for authentication Plugins, such as server-side mail filtering Configuration files in the /etc/dovecot/ directory Documentation in the /usr/share/doc/dovecot/ directory Procedure Install the dovecot package: Note If Dovecot is already installed and you require clean configuration files, rename or remove the /etc/dovecot/ directory. Afterwards, reinstall the package. Without removing the configuration files, the yum reinstall dovecot command does not reset the configuration files in /etc/dovecot/ . step Configuring TLS encryption on a Dovecot server . 9.3.2. Configuring TLS encryption on a Dovecot server Dovecot provides a secure default configuration. For example, TLS is enabled by default to transmit credentials and data encrypted over networks. To configure TLS on a Dovecot server, you only need to set the paths to the certificate and private key files. Additionally, you can increase the security of TLS connections by generating and using Diffie-Hellman parameters to provide perfect forward secrecy (PFS). Prerequisites Dovecot is installed. The following files have been copied to the listed locations on the server: The server certificate: /etc/pki/dovecot/certs/server.example.com.crt The private key: /etc/pki/dovecot/private/server.example.com.key The Certificate Authority (CA) certificate: /etc/pki/dovecot/certs/ca.crt The hostname in the Subject DN field of the server certificate matches the server's Fully-qualified Domain Name (FQDN). Procedure Set secure permissions on the private key file: Generate a file with Diffie-Hellman parameters: Depending on the hardware and entropy on the server, generating Diffie-Hellman parameters with 4096 bits can take several minutes. 
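Optionally, and outside the documented procedure, you can verify that the copied certificate and private key belong together before continuing. For an RSA key pair, the two digests below must be identical; the file names are the ones assumed in the prerequisites:

openssl x509 -noout -modulus -in /etc/pki/dovecot/certs/server.example.com.crt | openssl md5
openssl rsa -noout -modulus -in /etc/pki/dovecot/private/server.example.com.key | openssl md5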
Set the paths to the certificate and private key files in the /etc/dovecot/conf.d/10-ssl.conf file: Update the ssl_cert and ssl_key parameters, and set them to use the paths of the server's certificate and private key: Uncomment the ssl_ca parameter, and set it to use the path to the CA certificate: Uncomment the ssl_dh parameter, and set it to use the path to the Diffie-Hellman parameters file: Important To ensure that Dovecot reads the value of a parameter from a file, the path must start with a leading < character. step Preparing Dovecot to use virtual users Additional resources /usr/share/doc/dovecot/wiki/SSL.DovecotConfiguration.txt 9.3.3. Preparing Dovecot to use virtual users By default, Dovecot performs many actions on the file system as the user who uses the service. However, configuring the Dovecot back end to use one local user to perform these actions has several benefits: Dovecot performs file system actions as a specific local user instead of using the user's ID (UID). Users do not need to be available locally on the server. You can store all mailboxes and user-specific files in one root directory. Users do not require a UID and group ID (GID), which reduces administration efforts. Users who have access to the file system on the server cannot compromise their mailboxes or indexes because they cannot access these files. Setting up replication is easier. Prerequisites Dovecot is installed. Procedure Create the vmail user: Dovecot will later use this user to manage the mailboxes. For security reasons, do not use the dovecot or dovenull system users for this purpose. If you use a different path than /var/mail/ , set the mail_spool_t SELinux context on it, for example: Grant write permissions on /var/mail/ only to the vmail user: Uncomment the mail_location parameter in the /etc/dovecot/conf.d/10-mail.conf file, and set it to the mailbox format and location: With this setting: Dovecot uses the high-performant dbox mailbox format in single mode. In this mode, the service stores each mail in a separate file, similar to the maildir format. Dovecot resolves the %n variable in the path to the username. This is required to ensure that each user has a separate directory for its mailbox. step Using a MariaDB SQL database as the Dovecot authentication backend Additional resources /usr/share/doc/dovecot/wiki/VirtualUsers.txt /usr/share/doc/dovecot/wiki/MailLocation.txt /usr/share/doc/dovecot/wiki/MailboxFormat.dbox.txt /usr/share/doc/dovecot/wiki/Variables.txt 9.3.4. Using a MariaDB SQL database as the Dovecot authentication backend Dovecot can read accounts and passwords from a MariaDB database and use it to authenticate users when they log in to the IMAP or POP3 service. The benefits of this authentication method include: Administrators can manage users centrally in a database. Users have no access locally on the server. Prerequisites Dovecot is installed. The virtual users feature is configured. Connections to the MariaDB server support TLS encryption. The dovecotDB database exists in MariaDB, and the users table contains at least a username and password column. The password column contains passwords encrypted with a scheme that Dovecot supports. The passwords either use the same scheme or have a { pw-storage-scheme } prefix. The dovecot MariaDB user has read permission on the users table in the dovecotDB database. The certificate of the Certificate Authority (CA) that issued the MariaDB server's TLS certificate is stored on the Dovecot server in the /etc/pki/tls/certs/ca.crt file. 
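The procedure does not prescribe how the prerequisite database objects are created. One possible way to satisfy them, run in the MariaDB client (for example, mysql -u root -p) as an administrative user, is sketched below; the dovecotDB database, the users table with its username and password columns, and the dovecot account come from the prerequisites, while the client host name and the password are placeholders. Store password values in a scheme that Dovecot supports, for example SHA512-CRYPT:

CREATE DATABASE dovecotDB;
CREATE TABLE dovecotDB.users (
    username VARCHAR(64) PRIMARY KEY,
    password VARCHAR(255) NOT NULL
);
CREATE USER 'dovecot'@'dovecot-srv.example.com' IDENTIFIED BY 'dovecotPW';
GRANT SELECT ON dovecotDB.users TO 'dovecot'@'dovecot-srv.example.com';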
Procedure Install the dovecot-mysql package: Configure the authentication backends in the /etc/dovecot/conf.d/10-auth.conf file: Comment out include statements for auth-*.conf.ext authentication backend configuration files that you do not require, for example: Enable SQL authentication by uncommenting the following line: Edit the /etc/dovecot/conf.d/auth-sql.conf.ext file, and add the override_fields parameter to the userdb section as follows: Due to the fixed values, Dovecot does not query these settings from the SQL server. Create the /etc/dovecot/dovecot-sql.conf.ext file with the following settings: To use TLS encryption to the database server, set the ssl_ca option to the path of the certificate of the CA that issued the MariaDB server certificate. For a working certificate validation, the hostname of the MariaDB server must match the hostname used in its TLS certificate. If the password values in the database contain a { pw-storage-scheme } prefix, you can omit the default_pass_scheme setting. The queries in the file must be set as follows: For the user_query parameter, the query must return the username of the Dovecot user. The query must also return only one result. For the password_query parameter, the query must return the username and the password, and Dovecot must use these values in the user and password variables. Therefore, if the database uses different column names, use the AS SQL command to rename a column in the result. For the iterate_query parameter, the query must return a list of all users. Set secure permissions on the /etc/dovecot/dovecot-sql.conf.ext file: step Complete the Dovecot configuration . Additional resources /usr/share/doc/dovecot/example-config/dovecot-sql.conf.ext /usr/share/doc/dovecot/wiki/Authentication.PasswordSchemes.txt 9.3.5. Completing the Dovecot configuration Once you have installed and configured Dovecot, open the required ports in the firewalld service, and enable and start the service. Afterwards, you can test the server. Prerequisites The following has been configured in Dovecot: TLS encryption An authentication backend Clients trust the Certificate Authority (CA) certificate. Procedure If you want to provide only an IMAP or POP3 service to users, uncomment the protocols parameter in the /etc/dovecot/dovecot.conf file, and set it to the required protocols. For example, if you do not require POP3, set: By default, the imap , pop3 , and lmtp protocols are enabled. Open the ports in the local firewall. For example, to open the ports for the IMAPS, IMAP, POP3S, and POP3 protocols, enter: Enable and start the dovecot service: Verification Use a mail client, such as Mozilla Thunderbird, to connect to Dovecot and read emails. The settings for the mail client depend on the protocol you want to use: Table 9.3. Connection settings to the Dovecot server Protocol Port Connection security Authentication method IMAP 143 STARTTLS PLAIN [a] IMAPS 993 SSL/TLS PLAIN [a] POP3 110 STARTTLS PLAIN [a] POP3S 995 SSL/TLS PLAIN [a] [a] The client transmits data encrypted through the TLS connection. Consequently, credentials are not disclosed. Note that this table does not list settings for unencrypted connections because, by default, Dovecot does not accept plain text authentication on connections without TLS. Display configuration settings with non-default values: Additional resources firewall-cmd(1) man page on your system 9.4. 
Configuring replication between two Dovecot servers With two-way replication, you can make your Dovecot server high-available, and IMAP and POP3 clients can access a mailbox on both servers. Dovecot keeps track of changes in the index logs of each mailbox and solves conflicts in a safe way. Perform this procedure on both replication partners. Note Replication works only between server pairs. Consequently, in a large cluster, you need multiple independent backend pairs. Prerequisites Both servers use the same authentication backend. Preferably, use LDAP or SQL to maintain accounts centrally. The Dovecot user database configuration supports user listing. Use the doveadm user '*' command to verify this. Dovecot accesses mailboxes on the file system as the vmail user instead of the user's ID (UID). Procedure Create the /etc/dovecot/conf.d/10-replication.conf file and perform the following steps in it: Enable the notify and replication plug-ins: Add a service replicator section: With these settings, Dovecot starts at least one replicator process when the dovecot service starts. Additionally, this section defines the settings on the replicator-doveadm socket. Add a service aggregator section to configure the replication-notify-fifo pipe and replication-notify socket: Add a service doveadm section to define the port of the replication service: Set the password of the doveadm replication service: The password must be the same on both servers. Configure the replication partner: Optional: Define the maximum number of parallel dsync processes: The default value of replication_max_conns is 10 . Set secure permissions on the /etc/dovecot/conf.d/10-replication.conf file: Enable the nis_enabled SELinux Boolean to allow Dovecot to open the doveadm replication port: Configure firewalld rules to allow only the replication partner to access the replication port, for example: The subnet masks /32 for the IPv4 and /128 for the IPv6 address limit the access to the specified addresses. Perform this procedure also on the other replication partner. Reload Dovecot: Verification Perform an action in a mailbox on one server and then verify if Dovecot has replicated the change to the other server. Display the replicator status: Display the replicator status of a specific user: Additional resources dsync(1) man page on your system /usr/share/doc/dovecot/wiki/Replication.txt 9.5. Automatically subscribing users to IMAP mailboxes Typically, IMAP server administrators want Dovecot to automatically create certain mailboxes, such as Sent and Trash , and subscribe the users to them. You can set this in the configuration files. Additionally, you can define special-use mailboxes . IMAP clients often support defining mailboxes for special purposes, such as for sent emails. To avoid that the user has to manually select and set the correct mailboxes, IMAP servers can send a special-use attribute in the IMAP LIST command. Clients can then use this attribute to identify and set, for example, the mailbox for sent emails. Prerequisites Dovecot is configured. Procedure Update the inbox namespace section in the /etc/dovecot/conf.d/15-mailboxes.conf file: Add the auto = subscribe setting to each special-use mailbox that should be available to users, for example: If your mail clients support more special-use mailboxes, you can add similar entries. The special_use parameter defines the value that Dovecot sends in the special-use attribute to the clients. 
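If your clients also use an archive folder, you can add a similar entry inside the same namespace inbox section; the \Archive attribute is defined in RFC 6154, which is listed in the additional resources of this section, while the mailbox name Archive is only an example:

mailbox Archive {
  special_use = \Archive
  auto = subscribe
}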
Optional: If you want to define other mailboxes that have no special purpose, add mailbox sections for them in the user's inbox, for example: You can set the auto parameter to one of the following values: subscribe : Automatically creates the mailbox and subscribes the user to it. create : Automatically creates the mailbox without subscribing the user to it. no (default): Dovecot neither creates the mailbox nor does it subscribe the user to it. Reload Dovecot: Verification Use an IMAP client and access your mailbox. Mailboxes with the setting auto = subscribe are automatically visible. If the client supports special-use mailboxes and the defined purposes, the client automatically uses them. Additional resources RFC 6154: IMAP LIST Extension for Special-Use Mailboxes /usr/share/doc/dovecot/wiki/MailboxSettings.txt 9.6. Configuring an LMTP socket and LMTPS listener SMTP servers, such as Postfix, use the Local Mail Transfer Protocol (LMTP) to deliver emails to Dovecot. If the SMTP server runs: On the same host as Dovecot, use an LMTP socket On a different host, use an LMTP service By default, the LMTP protocol is not encrypted. However, if you configured TLS encryption, Dovecot uses the same settings automatically for the LMTP service. SMTP servers can then connect to it using the LMTPS protocol or the STARTTLS command over LMTP. Prerequisites Dovecot is installed. If you want to configure an LMTP service, TLS encryption is configured in Dovecot. Procedure Verify that the LMTP protocol is enabled: The protocol is enabled, if the output contains lmtp . If the lmtp protocol is disabled, edit the /etc/dovecot/dovecot.conf file, and append lmtp to the values in the protocols parameter: Depending on whether you need an LMTP socket or service, make the following changes in the service lmtp section in the /etc/dovecot/conf.d/10-master.conf file: LMTP socket: By default, Dovecot automatically creates the /var/run/dovecot/lmtp socket. Optional: Customize the ownership and permissions: LMTP service: Add a inet_listener sub-section: Configure firewalld rules to allow only the SMTP server to access the LMTP port, for example: The subnet masks /32 for the IPv4 and /128 for the IPv6 address limit the access to the specified addresses. Reload Dovecot: Verification If you configured the LMTP socket, verify that Dovecot has created the socket and that the permissions are correct: Configure the SMTP server to submit emails to Dovecot using the LMTP socket or service. When you use the LMTP service, ensure that the SMTP server uses the LMTPS protocol or sends the STARTTLS command to use an encrypted connection. Additional resources /usr/share/doc/dovecot/wiki/LMTP.txt 9.7. Disabling the IMAP or POP3 service in Dovecot By default, Dovecot provides IMAP and POP3 services. If you require only one of them, you can disable the other to reduce the surface for attack. Prerequisites Dovecot is installed. Procedure Uncomment the protocols parameter in the /etc/dovecot/dovecot.conf file, and set it to use the required protocols. For example, if you do not require POP3, set: By default, the imap , pop3 , and lmtp protocols are enabled. Reload Dovecot: Close the ports that are no longer required in the local firewall. For example, to close the ports for the POP3S and POP3 protocols, enter: Verification Display all ports in LISTEN mode opened by the dovecot process: In this example, Dovecot listens only on the TCP ports 993 (IMAPS) and 143 (IMAP). 
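You can also confirm the resulting protocol list straight from the configuration; with the example above, doveconf reports only imap and lmtp:

doveconf protocols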
Note that Dovecot only opens a port for the LMTP protocol if you configure the service to listen on a port instead of using a socket. Additional resources firewall-cmd(1) man page on your system 9.8. Enabling server-side email filtering using Sieve on a Dovecot IMAP server You can upload Sieve scripts to a server using the ManageSieve protocol. Sieve scripts define rules and actions that a server should validate and perform on incoming emails. For example, users can use Sieve to forward emails from a specific sender, and administrators can create a global filter to move mails flagged by a spam filter into a separate IMAP folder. The ManageSieve plugin adds support for Sieve scripts and the ManageSieve protocol to a Dovecot IMAP server. Warning Use only clients that support using the ManageSieve protocol over TLS connections. Disabling TLS for this protocol causes clients to send credentials in plain text over the network. Prerequisites Dovecot is configured and provides IMAP mailboxes. TLS encryption is configured in Dovecot. The mail clients support the ManageSieve protocol over TLS connections. Procedure Install the dovecot-pigeonhole package: Uncomment the following line in /etc/dovecot/conf.d/20-managesieve.conf to enable the sieve protocol: This setting activates Sieve in addition to the other protocols that are already enabled. Open the ManageSieve port in firewalld : Reload Dovecot: Verification Use a client and upload a Sieve script. Use the following connection settings: Port: 4190 Connection security: SSL/TLS Authentication method: PLAIN Send an email to the user who has the Sieve script uploaded. If the email matches the rules in the script, verify that the server performs the defined actions. Additional resources /usr/share/doc/dovecot/wiki/Pigeonhole.Sieve.Plugins.IMAPSieve.txt /usr/share/doc/dovecot/wiki/Pigeonhole.Sieve.Troubleshooting.txt firewall-cmd(1) man page on your system 9.9. How Dovecot processes configuration files The dovecot package provides the main configuration file /etc/dovecot/dovecot.conf and multiple configuration files in the /etc/dovecot/conf.d/ directory. Dovecot combines the files to build the configuration when you start the service. The main benefit of multiple config files is to group settings and increase readability. If you prefer a single configuration file, you can instead maintain all settings in /etc/dovecot/dovecot.conf and remove all include and include_try statements from that file. Additional resources /usr/share/doc/dovecot/wiki/ConfigFile.txt /usr/share/doc/dovecot/wiki/Variables.txt | [
"yum install dovecot",
"chown root:root /etc/pki/dovecot/private/server.example.com.key chmod 600 /etc/pki/dovecot/private/server.example.com.key",
"openssl dhparam -out /etc/dovecot/dh.pem 4096",
"ssl_cert = < /etc/pki/dovecot/certs/server.example.com.crt ssl_key = < /etc/pki/dovecot/private/server.example.com.key",
"ssl_ca = < /etc/pki/dovecot/certs/ca.crt",
"ssl_dh = < /etc/dovecot/dh.pem",
"useradd --home-dir /var/mail/ --shell /usr/sbin/nologin vmail",
"semanage fcontext -a -t mail_spool_t \" <path> (/.*)?\" restorecon -Rv <path>",
"chown vmail:vmail /var/mail/ chmod 700 /var/mail/",
"mail_location = sdbox : /var/mail/%n/",
"first_valid_uid = 1000",
"userdb { driver = passwd override_fields = uid= vmail gid= vmail home= /var/mail/%n/ }",
"protocols = imap lmtp",
"firewall-cmd --permanent --add-service=imaps --add-service=imap --add-service=pop3s --add-service=pop3 firewall-cmd --reload",
"systemctl enable --now dovecot",
"doveconf -n",
"yum install dovecot",
"chown root:root /etc/pki/dovecot/private/server.example.com.key chmod 600 /etc/pki/dovecot/private/server.example.com.key",
"openssl dhparam -out /etc/dovecot/dh.pem 4096",
"ssl_cert = < /etc/pki/dovecot/certs/server.example.com.crt ssl_key = < /etc/pki/dovecot/private/server.example.com.key",
"ssl_ca = < /etc/pki/dovecot/certs/ca.crt",
"ssl_dh = < /etc/dovecot/dh.pem",
"useradd --home-dir /var/mail/ --shell /usr/sbin/nologin vmail",
"semanage fcontext -a -t mail_spool_t \" <path> (/.*)?\" restorecon -Rv <path>",
"chown vmail:vmail /var/mail/ chmod 700 /var/mail/",
"mail_location = sdbox : /var/mail/%n/",
"#!include auth-system.conf.ext",
"!include auth-ldap.conf.ext",
"userdb { driver = ldap args = /etc/dovecot/dovecot-ldap.conf.ext override_fields = uid= vmail gid= vmail home= /var/mail/%n/ }",
"dn = cn= dovecot_LDAP ,dc=example,dc=com dnpass = password pass_filter = (&(objectClass=posixAccount)(uid=%n))",
"auth_bind_userdn = cn=%n,ou=People,dc=example,dc=com",
"auth_bind = yes",
"uris = ldaps://LDAP-srv.example.com",
"tls_require_cert = hard",
"base = ou=People,dc=example,dc=com",
"scope = onelevel",
"chown root:root /etc/dovecot/dovecot-ldap.conf.ext chmod 600 /etc/dovecot/dovecot-ldap.conf.ext",
"protocols = imap lmtp",
"firewall-cmd --permanent --add-service=imaps --add-service=imap --add-service=pop3s --add-service=pop3 firewall-cmd --reload",
"systemctl enable --now dovecot",
"doveconf -n",
"yum install dovecot",
"chown root:root /etc/pki/dovecot/private/server.example.com.key chmod 600 /etc/pki/dovecot/private/server.example.com.key",
"openssl dhparam -out /etc/dovecot/dh.pem 4096",
"ssl_cert = < /etc/pki/dovecot/certs/server.example.com.crt ssl_key = < /etc/pki/dovecot/private/server.example.com.key",
"ssl_ca = < /etc/pki/dovecot/certs/ca.crt",
"ssl_dh = < /etc/dovecot/dh.pem",
"useradd --home-dir /var/mail/ --shell /usr/sbin/nologin vmail",
"semanage fcontext -a -t mail_spool_t \" <path> (/.*)?\" restorecon -Rv <path>",
"chown vmail:vmail /var/mail/ chmod 700 /var/mail/",
"mail_location = sdbox : /var/mail/%n/",
"yum install dovecot-mysql",
"#!include auth-system.conf.ext",
"!include auth-sql.conf.ext",
"userdb { driver = sql args = /etc/dovecot/dovecot-sql.conf.ext override_fields = uid= vmail gid= vmail home= /var/mail/%n/ }",
"driver = mysql connect = host= mariadb_srv.example.com dbname= dovecotDB user= dovecot password= dovecotPW ssl_ca= /etc/pki/tls/certs/ca.crt default_pass_scheme = SHA512-CRYPT user_query = SELECT username FROM users WHERE username ='%u'; password_query = SELECT username AS user, password FROM users WHERE username ='%u'; iterate_query = SELECT username FROM users ;",
"chown root:root /etc/dovecot/dovecot-sql.conf.ext chmod 600 /etc/dovecot/dovecot-sql.conf.ext",
"protocols = imap lmtp",
"firewall-cmd --permanent --add-service=imaps --add-service=imap --add-service=pop3s --add-service=pop3 firewall-cmd --reload",
"systemctl enable --now dovecot",
"doveconf -n",
"mail_plugins = USDmail_plugins notify replication",
"service replicator { process_min_avail = 1 unix_listener replicator-doveadm { mode = 0600 user = vmail } }",
"service aggregator { fifo_listener replication-notify-fifo { user = vmail } unix_listener replication-notify { user = vmail } }",
"service doveadm { inet_listener { port = 12345 } }",
"doveadm_password = replication_password",
"plugin { mail_replica = tcp: server2.example.com : 12345 }",
"replication_max_conns = 20",
"chown root:root /etc/dovecot/conf.d/10-replication.conf chmod 600 /etc/dovecot/conf.d/10-replication.conf",
"setsebool -P nis_enabled on",
"firewall-cmd --permanent --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" 192.0.2.1/32 \" port protocol=\"tcp\" port=\" 12345 \" accept\" firewall-cmd --permanent --zone=public --add-rich-rule=\"rule family=\"ipv6\" source address=\" 2001:db8:2::1/128 \" port protocol=\"tcp\" port=\" 12345 \" accept\" firewall-cmd --reload",
"systemctl reload dovecot",
"doveadm replicator status Queued 'sync' requests 0 Queued 'high' requests 0 Queued 'low' requests 0 Queued 'failed' requests 0 Queued 'full resync' requests 30 Waiting 'failed' requests 0 Total number of known users 75",
"doveadm replicator status example_user username priority fast sync full sync success sync failed example_user none 02:05:28 04:19:07 02:05:28 -",
"namespace inbox { mailbox Drafts { special_use = \\Drafts auto = subscribe } mailbox Junk { special_use = \\Junk auto = subscribe } mailbox Trash { special_use = \\Trash auto = subscribe } mailbox Sent { special_use = \\Sent auto = subscribe } }",
"namespace inbox { mailbox \" Important Emails \" { auto = <value> } }",
"systemctl reload dovecot",
"doveconf -a | egrep \"^protocols\" protocols = imap pop3 lmtp",
"protocols = ... lmtp",
"service lmtp { unix_listener lmtp { mode = 0600 user = postfix group = postfix } }",
"service lmtp { inet_listener lmtp { port = 24 } }",
"firewall-cmd --permanent --zone=public --add-rich-rule=\"rule family=\"ipv4\" source address=\" 192.0.2.1/32 \" port protocol=\"tcp\" port=\" 24 \" accept\" firewall-cmd --permanent --zone=public --add-rich-rule=\"rule family=\"ipv6\" source address=\" 2001:db8:2::1/128 \" port protocol=\"tcp\" port=\" 24 \" accept\" firewall-cmd --reload",
"systemctl reload dovecot",
"ls -l /var/run/dovecot/lmtp s rw------- . 1 postfix postfix 0 Nov 22 17:17 /var/run/dovecot/lmtp",
"protocols = imap lmtp",
"systemctl reload dovecot",
"firewall-cmd --remove-service=pop3s --remove-service=pop3 firewall-cmd --reload",
"ss -tulp | grep dovecot tcp LISTEN 0 100 0.0.0.0:993 0.0.0.0:* users:((\"dovecot\",pid= 1405 ,fd= 44 )) tcp LISTEN 0 100 0.0.0.0:143 0.0.0.0:* users:((\"dovecot\",pid= 1405 ,fd= 42 )) tcp LISTEN 0 100 [::]:993 [::]:* users:((\"dovecot\",pid= 1405 ,fd= 45 )) tcp LISTEN 0 100 [::]:143 [::]:* users:((\"dovecot\",pid= 1405 ,fd= 43 ))",
"yum install dovecot-pigeonhole",
"protocols = USDprotocols sieve",
"firewall-cmd --permanent --add-service=managesieve firewall-cmd --reload",
"systemctl reload dovecot"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/configuring-and-maintaining-a-dovecot-imap-and-pop3-server_Deploying-different-types-of-servers |
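As a lighter-weight alternative to the mail-client checks in the verification steps of this chapter, you can exercise the IMAPS listener by hand with openssl; the host name follows the TLS sections above, and user01 and its password are placeholders for a real account. After the TLS session is established, type the IMAP commands that follow the openssl line:

openssl s_client -connect server.example.com:993 -quiet
a1 LOGIN user01 password
a2 LIST "" "*"
a3 LOGOUT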
Chapter 2. Configuring a GCP project | Chapter 2. Configuring a GCP project Before you can install OpenShift Container Platform, you must configure a Google Cloud Platform (GCP) project to host it. 2.1. Creating a GCP project To install OpenShift Container Platform, you must create a project in your Google Cloud Platform (GCP) account to host the cluster. Procedure Create a project to host your OpenShift Container Platform cluster. See Creating and Managing Projects in the GCP documentation. Important Your GCP project must use the Premium Network Service Tier if you are using installer-provisioned infrastructure. The Standard Network Service Tier is not supported for clusters installed using the installation program. The installation program configures internal load balancing for the api-int.<cluster_name>.<base_domain> URL; the Premium Tier is required for internal load balancing. 2.2. Enabling API services in GCP Your Google Cloud Platform (GCP) project requires access to several API services to complete OpenShift Container Platform installation. Prerequisites You created a project to host your cluster. Procedure Enable the following required API services in the project that hosts your cluster. You may also enable optional API services which are not required for installation. See Enabling services in the GCP documentation. Table 2.1. Required API services API service Console service name Compute Engine API compute.googleapis.com Cloud Resource Manager API cloudresourcemanager.googleapis.com Google DNS API dns.googleapis.com IAM Service Account Credentials API iamcredentials.googleapis.com Identity and Access Management (IAM) API iam.googleapis.com Service Usage API serviceusage.googleapis.com Table 2.2. Optional API services API service Console service name Google Cloud APIs cloudapis.googleapis.com Service Management API servicemanagement.googleapis.com Google Cloud Storage JSON API storage-api.googleapis.com Cloud Storage storage-component.googleapis.com 2.3. Configuring DNS for GCP To install OpenShift Container Platform, the Google Cloud Platform (GCP) account you use must have a dedicated public hosted zone in the same project that you host the OpenShift Container Platform cluster. This zone must be authoritative for the domain. The DNS service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through GCP or another source. Note If you purchase a new domain, it can take time for the relevant DNS changes to propagate. For more information about purchasing domains through Google, see Google Domains . Create a public hosted zone for your domain or subdomain in your GCP project. See Creating public zones in the GCP documentation. Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . Extract the new authoritative name servers from the hosted zone records. See Look up your Cloud DNS name servers in the GCP documentation. You typically have four name servers. Update the registrar records for the name servers that your domain uses. For example, if you registered your domain to Google Domains, see the following topic in the Google Domains Help: How to switch to custom name servers . If you migrated your root domain to Google Cloud DNS, migrate your DNS records. See Migrating to Cloud DNS in the GCP documentation. 
If you use a subdomain, follow your company's procedures to add its delegation records to the parent domain. This process might include a request to your company's IT department or the division that controls the root domain and DNS services for your company. 2.4. GCP account limits The OpenShift Container Platform cluster uses a number of Google Cloud Platform (GCP) components, but the default Quotas do not affect your ability to install a default OpenShift Container Platform cluster. A default cluster, which contains three compute and three control plane machines, uses the following resources. Note that some resources are required only during the bootstrap process and are removed after the cluster deploys. Table 2.3. GCP resources used in a default cluster Service Component Location Total resources required Resources removed after bootstrap Service account IAM Global 6 1 Firewall rules Compute Global 11 1 Forwarding rules Compute Global 2 0 In-use global IP addresses Compute Global 4 1 Health checks Compute Global 3 0 Images Compute Global 1 0 Networks Compute Global 2 0 Static IP addresses Compute Region 4 1 Routers Compute Global 1 0 Routes Compute Global 2 0 Subnetworks Compute Global 2 0 Target pools Compute Global 3 0 CPUs Compute Region 28 4 Persistent disk SSD (GB) Compute Region 896 128 Note If any of the quotas are insufficient during installation, the installation program displays an error that states both which quota was exceeded and the region. Be sure to consider your actual cluster size, planned cluster growth, and any usage from other clusters that are associated with your account. The CPU, static IP addresses, and persistent disk SSD (storage) quotas are the ones that are most likely to be insufficient. If you plan to deploy your cluster in one of the following regions, you will exceed the maximum storage quota and are likely to exceed the CPU quota limit: asia-east2 asia-northeast2 asia-south1 australia-southeast1 europe-north1 europe-west2 europe-west3 europe-west6 northamerica-northeast1 southamerica-east1 us-west2 You can increase resource quotas from the GCP console , but you might need to file a support ticket. Be sure to plan your cluster size early so that you can allow time to resolve the support ticket before you install your OpenShift Container Platform cluster. 2.5. Creating a service account in GCP OpenShift Container Platform requires a Google Cloud Platform (GCP) service account that provides authentication and authorization to access data in the Google APIs. If you do not have an existing IAM service account that contains the required roles in your project, you must create one. Prerequisites You created a project to host your cluster. Procedure Create a service account in the project that you use to host your OpenShift Container Platform cluster. See Creating a service account in the GCP documentation. Grant the service account the appropriate permissions. You can either grant the individual permissions that follow or assign the Owner role to it. See Granting roles to a service account for specific resources . Note While making the service account an owner of the project is the easiest way to gain the required permissions, it means that service account has complete control over the project. You must determine if the risk that comes from offering that power is acceptable. You can create the service account key in JSON format, or attach the service account to a GCP virtual machine. 
See Creating service account keys and Creating and enabling service accounts for instances in the GCP documentation. You must have a service account key or a virtual machine with an attached service account to create the cluster. Note If you use a virtual machine with an attached service account to create your cluster, you must set credentialsMode: Manual in the install-config.yaml file before installation. 2.5.1. Required GCP roles When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create a service account with the following permissions. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. If you deploy your cluster into an existing virtual private cloud (VPC), the service account does not require certain networking permissions, which are noted in the following lists: Required roles for the installation program Compute Admin IAM Security Admin Service Account Admin Service Account Key Admin Service Account User Storage Admin Required roles for creating network resources during installation DNS Administrator Required roles for using passthrough credentials mode Compute Load Balancer Admin IAM Role Viewer The roles are applied to the service accounts that the control plane and compute machines use: Table 2.4. GCP service account permissions Account Roles Control Plane roles/compute.instanceAdmin roles/compute.networkAdmin roles/compute.securityAdmin roles/storage.admin roles/iam.serviceAccountUser Compute roles/compute.viewer roles/storage.admin 2.5.2. Required GCP permissions for installer-provisioned infrastructure When you attach the Owner role to the service account that you create, you grant that service account all permissions, including those that are required to install OpenShift Container Platform. If the security policies for your organization require a more restrictive set of permissions, you can create custom roles with the necessary permissions. The following permissions are required for the installer-provisioned infrastructure for creating and deleting the OpenShift Container Platform cluster. Important If you configure the Cloud Credential Operator to operate in passthrough mode, you must use roles rather than granular permissions. For more information, see "Required roles for using passthrough credentials mode" in the "Required GCP roles" section. Example 2.1. Required permissions for creating network resources compute.addresses.create compute.addresses.createInternal compute.addresses.delete compute.addresses.get compute.addresses.list compute.addresses.use compute.addresses.useInternal compute.firewalls.create compute.firewalls.delete compute.firewalls.get compute.firewalls.list compute.forwardingRules.create compute.forwardingRules.get compute.forwardingRules.list compute.forwardingRules.setLabels compute.networks.create compute.networks.get compute.networks.list compute.networks.updatePolicy compute.routers.create compute.routers.get compute.routers.list compute.routers.update compute.routes.list compute.subnetworks.create compute.subnetworks.get compute.subnetworks.list compute.subnetworks.use compute.subnetworks.useExternalIp Example 2.2. 
Required permissions for creating load balancer resources compute.regionBackendServices.create compute.regionBackendServices.get compute.regionBackendServices.list compute.regionBackendServices.update compute.regionBackendServices.use compute.targetPools.addInstance compute.targetPools.create compute.targetPools.get compute.targetPools.list compute.targetPools.removeInstance compute.targetPools.use Example 2.3. Required permissions for creating DNS resources dns.changes.create dns.changes.get dns.managedZones.create dns.managedZones.get dns.managedZones.list dns.networks.bindPrivateDNSZone dns.resourceRecordSets.create dns.resourceRecordSets.list Example 2.4. Required permissions for creating Service Account resources iam.serviceAccountKeys.create iam.serviceAccountKeys.delete iam.serviceAccountKeys.get iam.serviceAccountKeys.list iam.serviceAccounts.actAs iam.serviceAccounts.create iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.get resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.5. Required permissions for creating compute resources compute.disks.create compute.disks.get compute.disks.list compute.disks.setLabels compute.instanceGroups.create compute.instanceGroups.delete compute.instanceGroups.get compute.instanceGroups.list compute.instanceGroups.update compute.instanceGroups.use compute.instances.create compute.instances.delete compute.instances.get compute.instances.list compute.instances.setLabels compute.instances.setMetadata compute.instances.setServiceAccount compute.instances.setTags compute.instances.use compute.machineTypes.get compute.machineTypes.list Example 2.6. Required for creating storage resources storage.buckets.create storage.buckets.delete storage.buckets.get storage.buckets.list storage.objects.create storage.objects.delete storage.objects.get storage.objects.list Example 2.7. Required permissions for creating health check resources compute.healthChecks.create compute.healthChecks.get compute.healthChecks.list compute.healthChecks.useReadOnly compute.httpHealthChecks.create compute.httpHealthChecks.get compute.httpHealthChecks.list compute.httpHealthChecks.useReadOnly Example 2.8. Required permissions to get GCP zone and region related information compute.globalOperations.get compute.regionOperations.get compute.regions.list compute.zoneOperations.get compute.zones.get compute.zones.list Example 2.9. Required permissions for checking services and quotas monitoring.timeSeries.list serviceusage.quotas.get serviceusage.services.list Example 2.10. Required IAM permissions for installation iam.roles.get Example 2.11. Optional Images permissions for installation compute.images.list Example 2.12. Optional permission for running gather bootstrap compute.instances.getSerialPortOutput Example 2.13. Required permissions for deleting network resources compute.addresses.delete compute.addresses.deleteInternal compute.addresses.list compute.firewalls.delete compute.firewalls.list compute.forwardingRules.delete compute.forwardingRules.list compute.networks.delete compute.networks.list compute.networks.updatePolicy compute.routers.delete compute.routers.list compute.routes.list compute.subnetworks.delete compute.subnetworks.list Example 2.14. Required permissions for deleting load balancer resources compute.regionBackendServices.delete compute.regionBackendServices.list compute.targetPools.delete compute.targetPools.list Example 2.15. 
Required permissions for deleting DNS resources dns.changes.create dns.managedZones.delete dns.managedZones.get dns.managedZones.list dns.resourceRecordSets.delete dns.resourceRecordSets.list Example 2.16. Required permissions for deleting Service Account resources iam.serviceAccounts.delete iam.serviceAccounts.get iam.serviceAccounts.list resourcemanager.projects.getIamPolicy resourcemanager.projects.setIamPolicy Example 2.17. Required permissions for deleting compute resources compute.disks.delete compute.disks.list compute.instanceGroups.delete compute.instanceGroups.list compute.instances.delete compute.instances.list compute.instances.stop compute.machineTypes.list Example 2.18. Required for deleting storage resources storage.buckets.delete storage.buckets.getIamPolicy storage.buckets.list storage.objects.delete storage.objects.list Example 2.19. Required permissions for deleting health check resources compute.healthChecks.delete compute.healthChecks.list compute.httpHealthChecks.delete compute.httpHealthChecks.list Example 2.20. Required Images permissions for deletion compute.images.list 2.5.3. Required GCP permissions for shared VPC installations When you are installing a cluster to a shared VPC , you must configure the service account for both the host project and the service project. If you are not installing to a shared VPC, you can skip this section. You must apply the minimum roles required for a standard installation as listed above, to the service project. Important You can use granular permissions for a Cloud Credential Operator that operates in either manual or mint credentials mode. You cannot use granular permissions in passthrough credentials mode. Ensure that the host project applies one of the following configurations to the service account: Example 2.21. Required permissions for creating firewalls in the host project projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkAdmin roles/compute.securityAdmin Example 2.22. Required minimal permissions projects/<host-project>/roles/dns.networks.bindPrivateDNSZone roles/compute.networkUser 2.6. Supported GCP regions You can deploy an OpenShift Container Platform cluster to the following Google Cloud Platform (GCP) regions: asia-east1 (Changhua County, Taiwan) asia-east2 (Hong Kong) asia-northeast1 (Tokyo, Japan) asia-northeast2 (Osaka, Japan) asia-northeast3 (Seoul, South Korea) asia-south1 (Mumbai, India) asia-south2 (Delhi, India) asia-southeast1 (Jurong West, Singapore) asia-southeast2 (Jakarta, Indonesia) australia-southeast1 (Sydney, Australia) australia-southeast2 (Melbourne, Australia) europe-central2 (Warsaw, Poland) europe-north1 (Hamina, Finland) europe-southwest1 (Madrid, Spain) europe-west1 (St. 
Ghislain, Belgium) europe-west2 (London, England, UK) europe-west3 (Frankfurt, Germany) europe-west4 (Eemshaven, Netherlands) europe-west6 (Zurich, Switzerland) europe-west8 (Milan, Italy) europe-west9 (Paris, France) europe-west12 (Turin, Italy) me-central1 (Doha, Qatar, Middle East) me-west1 (Tel Aviv, Israel) northamerica-northeast1 (Montreal, Quebec, Canada) northamerica-northeast2 (Toronto, Ontario, Canada) southamerica-east1 (Sao Paulo, Brazil) southamerica-west1 (Santiago, Chile) us-central1 (Council Bluffs, Iowa, USA) us-east1 (Moncks Corner, South Carolina, USA) us-east4 (Ashburn, Northern Virginia, USA) us-east5 (Columbus, Ohio) us-south1 (Dallas, Texas) us-west1 (The Dalles, Oregon, USA) us-west2 (Los Angeles, California, USA) us-west3 (Salt Lake City, Utah, USA) us-west4 (Las Vegas, Nevada, USA) Note To determine which machine type instances are available by region and zone, see the Google documentation . 2.7. steps Install an OpenShift Container Platform cluster on GCP. You can install a customized cluster or quickly install a cluster with default options. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_gcp/installing-gcp-account |
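The console-based steps in this chapter can also be scripted with the gcloud CLI. The following sketch assumes a project named openshift-gcp and a service account named openshift-installer, and it grants only one of the listed roles as an illustration; repeat the binding for each role that your installation requires:

gcloud services enable compute.googleapis.com cloudresourcemanager.googleapis.com dns.googleapis.com iamcredentials.googleapis.com iam.googleapis.com serviceusage.googleapis.com --project openshift-gcp
gcloud iam service-accounts create openshift-installer --display-name "OpenShift installer" --project openshift-gcp
gcloud projects add-iam-policy-binding openshift-gcp --member "serviceAccount:[email protected]" --role roles/compute.admin
gcloud iam service-accounts keys create installer-key.json --iam-account [email protected]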
Chapter 13. File-based configuration | Chapter 13. File-based configuration AMQ Python can read the configuration options used to establish connections from a local file named connect.json . This enables you to configure connections in your application at the time of deployment. The library attempts to read the file when the application calls the container connect method without supplying any connection options. 13.1. File locations If set, AMQ Python uses the value of the MESSAGING_CONNECT_FILE environment variable to locate the configuration file. If MESSAGING_CONNECT_FILE is not set, AMQ Python searches for a file named connect.json at the following locations and in the order shown. It stops at the first match it encounters. On Linux: USDPWD/connect.json , where USDPWD is the current working directory of the client process USDHOME/.config/messaging/connect.json , where USDHOME is the current user home directory /etc/messaging/connect.json On Windows: %cd%/connect.json , where %cd% is the current working directory of the client process If no connect.json file is found, the library uses default values for all options. 13.2. The file format The connect.json file contains JSON data, with additional support for JavaScript comments. All of the configuration attributes are optional or have default values, so a simple example need only provide a few details: Example: A simple connect.json file { "host": "example.com", "user": "alice", "password": "secret" } SASL and SSL/TLS options are nested under "sasl" and "tls" namespaces: Example: A connect.json file with SASL and SSL/TLS options { "host": "example.com", "user": "ortega", "password": "secret", "sasl": { "mechanisms": ["SCRAM-SHA-1", "SCRAM-SHA-256"] }, "tls": { "cert": "/home/ortega/cert.pem", "key": "/home/ortega/key.pem" } } 13.3. Configuration options The option keys containing a dot (.) represent attributes nested inside a namespace. Table 13.1. Configuration options in connect.json Key Value type Default value Description scheme string "amqps" "amqp" for cleartext or "amqps" for SSL/TLS host string "localhost" The hostname or IP address of the remote host port string or number "amqps" A port number or port literal user string None The user name for authentication password string None The password for authentication sasl.mechanisms list or string None (system defaults) A JSON list of enabled SASL mechanisms. A bare string represents one mechanism. If none are specified, the client uses the default mechanisms provided by the system. sasl.allow_insecure boolean false Enable mechanisms that send cleartext passwords tls.cert string None The filename or database ID of the client certificate tls.key string None The filename or database ID of the private key for the client certificate tls.ca string None The filename, directory, or database ID of the CA certificate tls.verify boolean true Require a valid server certificate with a matching hostname | [
"{ \"host\": \"example.com\", \"user\": \"alice\", \"password\": \"secret\" }",
"{ \"host\": \"example.com\", \"user\": \"ortega\", \"password\": \"secret\", \"sasl\": { \"mechanisms\": [\"SCRAM-SHA-1\", \"SCRAM-SHA-256\"] }, \"tls\": { \"cert\": \"/home/ortega/cert.pem\", \"key\": \"/home/ortega/key.pem\" } }"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_python_client/file_based_configuration |
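To illustrate more of the options from the configuration table in one place, a sketch of a connect.json file that overrides the scheme, port, and TLS trust settings might look like the following (the host, credentials, and file paths are placeholders):

{
    "scheme": "amqps",
    "host": "broker.example.com",
    "port": 5671,
    "user": "alice",
    "password": "secret",
    "sasl": {
        "mechanisms": "PLAIN"
    },
    "tls": {
        "ca": "/etc/pki/tls/certs/ca.pem",
        "verify": true
    }
}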
A.9. Enabling Intel VT-x and AMD-V Virtualization Hardware Extensions in BIOS | A.9. Enabling Intel VT-x and AMD-V Virtualization Hardware Extensions in BIOS Note To expand your expertise, you might also be interested in the Red Hat Virtualization (RH318) training course. This section describes how to identify hardware virtualization extensions and enable them in your BIOS if they are disabled. The Intel VT-x extensions can be disabled in the BIOS. Certain laptop vendors have disabled the Intel VT-x extensions by default in their CPUs. The virtualization extensions cannot be disabled in the BIOS for AMD-V. See the following section for instructions on enabling disabled virtualization extensions. Verify the virtualization extensions are enabled in BIOS. The BIOS settings for Intel VT or AMD-V are usually in the Chipset or Processor menus. The menu names may vary from this guide, the virtualization extension settings may be found in Security Settings or other non standard menu names. Procedure A.3. Enabling virtualization extensions in BIOS Reboot the computer and open the system's BIOS menu. This can usually be done by pressing the delete key, the F1 key or Alt and F4 keys depending on the system. Enabling the virtualization extensions in BIOS Note Many of the steps below may vary depending on your motherboard, processor type, chipset and OEM. See your system's accompanying documentation for the correct information on configuring your system. Open the Processor submenu The processor settings menu may be hidden in the Chipset , Advanced CPU Configuration or Northbridge . Enable Intel Virtualization Technology (also known as Intel VT-x). AMD-V extensions cannot be disabled in the BIOS and should already be enabled. The virtualization extensions may be labeled Virtualization Extensions , Vanderpool or various other names depending on the OEM and system BIOS. Enable Intel VT-d or AMD IOMMU, if the options are available. Intel VT-d and AMD IOMMU are used for PCI device assignment. Select Save & Exit . Reboot the machine. When the machine has booted, run grep -E "vmx|svm" /proc/cpuinfo . Specifying --color is optional, but useful if you want the search term highlighted. If the command outputs, the virtualization extensions are now enabled. If there is no output your system may not have the virtualization extensions or the correct BIOS setting enabled. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-enabling_intel_vt_x_and_amd_v_virtualization_hardware_extensions_in_bios |
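For example, the final verification step described above can be run with highlighting enabled, a minimal sketch being:

grep -E --color 'vmx|svm' /proc/cpuinfo

Non-empty output containing the vmx (Intel) or svm (AMD) flag indicates that the virtualization extensions are enabled; no output means they are disabled or not present on the processor.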
Chapter 1. Overview | Chapter 1. Overview This book covers the updates from the following CDN channels: Atomic Host - delivers the cumulative, image-based updates for the Atomic Host - the OSTree, as well as updates to the individual RPMs that contain tooling used to build and manage ostrees, and to the OSTree components which enable the use of container applications, for example cockpit-ostree and openscap . However, such RPMs cannot be downloaded and used on Red Hat Enterprise Linux. Extras-7 - delivers updates on container-related RPMs, most of which are also available as part of the OSTree for RHEL Atomic Host. The packages marked with an asterisk (*) are only available for Red Hat Enterprise Linux, and are not part of the Atomic Host OSTree. This channel also delivers updates on the official Container Images based on Red Hat Enterprise Linux. For detailed information on the Red Hat Enterprise Linux Atomic Host life cycle, see https://access.redhat.com/support/policy/updates/extras/ . All official Red Hat container images are available from Red Hat Registry . To update your RHEL Atomic Host to the latest OSTree, run the atomic host upgrade command. 1.1. Red Hat Enterprise Linux Atomic Host Red Hat Enterprise Linux Atomic Host is a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers. It is pre-installed with the following tools to support Linux containers: docker - an open source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere atomic - defines the entrypoint for Atomic hosts etcd - provides a highly-available key value store for shared configuration flannel - contains an etcd-driven address management agent, which manages IP addresses of overlay networks between systems running containers that need to communicate with one another Red Hat Enterprise Linux Atomic Host makes use of the following technologies: OSTree and rpm-OSTree - These projects provide atomic upgrades and rollback capability systemd - a new init system for Linux that enables faster boot times and easier orchestration SELinux - enabled by default to provide complete multi-tenant security Also, Cockpit is available on Red Hat Enterprise Linux as a separate Extras package and on Red Hat Enterprise Linux Atomic Host, as the cockpit-ws Container Image. Cockpit is a server administration interface that makes it easy to administer Red Hat Enterprise Linux servers through a web browser. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/overview |
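As a short sketch of the update workflow mentioned above (the status and reboot steps are included only as common companion commands, not requirements stated in this overview):

atomic host status
atomic host upgrade
systemctl reboot

The status command shows the currently deployed OSTree, the upgrade command pulls and deploys the latest one, and the reboot switches the host onto the new deployment.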
probe::netdev.close | probe::netdev.close Name probe::netdev.close - Called when the device is closed Synopsis netdev.close Values dev_name The device that is going to be closed | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netdev-close |
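A minimal SystemTap script sketch that attaches to this probe point and prints the documented dev_name value might look like the following (the script name is arbitrary, and the usual SystemTap prerequisites such as kernel debuginfo apply):

# netdev_close.stp - report network devices as they are closed
probe netdev.close {
    printf("closing device %s\n", dev_name)
}

Run it with: stap netdev_close.stp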
Chapter 2. Installing and configuring Pipelines as Code | Chapter 2. Installing and configuring Pipelines as Code You can install Pipelines as Code as a part of Red Hat OpenShift Pipelines installation. 2.1. Installing Pipelines as Code on an OpenShift Container Platform Pipelines as Code is installed in the openshift-pipelines namespace when you install the Red Hat OpenShift Pipelines Operator. For more details, see Installing OpenShift Pipelines in the Additional resources section. To disable the default installation of Pipelines as Code with the Operator, set the value of the enable parameter to false in the TektonConfig custom resource. apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: false settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: "false" bitbucket-cloud-check-source-ip: "true" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: "true" secret-auto-create: "true" # ... Optionally, you can run the following command: USD oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": false}}}}}' To enable the default installation of Pipelines as Code with the Red Hat OpenShift Pipelines Operator, set the value of the enable parameter to true in the TektonConfig custom resource: apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: true settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: "false" bitbucket-cloud-check-source-ip: "true" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: "true" secret-auto-create: "true" # ... Optionally, you can run the following command: USD oc patch tektonconfig config --type="merge" -p '{"spec": {"platforms": {"openshift":{"pipelinesAsCode": {"enable": true}}}}}' 2.2. Installing Pipelines as Code CLI Cluster administrators can use the tkn pac and opc CLI tools on local machines or as containers for testing. The tkn pac and opc CLI tools are installed automatically when you install the tkn CLI for Red Hat OpenShift Pipelines. You can install the tkn pac and opc version 1.18.0 binaries for the supported platforms: Linux (x86_64, amd64) Linux on IBM zSystems and IBM(R) LinuxONE (s390x) Linux on IBM Power (ppc64le) Linux on ARM (aarch64, arm64) macOS Windows 2.3. Customizing Pipelines as Code configuration To customize Pipelines as Code, cluster administrators can configure the following parameters in the TektonConfig custom resource, in the platforms.openshift.pipelinesAsCode.settings spec: Table 2.1. Customizing Pipelines as Code configuration Parameter Description Default application-name The name of the application. For example, the name displayed in the GitHub Checks labels. "Pipelines as Code CI" secret-auto-create Indicates whether or not a secret should be automatically created using the token generated in the GitHub application. This secret can then be used with private repositories. enabled remote-tasks When enabled, allows remote tasks from pipeline run annotations. enabled hub-url The base URL for the Tekton Hub API . https://hub.tekton.dev/ hub-catalog-name The Tekton Hub catalog name. tekton tekton-dashboard-url The URL of the Tekton Hub dashboard. Pipelines as Code uses this URL to generate a PipelineRun URL on the Tekton Hub dashboard. 
NA bitbucket-cloud-check-source-ip Indicates whether to secure the service requests by querying IP ranges for a public Bitbucket. Changing the parameter's default value might result in a security issue. enabled bitbucket-cloud-additional-source-ip Indicates whether to provide an additional set of IP ranges or networks, which are separated by commas. NA max-keep-run-upper-limit A maximum limit for the max-keep-run value for a pipeline run. NA default-max-keep-runs A default limit for the max-keep-run value for a pipeline run. If defined, the value is applied to all pipeline runs that do not have a max-keep-run annotation. NA auto-configure-new-github-repo Configures new GitHub repositories automatically. Pipelines as Code sets up a namespace and creates a custom resource for your repository. This parameter is only supported with GitHub applications. disabled auto-configure-repo-namespace-template Configures a template to automatically generate the namespace for your new repository, if auto-configure-new-github-repo is enabled. {repo_name}-pipelines error-log-snippet Enables or disables the view of a log snippet for the failed tasks, with an error in a pipeline. You can disable this parameter in the case of data leakage from your pipeline. true error-detection-from-container-logs Enables or disables the inspection of container logs to detect error messages and expose them as annotations on the pull request. This setting applies only if you are using the GitHub app. true error-detection-max-number-of-lines The maximum number of lines inspected in the container logs to search for error messages. Set to -1 to inspect an unlimited number of lines. 50 secret-github-app-token-scoped If set to true , the GitHub access token that Pipelines as Code generates using the GitHub app is scoped only to the repository from which Pipelines as Code fetches the pipeline definition. If set to false , you can use both the TektonConfig custom resource and the Repository custom resource to scope the token to additional repositories. true secret-github-app-scope-extra-repos Additional repositories for scoping the generated GitHub access token. 2.4. Configuring additional Pipelines as Code controllers to support additional GitHub apps By default, you can configure Pipelines as Code to interact with one GitHub app. In some cases you might need to use more than one GitHub app, for example, if you need to use different GitHub accounts or different GitHub instances such as GitHub Enterprise or GitHub SaaS. If you want to use more than one GitHub app, you must configure an additional Pipelines as Code controller for every additional GitHub app. Procedure In the TektonConfig custom resource, add the additionalPACControllers section to the platforms.openshift.pipelinesAsCode spec, as in the following example: Example additionalPACControllers section apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: additionalPACControllers: pac_controller_2: 1 enable: true 2 secretName: pac_secret_2 3 settings: # 4 # ... 1 The name of the controller. This name must be unique and not exceed 25 characters in length. 2 This parameter is optional. Set this parameter to true to enable the additional controller or to false to disable the additional controller. The default value is true . 3 Set this parameter to the name of a secret that you must create for the GitHub app. 4 This section is optional.
In this section, you can set any Pipelines as Code settings for this controller if the settings must be different from the main Pipelines as Code controller. Optional: If you want to use more than two GitHub apps, create additional sections under the pipelinesAsCode.additionalPACControllers spec to configure a Pipelines as Code controller for every GitHub instance. Use a unique name for every controller. Additional resources Customizing Pipelines as Code configuration Configuring a GitHub App manually and creating a secret for Pipelines as Code 2.5. Additional resources Installing OpenShift Pipelines Installing tkn | [
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: false settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: \"false\" bitbucket-cloud-check-source-ip: \"true\" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: \"true\" secret-auto-create: \"true\"",
"oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": false}}}}}'",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: enable: true settings: application-name: Pipelines as Code CI auto-configure-new-github-repo: \"false\" bitbucket-cloud-check-source-ip: \"true\" hub-catalog-name: tekton hub-url: https://api.hub.tekton.dev/v1 remote-tasks: \"true\" secret-auto-create: \"true\"",
"oc patch tektonconfig config --type=\"merge\" -p '{\"spec\": {\"platforms\": {\"openshift\":{\"pipelinesAsCode\": {\"enable\": true}}}}}'",
"apiVersion: operator.tekton.dev/v1 kind: TektonConfig metadata: name: config spec: platforms: openshift: pipelinesAsCode: additionalPACControllers: pac_controller_2: 1 enable: true 2 secretName: pac_secret_2 3 settings: # 4"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/pipelines_as_code/install-config-pipelines-as-code |
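Drawing only on parameters already listed in Table 2.1 above, a sketch of overriding a few of the defaults in the settings block of the TektonConfig custom resource could look like this (the values shown are illustrative, not recommendations):

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  platforms:
    openshift:
      pipelinesAsCode:
        enable: true
        settings:
          application-name: "My Team CI"
          default-max-keep-runs: "3"
          error-detection-max-number-of-lines: "100"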
5.3. Viewing the Current Status and Settings of firewalld | 5.3. Viewing the Current Status and Settings of firewalld 5.3.1. Viewing the Current Status of firewalld The firewall service, firewalld , is installed on the system by default. Use the firewalld CLI interface to check that the service is running. To see the status of the service: For more information about the service status, use the systemctl status sub-command: Furthermore, it is important to know how firewalld is set up and which rules are in force before you try to edit the settings. To display the firewall settings, see Section 5.3.2, "Viewing Current firewalld Settings" 5.3.2. Viewing Current firewalld Settings 5.3.2.1. Viewing Allowed Services using GUI To view the list of services using the graphical firewall-config tool, press the Super key to enter the Activities Overview, type firewall , and press Enter . The firewall-config tool appears. You can now view the list of services under the Services tab. Alternatively, to start the graphical firewall configuration tool using the command-line, enter the following command: The Firewall Configuration window opens. Note that this command can be run as a normal user, but you are prompted for an administrator password occasionally. Figure 5.2. The Services tab in firewall-config 5.3.2.2. Viewing firewalld Settings using CLI With the CLI client, it is possible to get different views of the current firewall settings. The --list-all option shows a complete overview of the firewalld settings. firewalld uses zones to manage the traffic. If a zone is not specified by the --zone option, the command is effective in the default zone assigned to the active network interface and connection. To list all the relevant information for the default zone: Note To specify the zone for which to display the settings, add the --zone= zone-name argument to the firewall-cmd --list-all command, for example: To see the settings for particular information, such as services or ports, use a specific option. See the firewalld manual pages or get a list of the options using the command help: For example, to see which services are allowed in the current zone: Listing the settings for a certain subpart using the CLI tool can sometimes be difficult to interpret. For example, you allow the SSH service and firewalld opens the necessary port (22) for the service. Later, if you list the allowed services, the list shows the SSH service, but if you list open ports, it does not show any. Therefore, it is recommended to use the --list-all option to make sure you receive a complete information. | [
"~]# firewall-cmd --state",
"~]# systemctl status firewalld firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor pr Active: active (running) since Mon 2017-12-18 16:05:15 CET; 50min ago Docs: man:firewalld(1) Main PID: 705 (firewalld) Tasks: 2 (limit: 4915) CGroup: /system.slice/firewalld.service └─705 /usr/bin/python3 -Es /usr/sbin/firewalld --nofork --nopid",
"~]USD firewall-config",
"~]# firewall-cmd --list-all public target: default icmp-block-inversion: no interfaces: sources: services: ssh dhcpv6-client ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules:",
"~]# firewall-cmd --list-all --zone=home home target: default icmp-block-inversion: no interfaces: sources: services: ssh mdns samba-client dhcpv6-client ... [output truncated]",
"~]# firewall-cmd --help Usage: firewall-cmd [OPTIONS...] General Options -h, --help Prints a short help text and exists -V, --version Print the version string of firewalld -q, --quiet Do not print status messages Status Options --state Return and print firewalld state --reload Reload firewall and keep state information ... [output truncated]",
"~]# firewall-cmd --list-services ssh dhcpv6-client"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Viewing_Current_Status_and_Settings_of_firewalld |
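The ssh example at the end of this section can be reproduced with the following sketch: the service list shows ssh, while the port list stays empty because port 22 is opened implicitly by the service definition (the output shown is illustrative):

~]# firewall-cmd --list-services
ssh dhcpv6-client
~]# firewall-cmd --list-ports

Because of this, the --list-all option remains the most reliable way to see the complete picture.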
Chapter 7. Block I/O | Chapter 7. Block I/O This chapter covers optimizing I/O settings in virtualized environments. 7.1. Block I/O Tuning The virsh blkiotune command allows administrators to set or display a guest virtual machine's block I/O parameters manually in the <blkio> element in the guest XML configuration. To display current <blkio> parameters for a virtual machine: To set a virtual machine's <blkio> parameters, use the virsh blkiotune command and replace option values according to your environment: Parameters include: weight The I/O weight, within the range 100 to 1000. Increasing the I/O weight of a device increases its priority for I/O bandwidth, and therefore provides it with more host resources. Similarly, reducing a device's weight makes it consume less host resources. device-weights A single string listing one or more device/weight pairs, in the format of /path/to/device ,weight, /path/to/device ,weight . Each weight must be within the range 100-1000, or the value 0 to remove that device from per-device listings. Only the devices listed in the string are modified; any existing per-device weights for other devices remain unchanged. config Add the --config option for changes to take effect at boot. live Add the --live option to apply the changes to the running virtual machine. Note The --live option requires the hypervisor to support this action. Not all hypervisors allow live changes of the maximum memory limit. current Add the --current option to apply the changes to the current virtual machine. For example, the following changes the weight of the /dev/sda device in the liftbrul VM to 500. Note Use the virsh help blkiotune command for more information on using the virsh blkiotune command. | [
"virsh blkiotune virtual_machine",
"virsh blkiotune virtual_machine [--weight number ] [--device-weights string ] [--config] [--live] [--current]",
"virsh blkiotune liftbrul --device-weights /dev/sda, 500"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/chap-virtualization_tuning_optimization_guide-blockio |
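Building on the examples above, the documented options can be combined in a single invocation; the following sketch sets a global weight and a per-device weight, applies the change to the running guest, and persists it across reboots (the values are illustrative):

virsh blkiotune liftbrul --weight 800 --device-weights /dev/sda,500 --live --config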
18.6. Configuring RAID Sets | 18.6. Configuring RAID Sets Most RAID sets are configured during creation, typically through the firmware menu or from the installer. In some cases, you may need to create or modify RAID sets after installing the system, preferably without having to reboot the machine and enter the firmware menu to do so. Some hardware RAID controllers allow you to configure RAID sets on-the-fly or even define completely new sets after adding extra disks. This requires the use of driver-specific utilities, as there is no standard API for this. For more information, see your hardware RAID controller's driver documentation for information on this. mdadm The mdadm command-line tool is used to manage software RAID in Linux, i.e. mdraid . For information on the different mdadm modes and options, see man mdadm . The man page also contains useful examples for common operations like creating, monitoring, and assembling software RAID arrays. dmraid As the name suggests, dmraid is used to manage device-mapper RAID sets. The dmraid tool finds ATARAID devices using multiple metadata format handlers, each supporting various formats. For a complete list of supported formats, run dmraid -l . As mentioned earlier in Section 18.3, "Linux RAID Subsystems" , the dmraid tool cannot configure RAID sets after creation. For more information about using dmraid , see man dmraid . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/raidset-config |
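As a brief, hedged illustration of the mdadm workflow referred to above (the device names are placeholders, and the create command destroys any existing data on the listed partitions):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
cat /proc/mdstat
mdadm --detail /dev/md0

The first command assembles a two-disk RAID1 set; the remaining commands monitor the initial synchronization and report the health of the array.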
Chapter 59. Replace Field Action | Chapter 59. Replace Field Action Replace field with a different key in the message in transit. The required parameter 'renames' is a comma-separated list of colon-delimited renaming pairs like for example 'foo:bar,abc:xyz' and it represents the field rename mappings. The optional parameter 'enabled' represents the fields to include. If specified, only the named fields will be included in the resulting message. The optional parameter 'disabled' represents the fields to exclude. If specified, the listed fields will be excluded from the resulting message. This takes precedence over the 'enabled' parameter. The default value of 'enabled' parameter is 'all', so all the fields of the payload will be included. The default value of 'disabled' parameter is 'none', so no fields of the payload will be excluded. 59.1. Configuration Options The following table summarizes the configuration options available for the replace-field-action Kamelet: Property Name Description Type Default Example renames * Renames Comma separated list of field with new value to be renamed string "foo:bar,c1:c2" disabled Disabled Comma separated list of fields to be disabled string "none" enabled Enabled Comma separated list of fields to be enabled string "all" Note Fields marked with an asterisk (*) are mandatory. 59.2. Dependencies At runtime, the replace-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:core camel:jackson camel:kamelet 59.3. Usage This section describes how you can use the replace-field-action . 59.3.1. Knative Action You can use the replace-field-action Kamelet as an intermediate step in a Knative binding. replace-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: "foo:bar,c1:c2" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 59.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 59.3.1.2. Procedure for using the cluster CLI Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f replace-field-action-binding.yaml 59.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 59.3.2. Kafka Action You can use the replace-field-action Kamelet as an intermediate step in a Kafka binding. replace-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: "foo:bar,c1:c2" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 59.3.2.1. 
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 59.3.2.2. Procedure for using the cluster CLI Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f replace-field-action-binding.yaml 59.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 59.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/replace-field-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: \"foo:bar,c1:c2\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f replace-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step replace-field-action -p \"step-0.renames=foo:bar,c1:c2\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: \"foo:bar,c1:c2\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f replace-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step replace-field-action -p \"step-0.renames=foo:bar,c1:c2\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/kamelets_reference/replace-field-action |
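To illustrate the optional 'disabled' parameter alongside the required 'renames' parameter described above, the intermediate step of either binding could be written as follows (the field names are placeholders):

steps:
  - ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: replace-field-action
    properties:
      renames: "foo:bar,c1:c2"
      disabled: "c3"

With this configuration, the foo and c1 fields are renamed and the c3 field is excluded from the resulting message.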
Chapter 1. Getting started with Red Hat build of Quarkus | Chapter 1. Getting started with Red Hat build of Quarkus As an application developer, you can use Red Hat build of Quarkus to create microservices-based applications written in Java that run on OpenShift environments. Quarkus applications can run on top of a Java virtual machine (JVM) or be compiled to native executables. Native applications have a smaller memory footprint and a faster startup time than their JVM counterpart. You can create a Quarkus application in either of the following ways: Using Apache Maven and the Quarkus Maven plugin Using code.quarkus.redhat.com Using the Quarkus command-line interface (CLI) You can get started with Quarkus and create, test, package, and run a simple Quarkus project that exposes a hello HTTP endpoint. To demonstrate dependency injection, the hello HTTP endpoint uses a greeting bean. Note For a completed example of the getting started exercise, download the Quarkus quickstart archive or clone the Quarkus Quickstarts Git repository and go to the getting-started directory. 1.1. About Red Hat build of Quarkus Red Hat build of Quarkus is a Kubernetes-native Java stack optimized for containers and Red Hat OpenShift Container Platform. Quarkus is designed to work with popular Java standards, frameworks, and libraries such as Eclipse MicroProfile, Eclipse Vert.x, Apache Camel, Apache Kafka, Hibernate ORM with Jakarta Persistence, and RESTEasy Reactive (Jakarta REST). As a developer, you can choose the Java frameworks you want for your Java applications, which you can run in Java Virtual Machine (JVM) mode or compile and run in native mode. Quarkus provides a container-first approach to building Java applications. The container-first approach facilitates the containerization and efficient execution of microservices and functions. For this reason, Quarkus applications have a smaller memory footprint and faster startup times. Quarkus also optimizes the application development process with capabilities such as unified configuration, automatic provisioning of unconfigured services, live coding, and continuous testing that gives you instant feedback on your code changes. For information about the differences between the Quarkus community version and Red Hat build of Quarkus, see Differences between the Red Hat build of Quarkus community version and Red Hat build of Quarkus . 1.2. Preparing your environment Before you start using Quarkus, you must prepare your environment. Procedure Confirm the following installations are completed on your system: You have installed OpenJDK 11 or 17 and set the JAVA_HOME environment variable to specify the location of the Java SDK. To download Red Hat build of OpenJDK, log in to the Red Hat Customer Portal and go to Software Downloads . You have installed Apache Maven 3.8.6 or later. Apache Maven is available from the Apache Maven Project website. Optional : If you want to use the Quarkus command-line interface (CLI), ensure that it is installed. For instructions on how to install the Quarkus CLI, refer to the community-specific information at Quarkus CLI . Important The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. 1.2.1. About Red Hat build of Quarkus BOMs From Red Hat build of Quarkus 2.2, dependency versions of all core Quarkus extensions are managed by using the com.redhat.quarkus.platform:quarkus-bom file. 
The purpose of the Bill of Materials (BOM) file is to manage dependency versions of Quarkus artifacts in your project so that when you use a BOM in your project, you do not need to specify which dependency versions work together. Instead, you can import the Quarkus BOM file to the pom.xml configuration file, where the dependency versions are included in the <dependencyManagement> section. Therefore, you do not need to list the versions of individual Quarkus dependencies that are managed by the specified BOM in the pom.xml file. To view information about supported extension-specific BOMs that are available with Red Hat build of Quarkus, see Red Hat build of Quarkus Component details . You only need to import the member-specific BOM for the platform-member extensions that you use in your application. Therefore, you have fewer dependencies to manage as compared to a monolithic single BOM. Because every member-specific BOM is a fragment of the universal Quarkus BOM, you can import the member BOMs in any order without creating a conflict. 1.2.2. About Apache Maven and Red Hat build of Quarkus Apache Maven is a distributed build automation tool that is used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output by using an XML file, ensuring that the project gets built correctly and uniformly. Maven repositories A Maven repository stores Java libraries, plugins, and other build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be private and internal within a company to share common artifacts among development teams. Repositories are also available from third parties. You can use the Red Hat-hosted Maven repository with your Quarkus projects, or you can download the Red Hat build of Quarkus Maven repository. Maven plugins Maven plugins are defined parts of a POM file that run one or more tasks. Red Hat build of Quarkus applications use the following Maven plugins: Quarkus Maven plugin ( quarkus-maven-plugin ) : Enables Maven to create Quarkus projects, packages your applications into JAR files, and provides a dev mode. Maven Surefire plugin ( maven-surefire-plugin ) : When Quarkus enables the test profile, the Maven Surefire plugin is used during the test phase of the build lifecycle to run unit tests on your application. The plugin generates text and XML files that contain the test reports. Additional resources Configuring your Red Hat build of Quarkus applications 1.2.3. Configuring the Maven settings.xml file for the online repository To use the Red Hat-hosted Quarkus repository with your Quarkus Maven project, configure the settings.xml file for your user. Maven settings that are used with a repository manager or a repository on a shared server offer better control and manageability of projects. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. If you want to apply the configuration to a specific project only, use the -s option and specify the path to the project-specific settings.xml file. Procedure Open the Maven USDHOME/.m2/settings.xml file in a text editor or an integrated development environment (IDE). 
Note If no settings.xml file is present in the USDHOME/.m2/ directory, copy the settings.xml file from the USDMAVEN_HOME/conf/ directory into the USDHOME/.m2/ directory. Add the following lines to the <profiles> element of the settings.xml file: <!-- Configure the Red Hat build of Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> 1.2.4. Reconfiguring your Maven project to Red Hat build of Quarkus You can migrate a Quarkus community project to Red Hat build of Quarkus by changing the Maven configuration in your project POM file. Prerequisites You have a Quarkus project built with Maven that depends on Quarkus community artifacts in the pom.xml file. Procedure Change the following values in the <properties> section of the pom.xml file of your project: Change the value of the <quarkus.platform.group-id> property to com.redhat.quarkus.platform . Change the value of the <quarkus.platform.version> property to 3.2.12.SP1-redhat-00003 . pom.xml <project> ... <properties> ... <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.version>3.2.12.SP1-redhat-00003</quarkus.platform.version> ... </properties> ... </project> 1.3. Configuring Red Hat build of Quarkus developer tools By using Quarkus developer tools, you can complete tasks such as: Creating a Maven project for your application Adding and configuring an extension to use in your application Deploying your application on an OpenShift cluster 1.3.1. Configuring Red Hat build of Quarkus extension registry client The extension registry, registry.quarkus.redhat.com , hosts the Quarkus extensions that Red Hat provides. You can configure your Quarkus developer tools to access extensions in this registry by adding the registry to your registry client configuration file. The registry client configuration file is a YAML file that contains a list of registries. Note The default Quarkus registry is registry.quarkus.io ; typically, you do not need to configure it. However, if a user provides a custom registry list and registry.quarkus.io is not on it, then registry.quarkus.io is not enabled. Ensure that the registry you prefer appears first on the registry list. When Quarkus developer tools search for registries, they begin at the top of the list. Procedure Open the config.yaml file that contains your extension registry configuration. When you configure your extension registries for the first time, you might need to create a config.yaml file in the <user_home_directory_name> /.quarkus directory on your machine. Add the new registry to the config.yaml file. For example: config.yaml registries: - registry.quarkus.redhat.com - registry.quarkus.io 1.4. 
Creating the Getting Started project By creating a getting-started project, you can get up and running with a simple Quarkus application. You can create a getting-started project in one of the following ways: Using Apache Maven and the Quarkus Maven plugin Using code.quarkus.redhat.com to generate a Quarkus Maven project Using the Quarkus command-line interface (CLI) Prerequisites You have prepared your environment. For more information, see Preparing your environment . Procedure Depending on your requirements, select the method you want to use to create your getting-started project. 1.4.1. Creating the Getting Started project by using Apache Maven You can create a getting-started project by using Apache Maven and the Quarkus Maven plugin. With this getting-started project, you can get up and running with a simple Quarkus application. Prerequisites You have prepared your environment to use Maven. For more information, see Preparing your environment . You have configured your Quarkus Maven repository. To create a Quarkus application with Maven, use the Red Hat-hosted Quarkus repository. For more information, see Configuring the Maven settings.xml file for the online repository . Procedure To verify that Maven is using OpenJDK 11 or 17, that the Maven version is 3.8.6 or later, and that mvn is accessible from the PATH environment variable, enter the following command: If the preceding command does not return OpenJDK 11 or 17, add the path to OpenJDK 11 or 17 to the PATH environment variable and enter the preceding command again. To generate the project, enter one of the following commands: If you are using Linux or Apple macOS, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create \ -DprojectGroupId=org.acme \ -DprojectArtifactId=getting-started \ -DplatformGroupId=com.redhat.quarkus.platform \ -DplatformVersion=3.2.12.SP1-redhat-00003 \ -DclassName="org.acme.quickstart.GreetingResource" \ -Dpath="/hello" cd getting-started If you are using the Microsoft Windows command line, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create -DprojectGroupId=org.acme -DprojectArtifactId=getting-started -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=3.2.12.SP1-redhat-00003 -DclassName="org.acme.quickstart.GreetingResource" -Dpath="/hello" If you are using the Microsoft Windows PowerShell, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create "-DprojectGroupId=org.acme" "-DprojectArtifactId=getting-started" "-DplatformVersion=3.2.12.SP1-redhat-00003" "-DplatformGroupId=com.redhat.quarkus.platform" "-DclassName=org.acme.quickstart.GreetingResource" "-Dpath=/hello" These commands create the following elements in the ./getting-started directory: The Maven project directory structure An org.acme.quickstart.GreetingResource resource exposed on /hello Associated unit tests for testing your application in native mode and JVM mode A landing page that is accessible on http://localhost:8080 after you start the application Example Dockerfiles in the src/main/docker directory The application configuration file Note Because Mandrel does not support macOS, you can use Oracle GraalVM to build native executables on this operating system. You can also build native executables by using Oracle GraalVM directly on bare metal Linux or Windows distributions. 
For more information about this process, see the Oracle GraalVM README and release notes. For more information about supported configurations, see Red Hat build of Quarkus Supported configurations . After the directory structure is created, open the pom.xml file in a text editor and examine the contents of the file: pom.xml <project> ... <properties> ... <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version>3.2.12.SP1-redhat-00003</quarkus.platform.version> ... </properties> ... <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> ... <build> ... <plugins> ... <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> ... </plugins> ... </build> ... </project> The <dependencyManagement> section of the pom.xml file contains the Quarkus BOM. Therefore, you do not need to list the versions of individual Quarkus dependencies in the pom.xml file. In this configuration file, you can also find the quarkus-maven-plugin plugin that is responsible for packaging the application. Review the quarkus-resteasy-reactive dependency in the pom.xml file. This dependency enables you to develop REST applications: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-reactive</artifactId> </dependency> Review the src/main/java/org/acme/quickstart/GreetingResource.java file: import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path("/hello") public class GreetingResource { @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return "Hello from RESTEasy Reactive"; } } This file contains a simple REST endpoint that returns Hello from RESTEasy Reactive as a response to a request that you send to the /hello endpoint. Note With Quarkus, the Application class for Jakarta REST (formerly known as JAX-RS) is supported but not required. In addition, only one instance of the GreetingResource class is created and not one per request. You can configure this by using different *Scoped annotations, for example ApplicationScoped , RequestScoped , and so on. 1.4.2. Creating the Getting Started project by using code.quarkus.redhat.com As an application developer, you can use code.quarkus.redhat.com to generate a Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. In addition, code.quarkus.redhat.com automatically manages the configuration parameters that are required to compile your project into a native executable. You can generate a Quarkus Maven project, including the following activities: Specifying basic details about your application Choosing the extensions that you want to include in your project Generating a downloadable archive with your project files Using custom commands for compiling and starting your application Prerequisites You have a web browser. You have prepared your environment to use Apache Maven. 
For more information, see Preparing your environment . You have configured your Quarkus Maven repository. To create a Quarkus application with Maven, use the Red Hat-hosted Quarkus repository. For more information, see Configuring the Maven settings.xml file for the online repository . Optional : You have installed the Quarkus command-line interface (CLI), which is one of the methods you can use to start Quarkus in dev mode. For more information, see Installing the Quarkus CLI . Note The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Procedure On your web browser, navigate to https://code.quarkus.redhat.com . Specify basic details about your project: Enter a group name for your project. The name format follows the Java package naming convention; for example, org.acme . Enter a name for the Maven artifacts generated by your project, such as code-with-quarkus . Select the build tool you want to use to compile and start your application. The build tool that you choose determines the following setups: The directory structure of your generated project The format of configuration files that are used in your generated project The custom build script and command for compiling and starting your application that code.quarkus.redhat.com displays for you after you generate your project Note Red Hat provides support for using code.quarkus.redhat.com to create Quarkus Maven projects only. Specify additional details about your application project: To display the fields that contain further application details, select More options . Enter a version you want to use for artifacts generated by your project. The default value of this field is 1.0.0-SNAPSHOT . Using semantic versioning is recommended; however, you can choose to specify a different type of versioning. Select whether you want code.quarkus.redhat.com to add starter code to your project. When you add extensions that are marked with " STARTER-CODE " to your project, you can enable this option to automatically create example class and resource files for those extensions when you generate your project. However, this option does not affect your generated project if you do not add any extensions that provide an example code. Note The code.quarkus.redhat.com application automatically uses the latest release of Red Hat build of Quarkus. However, should you require, it is possible to manually change to an earlier BOM version in the pom.xml file after you generate your project, but this is not recommended. Select the extensions that you want to use. The extensions you select are included as dependencies of your Quarkus application. The Quarkus platform also ensures these extensions are compatible with future versions. Important Do not use the RESTEasy and the RESTEasy Reactive extensions in the same project. The quark icon ( ) to an extension indicates that the extension is part of the Red Hat build of Quarkus platform release. Red Hat recommends using extensions from the same platform because they are tested and verified together and are therefore easier to use and upgrade. You can enable the option to automatically generate starter code for extensions marked with " STARTER-CODE ". To confirm your choices, select Generate your application . 
The following items are displayed: A link to download the archive that contains your generated project A custom command that you can use to compile and start your application To save the archive with the generated project files to your machine, select Download the ZIP . Extract the contents of the archive. Go to the directory that contains your extracted project files: cd <directory_name> To compile and start your application in dev mode, use one of the following ways: Using Maven: Using the Quarkus CLI: 1.4.2.1. Support levels for Red Hat build of Quarkus extensions Red Hat provides different levels of support for extensions that are available on code.quarkus.redhat.com that you can add to your Quarkus project. Labels to the name of each extension indicate the support level. Note Red Hat does not support unlabeled extensions for use in production environments. Red Hat provides the following levels of support for Quarkus extensions: Table 1.1. Support levels provided by Red Hat for Red Hat build of Quarkus extensions Support level Description SUPPORTED Red Hat fully supports extensions for use in enterprise applications in production environments. TECH-PREVIEW Red Hat offers limited support to extensions in production environments under the Technology Preview Features Support Scope . DEV-SUPPORT Red Hat does not support extensions for use in production environments, but Red Hat developers support the core functionality that they provide for use in developing new applications. DEPRECATED Red Hat plans to replace extensions with more recent technology or implementation that provides the same functionality. STARTER-CODE You can automatically generate the example code for extensions. By clicking the arrow icon (⌄) beside each of the extensions, you can expand the overflow menu to access further actions for that extension. For example: Add the extension to an existing project by using the Quarkus Maven plugin on the command line Copy an XML snippet to add the extension to a project's pom.xml file Obtain the groupId , artifactId , and version of each extension Open the extension guide 1.4.3. Creating the Getting Started project by using the Red Hat build of Quarkus CLI You can create your getting-started project by using the Quarkus command-line interface (CLI). With the Quarkus CLI, you can create projects, manage extensions, and run build and development commands. Important The Quarkus CLI is intended for dev mode only. Red Hat does not support using the Quarkus CLI in production environments. Prerequisites You have the Quarkus CLI installed. For more information, see Preparing your environment . You have configured your Quarkus developer tools to access extensions in the extension registry. For more information, see Configuring Red Hat build of Quarkus extension registry client . Procedure To generate the project, in a command terminal, enter the following command: Note You can also specify the 'app' subcommand, for example, quarkus create app . However, it is not mandatory to do so because the 'app' subcommand is implied if it is not specified. With this command, the Quarkus project is created in a folder called 'code-with-quarkus' in your current working directory. 
By default, the groupId , artifactId , and version attributes are specified with the following default values: groupId='org.acme' artifactId='code-with-quarkus' version='1.0.0-SNAPSHOT' To change the values of the groupId , artifactId , and version attributes, issue the quarkus create command and specify the following syntax on the CLI: groupId:artifactId:version For example, quarkus create app mygroupId:myartifactid:version Note To view information about all the available Quarkus commands, specify the help parameter: Review the src/main/java/org/acme/GreetingResource.java file in a text editor: This file contains a simple REST endpoint that returns Hello from RESTEasy Reactive as a response to a request that you send to the /hello endpoint. Verification Compile and start your application in dev mode. For more information, see Compiling and starting the Red Hat build of Quarkus Getting Started project . Package and run your Getting Started project from the Quarkus CLI. For more information, see Packaging and running the Red Hat build of Quarkus Getting Started application . 1.5. Compiling and starting the Red Hat build of Quarkus Getting Started project After you create the Quarkus Getting Started project, you can compile the Hello application and verify that the hello endpoint returns "Hello from RESTEasy Reactive . This procedure uses the Quarkus built-in dev mode, so you can update the application sources and configurations while your application is running. The changes you make appear in the running application. Note The command that you use to compile your Quarkus Hello application depends on the developer tool that you installed on the machine. Prerequisites You have created the Quarkus Getting Started project. Procedure Go to the project directory. To compile the Quarkus Hello application in dev mode, use one of the following methods, depending on the developer tool that you intend to use: If you prefer to use Apache Maven, enter the following command: If you prefer to use the Quarkus command-line interface (CLI), enter the following command: If you prefer to use the Maven wrapper, enter the following command: Expected output The following extract shows an example of the expected output: Verification To send a request to the endpoint that is provided by the application, enter the following command in a new terminal window: Note The "\n" attribute automatically adds a new line before the output of the command, which prevents your terminal from printing a '%' character or putting both the result and the shell prompt on the same line. 1.6. Using Red Hat build of Quarkus dependency injection Dependency injection enables a service to be used in a way that is completely independent of any client consumption. It separates the creation of client dependencies from the client's behavior, which enables program designs to be loosely coupled. Dependency injection in Red Hat build of Quarkus is based on Quarkus ArC, which is a Contexts and Dependency Injection (CDI)-based build-time oriented dependency injection solution that is tailored for Quarkus architecture. Because ArC is a transitive dependency of quarkus-resteasy , and because quarkus-resteasy is a dependency of your project, ArC is downloaded already. Prerequisites You have created the Quarkus Getting Started project. 
Procedure To modify the application and add a companion bean, create the src/main/java/org/acme/quickstart/GreetingService.java file with the following content: package org.acme.quickstart; import jakarta.enterprise.context.ApplicationScoped; @ApplicationScoped public class GreetingService { public String greeting(String name) { return "hello " + name; } } Edit the src/main/java/org/acme/quickstart/GreetingResource.java to inject the GreetingService and use it to create a new endpoint: package org.acme.quickstart; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.jboss.resteasy.annotations.jaxrs.PathParam; @Path("/hello") public class GreetingResource { @Inject GreetingService service; @GET @Produces(MediaType.TEXT_PLAIN) @Path("/greeting/{name}") public String greeting(@PathParam String name) { return service.greeting(name); } @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return "Hello from RESTEasy Reactive"; } } If you stopped the application, enter the following command to restart it: ./mvnw quarkus:dev To verify that the endpoint returns hello quarkus , enter the following command in a new terminal window: curl -w "\n" http://localhost:8080/hello/greeting/quarkus hello quarkus 1.7. Testing your Red Hat build of Quarkus application After you compile your Quarkus Getting Started project, you can verify that it runs as expected by testing your application with the JUnit 5 framework. Note Alternatively, you can enable continuous testing of your Quarkus application. For more information, see Enabling and running continuous testing . The Quarkus project generates the following two test dependencies in the pom.xml file: quarkus-junit5 : Required for testing because it provides the @QuarkusTest annotation that controls the JUnit 5 testing framework. rest-assured : The rest-assured dependency is not required but, because it provides a convenient way to test HTTP endpoints, it is integrated. The rest-assured dependency automatically sets the correct URL, so no configuration is required. Example pom.xml file: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency> Note These tests use the REST-Assured framework, but you can use a different library if you prefer. Prerequisites You have compiled the Quarkus Getting Started project. For more information, see Compiling and starting the Red Hat build of Quarkus Getting Started project . Procedure Open the generated pom.xml file and review the contents: <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariable> </configuration> </plugin> Note the values of the following properties: The java.util.logging.manager system property is set to ensure that your application uses the correct log manager for the test. The maven.home property points to the location of the settings.xml file, in which you can store the custom Maven configuration that you want to apply to your project. 
Edit the src/test/java/org/acme/quickstart/GreetingResourceTest.java file to match the following content: package org.acme.quickstart; import io.quarkus.test.junit.QuarkusTest; import org.junit.jupiter.api.Test; import java.util.UUID; import static io.restassured.RestAssured.given; import static org.hamcrest.CoreMatchers.is; @QuarkusTest public class GreetingResourceTest { @Test public void testHelloEndpoint() { given() .when().get("/hello") .then() .statusCode(200) .body(is("Hello from RESTEasy Reactive")); } @Test public void testGreetingEndpoint() { String uuid = UUID.randomUUID().toString(); given() .pathParam("name", uuid) .when().get("/hello/greeting/{name}") .then() .statusCode(200) .body(is("hello " + uuid)); } } Note By using the QuarkusTest runner, you instruct JUnit to start the application before starting the tests. To run the tests from Maven, enter the following command: ./mvnw test Note You can also run the tests from your IDE. If you do this, stop the application first. By default, tests run on port 8081 so they do not conflict with the running application. In Quarkus, the RestAssured dependency is configured to use this port. Note If you want to use a different client, use the @TestHTTPResource annotation to directly inject the URL of the tested application into a field in the Test class. This field can be of type String , URL , or URI . You can also enter the test path in the @TestHTTPResource annotation. For example, to test a servlet that is mapped to /myservlet , add the following lines to your test: @TestHTTPResource("/myservlet") URL testUrl; If necessary, specify the test port in the quarkus.http.test-port configuration property. 1.8. Enabling and running continuous testing With Red Hat build of Quarkus, you can continuously test your code changes as you develop your applications. Quarkus provides a continuous testing feature, which you can run immediately after you make and save a change to the code. When you run continuous testing, testing is paused after you start the application. You can resume the testing as soon as the application starts. The Quarkus application determines which tests run so that tests are run only on code that has changed. The continuous testing feature of Quarkus is enabled by default. You can choose to disable continuous testing by setting the quarkus.test.continuous-testing property in the src/main/resources/application.properties file to disabled . Note If you disabled continuous testing previously and want to enable it again, you must restart your Quarkus application before you can start testing. Prerequisites You have compiled the Quarkus Getting Started application (or any other application). For more information, see Compiling and starting the Red Hat build of Quarkus Getting Started project . Procedure Start your Quarkus application. If you created your Getting Started project by using code.quarkus.redhat.com or the Quarkus CLI, the Maven wrapper is provided when you generate the project. Enter the following command from your project directory: If you created your Getting Started project by using Apache Maven, which is installed on your machine, enter the following command: If you are running continuous testing in dev mode and are using the Quarkus CLI, enter the following command: View details of the testing status in the generated output log. Note To view the output log, you might need to scroll to the bottom of the screen.
When continuous testing is enabled, the following message is displayed: When continuous testing is paused, the following message is displayed: Note By default, when continuous testing is enabled, testing is paused after you start the application. To view the keyboard commands that are available for controlling how you run your tests, see Commands for controlling continuous testing . To start running the tests, press `r ` on your keyboard. View the updated output log to monitor the test status and test results, check test statistics, and get guidance for follow-up actions. For example: Verification Make a code change. For example, in a text editor, open the src/main/java/org/acme/quickstart/GreetingsResource.java file. Change the "hello" endpoint to return "Hello world" and save the file. Verify that Quarkus immediately re-runs the test to test the changed code. View the output log to check the test results. In this example, the test checks whether the changed string contains the value "Hello from RESTEasy Reactive". The test fails because the string was changed to "Hello world". To exit continuous testing, press Ctrl-C or 'q' on your keyboard. Note If you change the value back to "hello" again, the test automatically runs again. 1.8.1. Commands for controlling continuous testing You can use hotkey commands on your keyboard to control your options for continuous testing. To view the full list of commands, press 'h' on your keyboard. The following options are available: Command Description r Re-run all tests. f Re-run all tests that failed. b Toggle 'broken only' mode. Only the tests that were failing previously are run, even if other tests are affected by your code changes. This option might be useful if you change code that is used by many tests, but you want to only review the failed tests. v Print output detailing test failures from the last test run to the console. This option might be useful if there was a considerable amount of console output since the last test run. p Pause running tests temporarily. This might be useful if you are making a lot of code changes, but do not want to get test feedback until you finish making the changes. q Exit continuous testing. o Print test output to the console. This is disabled by default. When test output is disabled, the output is filtered and saved, but not displayed on the console. You can view the test output on the Development UI. i Toggle instrumentation-based reload. Using this option does not directly affect testing, but does allow live reload to occur. This might be useful to avoid a restart if a change does not affect the structure of a class. l Toggle live reload. Using this option does not directly affect testing, but enables you to turn live reloading on and off. s Force restart. Using this option, you can force a scan of changed files and a live reload that includes the changes. Note that even if there are no code changes and live reload is disabled, the application still restarts. 1.9. Packaging and running the Red Hat build of Quarkus Getting Started application After you compile your Quarkus Getting Started project, you can package it in a JAR file and run it from the command line. Note The command that you use to package and run your Quarkus Getting Started application depends on the developer tool that you have installed on the machine. Prerequisites You have compiled the Quarkus Getting Started project. Procedure Go to the getting-started project directory. 
To package your Quarkus Getting Started project, use one of the following methods, depending on the developer tool that you intend to use: If you prefer to use Apache Maven, enter the following command: If you prefer to use the Quarkus command-line interface (CLI), enter the following command: If you prefer to use the Maven wrapper, enter the following command: This command produces the following JAR files in the /target directory: getting-started-1.0.0-SNAPSHOT.jar : Contains the classes and resources of the project. This is the regular artifact produced by the Maven build. quarkus-app/quarkus-run.jar : Is an executable JAR file. This file is not an uber-JAR file. The dependencies are copied into the target/quarkus-app/lib directory. To start your application, enter the following command: Note Before running the application, ensure that you stop dev mode (press CTRL+C), or you will have a port conflict. The Class-Path entry of the MANIFEST.MF file from the quarkus-run.jar file explicitly lists the JAR files from the lib directory. If you want to deploy your application from another location, you must deploy the whole quarkus-app directory. Important Various Red Hat build of Quarkus extensions contribute non-application endpoints that provide different kinds of information about the application. For example, the quarkus-smallrye-health , quarkus-smallrye-metrics , and quarkus-smallrye-openapi extensions. You can access these non-application endpoints by specifying a /q prefix. For example, /q/health , /q/metrics , /q/openapi . For non-application endpoints that might present a security risk, you can choose to expose those endpoints under a different TCP port by using a dedicated management interface. For more information, see the Quarkus Management interface reference guide. 1.10. JVM and native building modes The following section describes compiling a classic JVM application and compiling a native application with Mandrel or GraalVM's native-image tool. 1.10.1. Compiling an application as a classic JVM application You can compile your application as a JVM application. This option is based on the quarkus.package.type configuration property and generates one of the following files: fast-jar : A JAR file that is optimized for Quarkus and the default configuration option. Results in slightly faster startup times and slightly reduced memory usage. legacy-jar : A typical JAR file. uber-jar : A single standalone JAR file. These JAR files work on all operating systems and build much faster than native images. 1.10.2. Compiling an application into a native image You can compile your application into a native image. To do so, you set the quarkus.package.type configuration property to native . With this property, you create an executable binary file that is compiled specifically for an operating system of your choice, such as an .exe file for Windows. These files have faster start times and lower RAM consumption than Java JAR files, but their compilation takes several minutes. In addition, the maximum throughput achievable by using a native binary is lower than that of a regular JVM application because the profile-guided optimizations are missing. Using Mandrel Mandrel is a specialized distribution of GraalVM for Red Hat build of Quarkus and also the recommended approach for building native executables that target Linux containerized environments. While the Mandrel approach is perfect for embedding the compilation output in a containerized environment, only a Linux 64-bit native executable is provided.
Therefore, an outcome such as .exe is not an option. Mandrel users are encouraged to use containers to build their native executables. To use the official Mandrel image to compile an application into native mode using a local installation of Docker or Podman, enter the mvn package command with the following properties: For information about how to build a native executable by using Mandrel, see Compiling your Red Hat build of Quarkus applications to native executables . For a list of available Mandrel images, see Available Mandrel images . Using GraalVM Because Mandrel does not support macOS, you can use Oracle GraalVM to build native executables on this operating system. You can also build native executables by using Oracle GraalVM directly on bare metal Linux or Windows distributions. For more information about this process, see the Oracle GraalVM README and release notes. For information about how to build a native executable by using Oracle GraalVM, see Compiling your Red Hat build of Quarkus applications to native executables . Additional resources For more information about building, compiling, packaging, and debugging a native executable, see Building a native executable . For tips to help troubleshoot issues that might occur when attempting to run Java applications as native executables, see Tips for writing native applications . 1.11. Packaging and running the Red Hat build of Quarkus Getting Started application in native mode In native mode, the output from the application builds is a platform-dependent native binary file rather than a compressed or archived JAR file. For more information about how native mode differs from the JVM, see the JVM and native building modes chapter of the Getting Started guide. Prerequisites You have installed OpenJDK 11 or 17 and set the JAVA_HOME environment variable to specify the location of the Java SDK. You have installed Apache Maven 3.8.6 or later. You have a working C development environment . You have a working container runtime, such as Docker or Podman. Optional : If you want to use the Quarkus command-line interface (CLI), ensure that it is installed. For instructions on how to install the Quarkus CLI, refer to the community-specific information at Quarkus CLI . You have cloned and compiled the Quarkus Getting Started project . You have downloaded and installed a community or enterprise edition of GraalVM. To download and install a community or an enterprise edition of GraalVM, refer to the official Getting Started with GraalVM documentation. Alternatively, use platform-specific install tools such as sdkman , homebrew , or scoop . Note While you can use the community edition of GraalVM to complete all of the procedures in the Getting Started guide, the community edition of GraalVM is not supported in a Red Hat build of Quarkus production environment. For more information, see Compiling your Red Hat build of Quarkus applications to native executables . Procedure Configure the runtime environment by setting the GRAALVM_HOME environment variable to the GraalVM installation directory. For example: export GRAALVM_HOME=USDHOME/Development/graalvm/ On macOS, point the variable to the Home sub-directory: export GRAALVM_HOME=USDHOME/Development/graalvm/Contents/Home/ On Windows, set your environment variables by using the Control Panel.
Install the native-image tool: USD{GRAALVM_HOME}/bin/gu install native-image Set the JAVA_HOME environment variable to the GraalVM installation directory: export JAVA_HOME=USD{GRAALVM_HOME} Add the GraalVM bin directory to the path: export PATH=USD{GRAALVM_HOME}/bin:USDPATH Go to the Getting Started project folder: cd getting-started Compile a native image in one of the following ways: Using Maven: mvn clean package -Pnative Using the Quarkus CLI: quarkus build --native Verification Start the application: ./target/getting-started-1.0.0-SNAPSHOT-runner Observe the log message and verify that it contains the word native : 2023-08-30 09:51:51,505 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Red Hat build of Quarkus 3.2.9.Final) started in 0.043s. Listening on: http://0.0.0.0:8080 Additional resources For additional tips or troubleshooting information, see the Quarkus Building a native executable guide. 1.12. Additional resources Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform Revised on 2024-10-10 17:19:26 UTC | [
"<!-- Configure the Red Hat build of Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>",
"<activeProfile>red-hat-enterprise-maven-repository</activeProfile>",
"<project> <properties> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.version>3.2.12.SP1-redhat-00003</quarkus.platform.version> </properties> </project>",
"registries: - registry.quarkus.redhat.com - registry.quarkus.io",
"mvn --version",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create -DprojectGroupId=org.acme -DprojectArtifactId=getting-started -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=3.2.12.SP1-redhat-00003 -DclassName=\"org.acme.quickstart.GreetingResource\" -Dpath=\"/hello\" cd getting-started",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create -DprojectGroupId=org.acme -DprojectArtifactId=getting-started -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=3.2.12.SP1-redhat-00003 -DclassName=\"org.acme.quickstart.GreetingResource\" -Dpath=\"/hello\"",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.2.12.SP1-redhat-00003:create \"-DprojectGroupId=org.acme\" \"-DprojectArtifactId=getting-started\" \"-DplatformVersion=3.2.12.SP1-redhat-00003\" \"-DplatformGroupId=com.redhat.quarkus.platform\" \"-DclassName=org.acme.quickstart.GreetingResource\" \"-Dpath=/hello\"",
"<project> <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version>3.2.12.SP1-redhat-00003</quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <build> <plugins> <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-reactive</artifactId> </dependency>",
"import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/hello\") public class GreetingResource { @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return \"Hello from RESTEasy Reactive\"; } }",
"cd <directory_name>",
"./mvnw quarkus:dev",
"quarkus dev",
"quarkus create && cd code-with-quarkus",
"quarkus --help",
"package org.acme; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/hello\") public class GreetingResource { @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return \"Hello from RESTEasy Reactive\"; } }",
"mvn quarkus:dev",
"quarkus dev",
"./mvnw quarkus:dev",
"INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, resteasy, smallrye-context-propagation]",
"curl -w \"\\n\" http://localhost:8080/hello Hello from RESTEasy Reactive",
"package org.acme.quickstart; import jakarta.enterprise.context.ApplicationScoped; @ApplicationScoped public class GreetingService { public String greeting(String name) { return \"hello \" + name; } }",
"package org.acme.quickstart; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; import org.jboss.resteasy.annotations.jaxrs.PathParam; @Path(\"/hello\") public class GreetingResource { @Inject GreetingService service; @GET @Produces(MediaType.TEXT_PLAIN) @Path(\"/greeting/{name}\") public String greeting(@PathParam String name) { return service.greeting(name); } @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return \"Hello from RESTEasy Reactive\"; } }",
"./mvnw quarkus:dev",
"curl -w \"\\n\" http://localhost:8080/hello/greeting/quarkus hello quarkus",
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency>",
"<plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariable> </configuration> </plugin>",
"package org.acme.quickstart; import io.quarkus.test.junit.QuarkusTest; import org.junit.jupiter.api.Test; import java.util.UUID; import static io.restassured.RestAssured.given; import static org.hamcrest.CoreMatchers.is; @QuarkusTest public class GreetingResourceTest { @Test public void testHelloEndpoint() { given() .when().get(\"/hello\") .then() .statusCode(200) .body(is(\"Hello from RESTEasy Reactive\")); } @Test public void testGreetingEndpoint() { String uuid = UUID.randomUUID().toString(); given() .pathParam(\"name\", uuid) .when().get(\"/hello/greeting/{name}\") .then() .statusCode(200) .body(is(\"hello \" + uuid)); } }",
"./mvnw test",
"@TestHTTPResource(\"/myservlet\") URL testUrl;",
"./mvnw quarkus:dev",
"mvn quarkus:dev",
"quarkus dev",
"Press [e] to edit command line args (currently ''), [r] to re-run, [o] Toggle test output, [:] for the terminal, [h] for more options>",
"Press [e] to edit command line args (currently ''), [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>",
"All 2 tests are passing (0 skipped), 2 tests were run in 2094ms. Tests completed at 14:45:11. Press [e] to edit command line args (currently ''), [r] to re-run, [o] Toggle test output, [:] for the terminal, [h] for more options>",
"import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import jakarta.ws.rs.core.MediaType; @Path(\"/hello\") public class GreetingResource { @GET @Produces(MediaType.TEXT_PLAIN) public String hello() { return \"Hello world\"; } }",
"2023-09-08 15:03:45,911 ERROR [io.qua.test] (Test runner thread) Test GreetingResourceTest#testHelloEndpoint() failed: java.lang.AssertionError: 1 expectation failed. Response body doesn't match expectation. Expected: is \"Hello from RESTEasy Reactive\" Actual: Hello world at io.restassured.internal.ValidatableResponseOptionsImpl.body(ValidatableResponseOptionsImpl.java:238) at org.acme.quickstart.GreetingResourceTest.testHelloEndpoint(GreetingResourceTest.java:20) -- 1 test failed (1 passing, 0 skipped), 2 tests were run in 2076ms. Tests completed at 15:03:45. Press [e] to edit command line args (currently ''), [r] to re-run, [o] Toggle test output, [:] for the terminal, [h] for more options>",
"mvn package",
"quarkus build",
"./mvnw package",
"java -jar target/quarkus-app/quarkus-run.jar",
"-Dquarkus.package.type=native -Dquarkus.native.container-build=true -Dquarkus.native.builder-image=quay.io/quarkus/ubi-quarkus-mandrel:{MandrelVersion}-{JDK-ver-other}",
"export GRAALVM_HOME=USDHOME/Development/graalvm/",
"export GRAALVM_HOME=USDHOME/Development/graalvm/Contents/Home/",
"USD{GRAALVM_HOME}/bin/gu install native-image",
"export JAVA_HOME=USD{GRAALVM_HOME}",
"export PATH=USD{GRAALVM_HOME}/bin:USDPATH",
"cd getting-started",
"mvn clean package -Pnative",
"quarkus build --native",
"./target/getting-started-1.0.0-SNAPSHOT-runner",
"2023-08-30 09:51:51,505 INFO [io.quarkus] (main) getting-started 1.0.0-SNAPSHOT native (powered by Red Hat build of Quarkus 3.2.9.Final) started in 0.043s. Listening on: http://0.0.0.0:8080"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.2/html/getting_started_with_red_hat_build_of_quarkus/assembly_quarkus-getting-started_quarkus-getting-started |
Chapter 4. Certification testing | Chapter 4. Certification testing This chapter describes the prerequisites for certification testing, the certification workflow, and the certification requirements. 4.1. Prerequisites for certification testing Assisted installer component certification The corresponding RHEL server certification is successfully completed and posted. The corresponding Red Hat OpenShift Container Platform certification is successfully completed and posted. IPI component certification The corresponding RHEL server certification is successfully completed and posted. The corresponding Red Hat OpenShift Container Platform certification is successfully completed and posted. The corresponding bare metal driver is on the Supported Drivers List for the corresponding Red Hat OpenShift Container Platform release. 4.2. Certification workflow The Red Hat Bare Metal Hardware certification process includes the following requirements and steps: Figure 4.1. Red Hat OpenShift Container Platform Bare Metal Hardware Certification Process 4.3. Certification requirements Ensure you follow the respective Red Hat OpenShift Container Platform bare metal hardware Workflow Guide . Additional details for the certification requirements include: The Host Under Test (HUT) must already be RHEL certified. Additionally, the tests must run on a previously certified server, and all the tests prescribed in the test plan must be executed in a single run. If you have a failed test, take corrective action and execute all the tests in a single run. Open a support case if necessary for guidance. | null | https://docs.redhat.com/en/documentation/red_hat_hardware_certification/2025/html/red_hat_openshift_container_platform_hardware_bare_metal_certification_policy_guide/assembly-certification-testing_rhosp-bm-pol-certification-lifecycle
Chapter 9. Known issues | Chapter 9. Known issues This section lists the known issues for Streams for Apache Kafka 2.9 on RHEL. 9.1. JMX authentication when running in FIPS mode When running Streams for Apache Kafka in FIPS mode with JMX authentication enabled, clients may fail authentication. To work around this issue, do not enable JMX authentication while running in FIPS mode. We are investigating the issue and working to resolve it in a future release. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_rhel/known-issues-str |
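As a hedged illustration of the workaround, the following sketch enables remote JMX on a broker without authentication, for example in a development environment; the KAFKA_JMX_OPTS variable and the port are assumptions based on the standard Apache Kafka run scripts and the standard JVM JMX system properties, not settings taken from this release note:
# Assumed variable and port; authentication is left disabled per the workaround, so restrict network access accordingly
export KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"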
Applications | Applications Red Hat Advanced Cluster Management for Kubernetes 2.12 Application management | [
"apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: sample-application-set namespace: sample-gitops-namespace spec: generators: - clusterDecisionResource: configMapRef: acm-placement labelSelector: matchLabels: cluster.open-cluster-management.io/placement: sample-application-placement requeueAfterSeconds: 180 template: metadata: name: sample-application-{{name}} spec: project: default sources: [ { repoURL: https://github.com/sampleapp/apprepo.git targetRevision: main path: sample-application } ] destination: namespace: sample-application server: \"{{server}}\" syncPolicy: syncOptions: - CreateNamespace=true - PruneLast=true - Replace=true - ApplyOutOfSyncOnly=true - Validate=false automated: prune: true allowEmpty: true selfHeal: true",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: sample-application-placement namespace: sample-gitops-namespace spec: clusterSets: - sampleclusterset",
"apiVersion: apps.open-cluster-management.io/v1alpha1 kind: SubscriptionStatus metadata: labels: apps.open-cluster-management.io/cluster: <your-managed-cluster> apps.open-cluster-management.io/hosting-subscription: <your-appsub-namespace>.<your-appsub-name> name: <your-appsub-name> namespace: <your-appsub-namespace> statuses: packages: - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" Message: <detailed error. visible only if the package fails> name: frontend namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: frontend namespace: test-ns-2 phase: Deployed - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-master namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-master namespace: test-ns-2 phase: Deployed - apiVersion: v1 kind: Service lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-slave namespace: test-ns-2 phase: Deployed - apiVersion: apps/v1 kind: Deployment lastUpdateTime: \"2021-09-13T20:12:34Z\" name: redis-slave namespace: test-ns-2 phase: Deployed subscription: lastUpdateTime: \"2021-09-13T20:12:34Z\" phase: Deployed",
"apiVersion: apps.open-cluster-management.io/v1alpha1 kind: subscriptionReport metadata: labels: apps.open-cluster-management.io/cluster: \"true\" name: <your-managed-cluster-1> namespace: <your-managed-cluster-1> reportType: Cluster results: - result: deployed source: appsub-1-ns/appsub-1 // appsub 1 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 - result: failed source: appsub-2-ns/appsub-2 // appsub 2 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362 - result: propagationFailed source: appsub-3-ns/appsub-3 // appsub 3 to <your-managed-cluster-1> timestamp: nanos: 0 seconds: 1634137362",
"apiVersion: apps.open-cluster-management.io/v1alpha1 kind: subscriptionReport metadata: labels: apps.open-cluster-management.io/hosting-subscription: <your-appsub-namespace>.<your-appsub-name> name: <your-appsub-name> namespace: <your-appsub-namespace> reportType: Application resources: - apiVersion: v1 kind: Service name: redis-master2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: redis-master2 namespace: playback-ns-2 - apiVersion: v1 kind: Service name: redis-slave2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: redis-slave2 namespace: playback-ns-2 - apiVersion: v1 kind: Service name: frontend2 namespace: playback-ns-2 - apiVersion: apps/v1 kind: Deployment name: frontend2 namespace: playback-ns-2 results: - result: deployed source: cluster-1 //cluster 1 status timestamp: nanos: 0 seconds: 0 - result: failed source: cluster-3 //cluster 2 status timestamp: nanos: 0 seconds: 0 - result: propagationFailed source: cluster-4 //cluster 3 status timestamp: nanos: 0 seconds: 0 summary: deployed: 8 failed: 1 inProgress: 0 propagationFailed: 1 clusters: 10",
"% oc get managedclusterview -n <failing-clusternamespace> \"<app-name>-<app name>\"",
"% getAppSubStatus.sh -c <your-managed-cluster> -s <your-appsub-namespace> -n <your-appsub-name>",
"% getLastUpdateTime.sh -c <your-managed-cluster> -s <your-appsub-namespace> -n <your-appsub-name>",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: apps.open-cluster-management.io/do-not-delete: 'true' apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example apps.open-cluster-management.io/reconcile-option: merge pv.kubernetes.io/bind-completed: \"yes\"",
"apiVersion: v1 kind: Namespace metadata: name: hub-repo --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm namespace: hub-repo spec: pathname: [https://kubernetes-charts.storage.googleapis.com/] # URL references a valid chart URL. type: HelmRepo",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: dev namespace: ch-obj spec: type: Object storage pathname: [http://sample-ip:#####/dev] # URL is appended with the valid bucket name, which matches the channel name. secretRef: name: miniosecret gates: annotations: dev-ready: true",
"https://s3.console.aws.amazon.com/s3/buckets/sample-bucket-1 s3://sample-bucket-1/ https://sample-bucket-1.s3.amazonaws.com/",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: object-dev namespace: ch-object-dev spec: type: ObjectBucket pathname: https://s3.console.aws.amazon.com/s3/buckets/sample-bucket-1 secretRef: name: secret-dev --- apiVersion: v1 kind: Secret metadata: name: secret-dev namespace: ch-object-dev stringData: AccessKeyID: <your AWS bucket access key id> SecretAccessKey: <your AWS bucket secret access key> Region: <your AWS bucket region> type: Opaque",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: towhichcluster namespace: obj-sub-ns spec: clusterSelector: {} --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: obj-sub namespace: obj-sub-ns spec: channel: ch-object-dev/object-dev placement: placementRef: kind: PlacementRule name: towhichcluster",
"annotations: apps.open-cluster-management.io/bucket-path: <subfolder-1>",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/bucket-path: subfolder1 name: obj-sub namespace: obj-sub-ns labels: name: obj-sub spec: channel: ch-object-dev/object-dev placement: placementRef: kind: PlacementRule name: towhichcluster",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: apps.open-cluster-management.io/do-not-delete: 'true' apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example apps.open-cluster-management.io/reconcile-option: merge pv.kubernetes.io/bind-completed: \"yes\"",
"apiVersion: v1 kind: Secret metadata: name: toweraccess namespace: same-as-subscription type: Opaque stringData: token: ansible-tower-api-token host: https://ansible-tower-host-url",
"apply -f",
"apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: name: demo-job-001 namespace: default spec: tower_auth_secret: toweraccess job_template_name: Demo Job Template extra_vars: cost: 6.88 ghosts: [\"inky\",\"pinky\",\"clyde\",\"sue\"] is_enable: false other_variable: foo pacman: mrs size: 8 targets_list: - aaa - bbb - ccc version: 1.23.45 job_tags: \"provision,install,configuration\" skip_tags: \"configuration,restart\"",
"apiVersion: tower.ansible.com/v1alpha1 kind: AnsibleJob metadata: name: demo-job-001 namespace: default spec: tower_auth_secret: toweraccess workflow_template_name: Demo Workflow Template extra_vars: cost: 6.88 ghosts: [\"inky\",\"pinky\",\"clyde\",\"sue\"] is_enable: false other_variable: foo pacman: mrs size: 8 targets_list: - aaa - bbb - ccc version: 1.23.45",
"apiVersion: `image.openshift.io/v1` kind: ImageStream metadata: name: default namespace: default spec: lookupPolicy: local: true tags: - name: 'latest' from: kind: DockerImage name: 'quay.io/repository/open-cluster-management/multicluster-operators-subscription:community-latest'",
"--- apiVersion: v1 kind: Namespace metadata: name: multins --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: multins data: path: resource1 --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-2 namespace: default data: path: resource2 --- apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-3 data: path: resource3",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: subscription-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: merge apps.open-cluster-management.io/current-namespace-scoped: \"true\" spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: name: user1 age: 19",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: age: 20",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: merge spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: name: user1 age: 20",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: mergeAndOwn spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns annotations: apps.open-cluster-management.io/hosting-subscription: sub-ns/subscription-example data: name: user1 age: 20",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: subscription-example namespace: sub-ns annotations: apps.open-cluster-management.io/git-path: sample-resources apps.open-cluster-management.io/reconcile-option: replace spec: channel: channel-ns/somechannel placement: placementRef: name: dev-clusters",
"apiVersion: v1 kind: ConfigMap metadata: name: test-configmap-1 namespace: sub-ns data: age: 20",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-branch: <branch1>",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-desired-commit: <full commit number> apps.open-cluster-management.io/git-clone-depth: 100",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-mongodb-subscription annotations: apps.open-cluster-management.io/git-path: stable/ibm-mongodb-dev apps.open-cluster-management.io/git-tag: <v1.0> apps.open-cluster-management.io/git-clone-depth: 100",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: open-cluster-management:subscription-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: open-cluster-management:subscription-admin",
"edit clusterrolebinding open-cluster-management:subscription-admin",
"subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: example-name - apiGroup: rbac.authorization.k8s.io kind: Group name: example-group-name - kind: ServiceAccount name: my-service-account namespace: my-service-account-namespace - apiGroup: rbac.authorization.k8s.io kind: User name: 'system:serviceaccount:my-service-account-namespace:my-service-account'",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/github-path: sub2 name: demo-subscription namespace: demo-ns spec: channel: demo-ns/somechannel allow: - apiVersion: policy.open-cluster-management.io/v1 kinds: - Policy - apiVersion: v1 kinds: - Deployment deny: - apiVersion: v1 kinds: - Service - ConfigMap placement: local: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: annotations: apps.open-cluster-management.io/github-path: myapplication name: demo-subscription namespace: demo-ns spec: channel: demo-ns/somechannel deny: - apiVersion: v1 kinds: - Service - ConfigMap placement: placementRef: name: demo-placement kind: Placement",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: git-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: <value from the list> spec: type: GitHub pathname: <Git URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-subscription annotations: apps.open-cluster-management.io/git-path: <application1> apps.open-cluster-management.io/git-branch: <branch1> spec: channel: sample/git-channel placement: local: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: git-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: high spec: type: GitHub pathname: <Git URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: git-subscription annotations: apps.open-cluster-management.io/git-path: application1 apps.open-cluster-management.io/git-branch: branch1 apps.open-cluster-management.io/reconcile-rate: \"off\" spec: channel: sample/git-channel placement: local: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: low spec: type: HelmRepo pathname: <Helm repo URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: helm-subscription spec: channel: sample/helm-channel name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: nginx-ingress-simple packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: helm-channel namespace: sample annotations: apps.open-cluster-management.io/reconcile-rate: high spec: type: HelmRepo pathname: <Helm repo URL> --- apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: helm-subscription annotations: apps.open-cluster-management.io/reconcile-rate: \"off\" spec: channel: sample/helm-channel name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: nginx-ingress-simple packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: true",
"annotate mch -n open-cluster-management multiclusterhub mch-pause=true --overwrite=true",
"edit deployment -n open-cluster-management multicluster-operators-hub-subscription",
"annotate mch -n open-cluster-management multiclusterhub mch-pause=false --overwrite=true",
"command: - /usr/local/bin/multicluster-operators-subscription - --sync-interval=60 - --retry-period=52",
"apiVersion: v1 kind: Secret metadata: name: my-git-secret namespace: channel-ns data: user: dXNlcgo= accessToken: cGFzc3dvcmQK",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: sample-channel namespace: channel-ns spec: type: Git pathname: <Git HTTPS URL> secretRef: name: my-git-secret",
"x509: certificate is valid for localhost.com, not localhost",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: labels: name: sample-channel namespace: sample spec: type: GitHub pathname: <Git HTTPS URL> insecureSkipVerify: true",
"apiVersion: v1 kind: ConfigMap metadata: name: git-ca namespace: channel-ns data: caCerts: | # Git server root CA -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE----- # Git server intermediate CA 1 -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N 
aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE----- # Git server intermediate CA 2 -----BEGIN CERTIFICATE----- MIIF5DCCA8wCCQDInYMol7LSDTANBgkqhkiG9w0BAQsFADCBszELMAkGA1UEBhMC Q0ExCzAJBgNVBAgMAk9OMRAwDgYDVQQHDAdUb3JvbnRvMQ8wDQYDVQQKDAZSZWRI YXQxDDAKBgNVBAsMA0FDTTFFMEMGA1UEAww8Z29ncy1zdmMtZGVmYXVsdC5hcHBz LnJqdW5nLWh1YjEzLmRldjA2LnJlZC1jaGVzdGVyZmllbGQuY29tMR8wHQYJKoZI hvcNAQkBFhByb2tlakByZWRoYXQuY29tMB4XDTIwMTIwMzE4NTMxMloXDTIzMDky MzE4NTMxMlowgbMxCzAJBgNVBAYTAkNBMQswCQYDVQQIDAJPTjEQMA4GA1UEBwwH VG9yb250bzEPMA0GA1UECgwGUmVkSGF0MQwwCgYDVQQLDANBQ00xRTBDBgNVBAMM PGdvZ3Mtc3ZjLWRlZmF1bHQuYXBwcy5yanVuZy1odWIxMy5kZXYwNi5yZWQtY2hl c3RlcmZpZWxkLmNvbTEfMB0GCSqGSIb3DQEJARYQcm9rZWpAcmVkaGF0LmNvbTCC AiIwDQYJKoZIhvcNAQEBBQADggIPADCCAgoCggIBAM3nPK4mOQzaDAo6S3ZJ0Ic3 U9p/NLodnoTIC+cn0q8qNCAjf13zbGB3bfN9Zxl8Q5fv+wYwHrUOReCp6U/InyQy 6OS3gj738F635inz1KdyhKtlWW2p9Ye9DUtx1IlfHkDVdXtynjHQbsFNIdRHcpQP upM5pwPC3BZXqvXChhlfAy2m4yu7vy0hO/oTzWIwNsoL5xt0Lw4mSyhlEip/t8lU xn2y8qhm7MiIUpXuwWhSYgCrEVqmTcB70Pc2YRZdSFolMN9Et70MjQN0TXjoktH8 PyASJIKIRd+48yROIbUn8rj4aYYBsJuoSCjJNwujZPbqseqUr42+v+Qp2bBj1Sjw +SEZfHTvSv8AqX0T6eo6njr578+DgYlwsS1A1zcAdzp8qmDGqvJDzwcnQVFmvaoM gGHCdJihfy3vDhxuZRDse0V4Pz6tl6iklM+tHrJL/bdL0NdfJXNCqn2nKrM51fpw diNXs4Zn3QSStC2x2hKnK+Q1rwCSEg/lBawgxGUslTboFH77a+Kwu4Oug9ibtm5z ISs/JY4Kiy4C2XJOltOR2XZYkdKaX4x3ctbrGaD8Bj+QHiSAxaaSXIX+VbzkHF2N aD5ijFUopjQEKFrYh3O93DB/URIQ+wHVa6+Kvu3uqE0cg6pQsLpbFVQ/I8xHvt9L kYy6z6V/nj9ZYKQbq/kPAgMBAAEwDQYJKoZIhvcNAQELBQADggIBAKZuc+lewYAv jaaSeRDRoToTb/yN0Xsi69UfK0aBdvhCa7/0rPHcv8hmUBH3YgkZ+CSA5ygajtL4 g2E8CwIO9ZjZ6l+pHCuqmNYoX1wdjaaDXlpwk8hGTSgy1LsOoYrC5ZysCi9Jilu9 PQVGs/vehQRqLV9uZBigG6oZqdUqEimaLHrOcEAHB5RVcnFurz0qNbT+UySjsD63 9yJdCeQbeKAR9SC4hG13EbM/RZh0lgFupkmGts7QYULzT+oA0cCJpPLQl6m6qGyE kh9aBB7FLykK1TeXVuANlNU4EMyJ/e+uhNkS9ubNJ3vuRuo+ECHsha058yi16JC9 NkZqP+df4Hp85sd+xhrgYieq7QGX2KOXAjqAWo9htoBhOyW3mm783A7WcOiBMQv0 2UGZxMsRjlP6UqB08LsV5ZBAefElR344sokJR1de/Sx2J9J/am7yOoqbtKpQotIA XSUkATuuQw4ctyZLDkUpzrDzgd2Bt+aawF6sD2YqycaGFwv2YD9t1YlD6F4Wh8Mc 20Qu5EGrkQTCWZ9pOHNSa7YQdmJzwbxJC4hqBpBRAJFI2fAIqFtyum6/8ZN9nZ9K FSEKdlu+xeb6Y6xYt0mJJWF6mCRi4i7IL74EU/VNXwFmfP6IadliUOST3w5t92cB M26t73UCExXMXTCQvnp0ki84PeR1kRk4 -----END CERTIFICATE-----",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: configMapRef: name: git-ca pathname: <Git HTTPS URL> type: Git",
"apiVersion: v1 kind: Secret metadata: name: git-ssh-key namespace: channel-ns data: sshKey: LS0tLS1CRUdJTiBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0KYjNCbGJuTnphQzFyWlhrdGRqRUFBQUFBQ21GbGN6STFOaTFqZEhJQUFBQUdZbU55ZVhCMEFBQUFHQUFBQUJDK3YySHhWSIwCm8zejh1endzV3NWODMvSFVkOEtGeVBmWk5OeE5TQUgcFA3Yk1yR2tlRFFPd3J6MGIKOUlRM0tKVXQzWEE0Zmd6NVlrVFVhcTJsZWxxVk1HcXI2WHF2UVJ5Mkc0NkRlRVlYUGpabVZMcGVuaGtRYU5HYmpaMmZOdQpWUGpiOVhZRmd4bTNnYUpJU3BNeTFLWjQ5MzJvOFByaDZEdzRYVUF1a28wZGdBaDdndVpPaE53b0pVYnNmYlZRc0xMS1RrCnQwblZ1anRvd2NEVGx4TlpIUjcwbGVUSHdGQTYwekM0elpMNkRPc3RMYjV2LzZhMjFHRlMwVmVXQ3YvMlpMOE1sbjVUZWwKSytoUWtxRnJBL3BUc1ozVXNjSG1GUi9PV25FPQotLS0tLUVORCBPUEVOU1NIIFBSSVZBVEUgS0VZLS0tLS0K passphrase: cGFzc3cwcmQK type: Opaque",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: secretRef: name: git-ssh-key pathname: <Git SSH URL> type: Git",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: my-channel namespace: channel-ns spec: secretRef: name: git-ssh-key pathname: <Git SSH URL> type: Git insecureSkipVerify: true",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 spec: watchHelmNamespaceScopedResources: true channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\"",
"packageOverrides: - packageName: nginx-ingress packageOverrides: - path: spec value: my-override-values 1",
"packageOverrides: - packageName: nginx-ingress packageAlias: my-helm-release-name",
"apply -f filename.yaml",
"get application.app",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: namespace: # Each channel needs a unique namespace, except Git channel. spec: sourceNamespaces: type: pathname: secretRef: name: gates: annotations: labels:",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: dev namespace: ch-obj spec: type: ObjectBucket pathname: [http://9.28.236.243:xxxx/dev] # URL is appended with the valid bucket name, which matches the channel name. secretRef: name: miniosecret gates: annotations: dev-ready: true",
"apiVersion: v1 kind: Namespace metadata: name: hub-repo --- apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: Helm namespace: hub-repo spec: pathname: [https://9.21.107.150:8443/helm-repo/charts] # URL references a valid chart URL. insecureSkipVerify: true type: HelmRepo",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: predev-ch namespace: ns-ch labels: app: nginx-app-details spec: type: HelmRepo pathname: https://kubernetes-charts.storage.googleapis.com/",
"apiVersion: apps.open-cluster-management.io/v1 kind: Channel metadata: name: hive-cluster-gitrepo namespace: gitops-cluster-lifecycle spec: type: Git pathname: https://github.com/open-cluster-management/gitops-clusters.git secretRef: name: github-gitops-clusters --- apiVersion: v1 kind: Secret metadata: name: github-gitops-clusters namespace: gitops-cluster-lifecycle data: user: dXNlcgo= # Value of user and accessToken is Base 64 coded. accessToken: cGFzc3dvcmQ",
"apply -f filename.yaml",
"get application.app",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: namespace: labels: spec: sourceNamespace: source: channel: name: packageFilter: version: labelSelector: matchLabels: package: component: annotations: packageOverrides: - packageName: packageAlias: - path: value: placement: local: clusters: name: clusterSelector: placementRef: name: kind: Placement overrides: clusterName: clusterOverrides: path: value:",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: placementRef: kind: Placement name: towhichcluster overrides: - clusterName: \"/\" clusterOverrides: - path: \"metadata.namespace\" value: default",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch secondaryChannel: ns-ch-2/predev-ch-2 name: nginx-ingress",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: placementRef: kind: Placement name: towhichcluster timewindow: windowtype: \"active\" location: \"America/Los_Angeles\" daysofweek: [\"Monday\", \"Wednesday\", \"Friday\"] hours: - start: \"10:20AM\" end: \"10:30AM\" - start: \"12:40PM\" end: \"1:40PM\"",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: simple namespace: default spec: channel: ns-ch/predev-ch name: nginx-ingress packageOverrides: - packageName: nginx-ingress packageAlias: my-nginx-ingress-releaseName packageOverrides: - path: spec value: defaultBackend: replicaCount: 3 placement: local: false",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: nginx namespace: ns-sub-1 labels: app: nginx-app-details spec: channel: ns-ch/predev-ch name: nginx-ingress packageFilter: version: \"1.36.x\" placement: clusters: - name: my-development-cluster-1 packageOverrides: - packageName: my-server-integration-prod packageOverrides: - path: spec value: persistence: enabled: false useDynamicProvisioning: false license: accept tls: hostname: my-mcm-cluster.icp sso: registrationImage: pullSecret: hub-repo-docker-secret",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: sample-subscription namespace: default annotations: apps.open-cluster-management.io/git-path: sample_app_1/dir1 apps.open-cluster-management.io/git-branch: branch1 spec: channel: default/sample-channel placement: placementRef: kind: Placement name: dev-clusters",
"apiVersion: apps.open-cluster-management.io/v1 kind: Subscription metadata: name: example-subscription namespace: default spec: channel: some/channel packageOverrides: - packageName: kustomization packageOverrides: - value: | patchesStrategicMerge: - patch.yaml",
"create route passthrough --service=multicluster-operators-subscription -n open-cluster-management",
"apiVersion: v1 kind: Secret metadata: name: my-github-webhook-secret data: secret: BASE64_ENCODED_SECRET",
"annotate channel.apps.open-cluster-management.io <channel name> apps.open-cluster-management.io/webhook-enabled=\"true\"",
"annotate channel.apps.open-cluster-management.io <channel name> apps.open-cluster-management.io/webhook-secret=\"<the_secret_name>\"",
"apply -f filename.yaml",
"get application.app",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: namespace: resourceVersion: labels: app: chart: release: heritage: selfLink: uid: spec: clusterSelector: matchLabels: datacenter: environment: clusterReplicas: clusterConditions: ResourceHint: type: order: Policies:",
"status: decisions: clusterName: clusterNamespace:",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: gbapp-gbapp namespace: development labels: app: gbapp spec: clusterSelector: matchLabels: environment: Dev clusterReplicas: 1 status: decisions: - clusterName: local-cluster clusterNamespace: local-cluster",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: towhichcluster namespace: ns-sub-1 labels: app: nginx-app-details spec: clusterReplicas: 1 clusterConditions: - type: ManagedClusterConditionAvailable status: \"True\" clusterSelector: matchExpressions: - key: environment operator: In values: - dev",
"apply -f filename.yaml",
"get application.app",
"apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: namespace: spec: selector: matchLabels: label_name: label_value",
"apiVersion: app.k8s.io/v1beta1 kind: Application metadata: name: my-application namespace: my-namespace spec: selector: matchLabels: my-label: my-label-value"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/applications/index |
20.2. General Changes In Internationalization | 20.2. General Changes In Internationalization New yum-langpacks Plug-In A new Yum plug-in, yum-langpacks, enables users to install translation subpackages for various packages for the current language locale. This plug-in also provides Yum commands that show available language support, display the list of installed languages, allow users to install new languages, remove installed languages, and show which packages will be installed when the user wants to install new language support. These changes can be illustrated by the following example: To install language packs for the Marathi or Czech languages in Red Hat Enterprise Linux 6, run: To install language packs for the Marathi or Czech languages in Red Hat Enterprise Linux 7, run: Please refer to the yum-langpacks(8) man page for more information. Changing Locale and Keyboard Layout Settings localectl is a new utility used to query and change the system locale and keyboard layout settings; the settings are used in text consoles and inherited by desktop environments. localectl also accepts a hostname argument to administer remote systems over SSH. | [
"]# yum groupinstall marathi-support ~]# yum groupinstall czech-support",
"~]# yum langinstall mr ~]# yum langinstall cs"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-internationalization-general_changes_in_internationalization |
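For illustration, a minimal localectl session might look like the following; the locale cs_CZ.UTF-8, the keymap cz, and the remote host name are placeholder values, not taken from the release notes:
~]# localectl status                               # query the current system locale and keyboard layout
~]# localectl list-locales | grep cs_              # list the locales available on the system
~]# localectl set-locale LANG=cs_CZ.UTF-8          # change the system locale
~]# localectl set-keymap cz                        # change the console keyboard layout
~]# localectl -H root@remote.example.com status    # query a remote system over SSH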
Appendix A. Using your subscription | Appendix A. Using your subscription Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired Streams for Apache Kafka product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2025-03-19 12:54:27 UTC | [
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/using_your_subscription |
3.8. Changing Default User Configuration | 3.8. Changing Default User Configuration The realmd system supports modifying the default user home directory and shell POSIX attributes. For example, this might be required when some POSIX attributes are not set in the Windows user accounts or when these attributes are different from POSIX attributes of other users on the local system. Important Changing the configuration as described in this section only works if the realm join command has not been run yet. If a system is already joined, change the default home directory and shell in the /etc/sssd/sssd.conf file, as described in the section called "Optional: Configure User Home Directories and Shells" . To override the default home directory and shell POSIX attributes, specify the following options in the [users] section in the /etc/realmd.conf file: default-home The default-home option sets a template for creating a home directory for accounts that have no home directory explicitly set. A common format is /home/%d/%u , where %d is the domain name and %u is the user name. default-shell The default-shell option defines the default user shell. It accepts any supported system shell. For example: For more information about the options, see the realmd.conf (5) man page. | [
"[users] default-home = /home/%u default-shell = /bin/bash"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/config-realmd-users |
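For a system that is already joined, the equivalent defaults are typically set in the /etc/sssd/sssd.conf file instead. A minimal sketch, assuming an example domain named ad.example.com; the domain name and values are placeholders, and the option names should be verified against the sssd.conf(5) man page:
[domain/ad.example.com]
fallback_homedir = /home/%d/%u
default_shell = /bin/bash
~]# systemctl restart sssd    # apply the changed configuration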
Chapter 1. Overview of load balancing in Satellite | Chapter 1. Overview of load balancing in Satellite You can configure your Satellite environment to use a load balancer to distribute host requests and network load across multiple Capsule Servers. This results in an improved performance on Capsule Servers and improved performance and stability for host connections to Satellite. In a load-balanced setup, Capsule functionality supported for load balancing continues to work as expected when one Capsule Server is down for planned or unplanned maintenance. 1.1. Components of a load-balanced setup A load-balanced setup in a Satellite environment consists of the following components: Satellite Server Two or more Capsule Servers A load balancer Multiple hosts A host sends a request to the TCP load balancer. The load balancer receives the request and determines which Capsule Server will handle the request to ensure optimal performance and availability. Figure 1.1. Components of a load-balanced setup 1.2. Services and features supported in a load-balanced setup A load balancer in Satellite distributes load only for the following services and features: Registering hosts Providing content to hosts Configuring hosts by using Puppet Other Satellite services, such as provisioning, virt-who , or remote execution, go directly through the individual Capsules on which these services are running. 1.3. Additional maintenance required for load balancing Configuring Capsules to use a load balancer results in a more complex environment and requires additional maintenance. The following additional steps are required for load balancing: You must ensure that all Capsules have the same content. If you publish a content view version on Satellite, synchronize it to all Capsule Servers. You must upgrade each Capsule in sequence. | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_capsules_with_a_load_balancer/overview-of-load-balancing-in-project_load-balancing |
Chapter 77. subnet | Chapter 77. subnet This chapter describes the commands under the subnet command. 77.1. subnet create Create a subnet Usage: Table 77.1. Positional Arguments Value Summary <name> New subnet name Table 77.2. Optional Arguments Value Summary -h, --help Show this help message and exit --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --subnet-pool <subnet-pool> Subnet pool from which this subnet will obtain a cidr (Name or ID) --use-prefix-delegation USE_PREFIX_DELEGATION Use prefix-delegation if ip is ipv6 format and ip would be delegated externally --use-default-subnet-pool Use default subnet pool for --ip-version --prefix-length <prefix-length> Prefix length for subnet allocation from subnet pool --subnet-range <subnet-range> Subnet range in cidr notation (required if --subnet- pool is not specified, optional otherwise) --dhcp Enable dhcp (default) --no-dhcp Disable dhcp --gateway <gateway> Specify a gateway for the subnet. the three options are: <ip-address>: Specific IP address to use as the gateway, auto : Gateway address should automatically be chosen from within the subnet itself, none : This subnet will not use a gateway, e.g.: --gateway 192.168.9.1, --gateway auto, --gateway none (default is auto ). --ip-version {4,6} Ip version (default is 4). note that when subnet pool is specified, IP version is determined from the subnet pool and this option is ignored. --ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac} Ipv6 ra (router advertisement) mode, valid modes: [dhcpv6-stateful, dhcpv6-stateless, slaac] --ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac} Ipv6 address mode, valid modes: [dhcpv6-stateful, dhcpv6-stateless, slaac] --network-segment <network-segment> Network segment to associate with this subnet (name or ID) --network <network> Network this subnet belongs to (name or id) --description <description> Set subnet description --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses for this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to add multiple IP addresses) --dns-nameserver <dns-nameserver> Dns server for this subnet (repeat option to set multiple DNS servers) --host-route destination=<subnet>,gateway=<ip-address> Additional route for this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to add multiple routes) --service-type <service-type> Service type for this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to set multiple service types) --tag <tag> Tag to be added to the subnet (repeat option to set multiple tags) --no-tag No tags associated with the subnet Table 77.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 77.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 77.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.2. subnet delete Delete subnet(s) Usage: Table 77.7. Positional Arguments Value Summary <subnet> Subnet(s) to delete (name or id) Table 77.8. Optional Arguments Value Summary -h, --help Show this help message and exit 77.3. subnet list List subnets Usage: Table 77.9. Optional Arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --ip-version <ip-version> List only subnets of given ip version in output. Allowed values for IP version are 4 and 6. --dhcp List subnets which have dhcp enabled --no-dhcp List subnets which have dhcp disabled --service-type <service-type> List only subnets of a given service type in output e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to list multiple service types) --project <project> List only subnets which belong to a given project in output (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --network <network> List only subnets which belong to a given network in output (name or ID) --gateway <gateway> List only subnets of given gateway ip in output --name <name> List only subnets of given name in output --subnet-range <subnet-range> List only subnets of given subnet range (in cidr notation) in output e.g.: --subnet-range 10.10.0.0/16 --tags <tag>[,<tag>,... ] List subnets which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List subnets which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude subnets which have all given tag(s) (comma- separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude subnets which have any given tag(s) (comma- separated list of tags) Table 77.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 77.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.12. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 77.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.4. subnet pool create Create subnet pool Usage: Table 77.14. Positional Arguments Value Summary <name> Name of the new subnet pool Table 77.15. 
Optional Arguments Value Summary -h, --help Show this help message and exit --pool-prefix <pool-prefix> Set subnet pool prefixes (in cidr notation) (repeat option to set multiple prefixes) --default-prefix-length <default-prefix-length> Set subnet pool default prefix length --min-prefix-length <min-prefix-length> Set subnet pool minimum prefix length --max-prefix-length <max-prefix-length> Set subnet pool maximum prefix length --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --address-scope <address-scope> Set address scope associated with the subnet pool (name or ID), prefixes must be unique across address scopes --default Set this as a default subnet pool --no-default Set this as a non-default subnet pool --share Set this subnet pool as shared --no-share Set this subnet pool as not shared --description <description> Set subnet pool description --default-quota <num-ip-addresses> Set default per-project quota for this subnet pool as the number of IP addresses that can be allocated from the subnet pool --tag <tag> Tag to be added to the subnet pool (repeat option to set multiple tags) --no-tag No tags associated with the subnet pool Table 77.16. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 77.17. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 77.18. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.19. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.5. subnet pool delete Delete subnet pool(s) Usage: Table 77.20. Positional Arguments Value Summary <subnet-pool> Subnet pool(s) to delete (name or id) Table 77.21. Optional Arguments Value Summary -h, --help Show this help message and exit 77.6. subnet pool list List subnet pools Usage: Table 77.22. Optional Arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output --share List subnet pools shared between projects --no-share List subnet pools not shared between projects --default List subnet pools used as the default external subnet pool --no-default List subnet pools not used as the default external subnet pool --project <project> List subnet pools according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --name <name> List only subnet pools of given name in output --address-scope <address-scope> List only subnet pools of given address scope in output (name or ID) --tags <tag>[,<tag>,... ] List subnet pools which have all given tag(s) (comma- separated list of tags) --any-tags <tag>[,<tag>,... ] List subnet pools which have any given tag(s) (comma- separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude subnet pools which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... 
] Exclude subnet pools which have any given tag(s) (Comma-separated list of tags) Table 77.23. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 77.24. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 77.25. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 77.26. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.7. subnet pool set Set subnet pool properties Usage: Table 77.27. Positional Arguments Value Summary <subnet-pool> Subnet pool to modify (name or id) Table 77.28. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> Set subnet pool name --pool-prefix <pool-prefix> Set subnet pool prefixes (in cidr notation) (repeat option to set multiple prefixes) --default-prefix-length <default-prefix-length> Set subnet pool default prefix length --min-prefix-length <min-prefix-length> Set subnet pool minimum prefix length --max-prefix-length <max-prefix-length> Set subnet pool maximum prefix length --address-scope <address-scope> Set address scope associated with the subnet pool (name or ID), prefixes must be unique across address scopes --no-address-scope Remove address scope associated with the subnet pool --default Set this as a default subnet pool --no-default Set this as a non-default subnet pool --description <description> Set subnet pool description --default-quota <num-ip-addresses> Set default per-project quota for this subnet pool as the number of IP addresses that can be allocated from the subnet pool --tag <tag> Tag to be added to the subnet pool (repeat option to set multiple tags) --no-tag Clear tags associated with the subnet pool. specify both --tag and --no-tag to overwrite current tags 77.8. subnet pool show Display subnet pool details Usage: Table 77.29. Positional Arguments Value Summary <subnet-pool> Subnet pool to display (name or id) Table 77.30. Optional Arguments Value Summary -h, --help Show this help message and exit Table 77.31. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 77.32. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 77.33. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.34. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.9. 
subnet pool unset Unset subnet pool properties Usage: Table 77.35. Positional Arguments Value Summary <subnet-pool> Subnet pool to modify (name or id) Table 77.36. Optional Arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag to be removed from the subnet pool (repeat option to remove multiple tags) --all-tag Clear all tags associated with the subnet pool 77.10. subnet set Set subnet properties Usage: Table 77.37. Positional Arguments Value Summary <subnet> Subnet to modify (name or id) Table 77.38. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> Updated name of the subnet --dhcp Enable dhcp --no-dhcp Disable dhcp --gateway <gateway> Specify a gateway for the subnet. the options are: <ip-address>: Specific IP address to use as the gateway, none : This subnet will not use a gateway, e.g.: --gateway 192.168.9.1, --gateway none. --network-segment <network-segment> Network segment to associate with this subnet (name or ID). It is only allowed to set the segment if the current value is None , the network must also have only one segment and only one subnet can exist on the network. --description <description> Set subnet description --tag <tag> Tag to be added to the subnet (repeat option to set multiple tags) --no-tag Clear tags associated with the subnet. specify both --tag and --no-tag to overwrite current tags --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses for this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to add multiple IP addresses) --no-allocation-pool Clear associated allocation-pools from the subnet. Specify both --allocation-pool and --no-allocation- pool to overwrite the current allocation pool information. --dns-nameserver <dns-nameserver> Dns server for this subnet (repeat option to set multiple DNS servers) --no-dns-nameservers Clear existing information of dns nameservers. specify both --dns-nameserver and --no-dns-nameserver to overwrite the current DNS Nameserver information. --host-route destination=<subnet>,gateway=<ip-address> Additional route for this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to add multiple routes) --no-host-route Clear associated host-routes from the subnet. specify both --host-route and --no-host-route to overwrite the current host route information. --service-type <service-type> Service type for this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to set multiple service types) 77.11. subnet show Display subnet details Usage: Table 77.39. Positional Arguments Value Summary <subnet> Subnet to display (name or id) Table 77.40. Optional Arguments Value Summary -h, --help Show this help message and exit Table 77.41. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 77.42. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 77.43. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 77.44. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 77.12. subnet unset Unset subnet properties Usage: Table 77.45. Positional Arguments Value Summary <subnet> Subnet to modify (name or id) Table 77.46. Optional Arguments Value Summary -h, --help Show this help message and exit --allocation-pool start=<ip-address>,end=<ip-address> Allocation pool ip addresses to be removed from this subnet e.g.: start=192.168.199.2,end=192.168.199.254 (repeat option to unset multiple allocation pools) --dns-nameserver <dns-nameserver> Dns server to be removed from this subnet (repeat option to unset multiple DNS servers) --host-route destination=<subnet>,gateway=<ip-address> Route to be removed from this subnet e.g.: destination=10.10.0.0/16,gateway=192.168.71.254 destination: destination subnet (in CIDR notation) gateway: nexthop IP address (repeat option to unset multiple host routes) --service-type <service-type> Service type to be removed from this subnet e.g.: network:floatingip_agent_gateway. Must be a valid device owner value for a network port (repeat option to unset multiple service types) --tag <tag> Tag to be removed from the subnet (repeat option to remove multiple tags) --all-tag Clear all tags associated with the subnet | [
"openstack subnet create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--project <project>] [--project-domain <project-domain>] [--subnet-pool <subnet-pool> | --use-prefix-delegation USE_PREFIX_DELEGATION | --use-default-subnet-pool] [--prefix-length <prefix-length>] [--subnet-range <subnet-range>] [--dhcp | --no-dhcp] [--gateway <gateway>] [--ip-version {4,6}] [--ipv6-ra-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}] [--ipv6-address-mode {dhcpv6-stateful,dhcpv6-stateless,slaac}] [--network-segment <network-segment>] --network <network> [--description <description>] [--allocation-pool start=<ip-address>,end=<ip-address>] [--dns-nameserver <dns-nameserver>] [--host-route destination=<subnet>,gateway=<ip-address>] [--service-type <service-type>] [--tag <tag> | --no-tag] <name>",
"openstack subnet delete [-h] <subnet> [<subnet> ...]",
"openstack subnet list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long] [--ip-version <ip-version>] [--dhcp | --no-dhcp] [--service-type <service-type>] [--project <project>] [--project-domain <project-domain>] [--network <network>] [--gateway <gateway>] [--name <name>] [--subnet-range <subnet-range>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]",
"openstack subnet pool create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --pool-prefix <pool-prefix> [--default-prefix-length <default-prefix-length>] [--min-prefix-length <min-prefix-length>] [--max-prefix-length <max-prefix-length>] [--project <project>] [--project-domain <project-domain>] [--address-scope <address-scope>] [--default | --no-default] [--share | --no-share] [--description <description>] [--default-quota <num-ip-addresses>] [--tag <tag> | --no-tag] <name>",
"openstack subnet pool delete [-h] <subnet-pool> [<subnet-pool> ...]",
"openstack subnet pool list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long] [--share | --no-share] [--default | --no-default] [--project <project>] [--project-domain <project-domain>] [--name <name>] [--address-scope <address-scope>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]",
"openstack subnet pool set [-h] [--name <name>] [--pool-prefix <pool-prefix>] [--default-prefix-length <default-prefix-length>] [--min-prefix-length <min-prefix-length>] [--max-prefix-length <max-prefix-length>] [--address-scope <address-scope> | --no-address-scope] [--default | --no-default] [--description <description>] [--default-quota <num-ip-addresses>] [--tag <tag>] [--no-tag] <subnet-pool>",
"openstack subnet pool show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <subnet-pool>",
"openstack subnet pool unset [-h] [--tag <tag> | --all-tag] <subnet-pool>",
"openstack subnet set [-h] [--name <name>] [--dhcp | --no-dhcp] [--gateway <gateway>] [--network-segment <network-segment>] [--description <description>] [--tag <tag>] [--no-tag] [--allocation-pool start=<ip-address>,end=<ip-address>] [--no-allocation-pool] [--dns-nameserver <dns-nameserver>] [--no-dns-nameservers] [--host-route destination=<subnet>,gateway=<ip-address>] [--no-host-route] [--service-type <service-type>] <subnet>",
"openstack subnet show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <subnet>",
"openstack subnet unset [-h] [--allocation-pool start=<ip-address>,end=<ip-address>] [--dns-nameserver <dns-nameserver>] [--host-route destination=<subnet>,gateway=<ip-address>] [--service-type <service-type>] [--tag <tag> | --all-tag] <subnet>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/subnet |
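For illustration, a short sequence that exercises the options documented above might look like the following; the network name, address ranges, and tag are placeholder values:
openstack subnet create --network my-network \
  --subnet-range 192.168.199.0/24 \
  --gateway 192.168.199.1 \
  --allocation-pool start=192.168.199.10,end=192.168.199.200 \
  --dns-nameserver 8.8.8.8 \
  my-subnet                                          # create a subnet on an existing network
openstack subnet set --tag production my-subnet      # add a tag to the subnet
openstack subnet show my-subnet                      # display the subnet details
openstack subnet unset --tag production my-subnet    # remove the tag again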
10.4. Cluster Daemon crashes | 10.4. Cluster Daemon crashes RGManager has a watchdog process that reboots the host if the main rgmanager process fails unexpectedly. This causes the cluster node to get fenced and rgmanager to recover the service on another host. When the watchdog daemon detects that the main rgmanager process has crashed, it reboots the cluster node, and the active cluster nodes detect that the cluster node has left and evict it from the cluster. The process with the lower process ID (PID) is the watchdog process that takes action if its child (the process with the higher PID number) crashes. Capturing the core of the process with the higher PID number using gcore can aid in troubleshooting a crashed daemon. Install the packages that are required to capture and view the core, and ensure that the rgmanager and rgmanager-debuginfo packages are the same version, or the captured application core might be unusable. 10.4.1. Capturing the rgmanager Core at Runtime Two rgmanager processes are running once rgmanager has started. You must capture the core for the rgmanager process with the higher PID. The following is an example output from the ps command showing two processes for rgmanager . In the following example, the pidof program is used to automatically determine the higher-numbered PID, which is the appropriate PID to use when creating the core. The full command captures the application core for process 22483, which has the higher PID number. | [
"yum -y --enablerepo=rhel-debuginfo install gdb rgmanager-debuginfo",
"ps aux | grep rgmanager | grep -v grep root 22482 0.0 0.5 23544 5136 ? S<Ls Dec01 0:00 rgmanager root 22483 0.0 0.2 78372 2060 ? S<l Dec01 0:47 rgmanager",
"gcore -o /tmp/rgmanager-USD(date '+%F_%s').core USD(pidof -s rgmanager)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-clustcrash-CA |
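After the core has been written, it can be examined with gdb. A minimal sketch, assuming the rgmanager binary is installed at /usr/sbin/rgmanager and using an example core file name (gcore appends the PID to the name passed with -o); adjust both paths to the actual files:
~]# gdb /usr/sbin/rgmanager /tmp/rgmanager-2013-12-01_1385900000.core.22483
(gdb) bt             # print the backtrace of the captured process
(gdb) info threads   # list the threads present in the core
(gdb) quit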
Chapter 14. Scaling Multicloud Object Gateway performance | Chapter 14. Scaling Multicloud Object Gateway performance The Multicloud Object Gateway (MCG) performance may vary from one environment to another. In some cases, specific applications require faster performance, which can be easily addressed by scaling S3 endpoints. The MCG resource pool is a group of NooBaa daemon containers that provide two types of services enabled by default: Storage service S3 endpoint service S3 endpoint service The S3 endpoint is a service that every Multicloud Object Gateway (MCG) provides by default that handles the heavy lifting data digestion in the MCG. The endpoint service handles the inline data chunking, deduplication, compression, and encryption, and it accepts data placement instructions from the MCG. 14.1. Automatic scaling of MultiCloud Object Gateway endpoints The number of MultiCloud Object Gateway (MCG) endpoints scales automatically when the load on the MCG S3 service increases or decreases. OpenShift Data Foundation clusters are deployed with one active MCG endpoint. Each MCG endpoint pod is configured by default with 1 CPU and 2Gi memory request, with limits matching the request. When the CPU load on the endpoint crosses over an 80% usage threshold for a consistent period of time, a second endpoint is deployed, lowering the load on the first endpoint. When the average CPU load on both endpoints falls below the 80% threshold for a consistent period of time, one of the endpoints is deleted. This feature improves performance and serviceability of the MCG. You can scale the Horizontal Pod Autoscaler (HPA) for noobaa-endpoint using the following oc patch command, for example: The example above sets the minCount to 3 and the maxCount to 10 . 14.2. Increasing CPU and memory for PV pool resources MCG default configuration supports low resource consumption. However, when you need to increase CPU and memory to accommodate specific workloads and to increase MCG performance for the workloads, you can configure the required values for CPU and memory in the OpenShift Web Console. Procedure In the OpenShift Web Console, navigate to Storage Object Storage Backing Store . Select the relevant backing store and click on YAML. Scroll down until you find spec: and update pvPool with CPU and memory. Add a new property of limits and then add cpu and memory. Example reference: Click Save . Verification steps To verify, you can check the resource values of the PV pool pods. | [
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"multiCloudGateway\": {\"endpoints\": {\"minCount\": 3,\"maxCount\": 10}}}}'",
"spec: pvPool: resources: limits: cpu: 1000m memory: 4000Mi requests: cpu: 800m memory: 800Mi storage: 50Gi"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_hybrid_and_multicloud_resources/scaling-multicloud-object-gateway-performance-by-adding-endpoints__rhodf |
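A minimal verification sketch for the last step, assuming the PV pool pods run in the openshift-storage namespace and that their names contain the backing store name (both assumptions, so adjust to the actual environment):
oc -n openshift-storage get pods | grep <backingstore-name>                                          # find the PV pool pods
oc -n openshift-storage get pod <pv-pool-pod-name> -o jsonpath='{.spec.containers[*].resources}'     # print the configured CPU and memory requests and limits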
Chapter 10. ClusterInitService | Chapter 10. ClusterInitService 10.1. GetCAConfig GET /v1/cluster-init/ca-config 10.1.1. Description 10.1.2. Parameters 10.1.3. Return Type V1GetCAConfigResponse 10.1.4. Content Type application/json 10.1.5. Responses Table 10.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetCAConfigResponse 0 An unexpected error response. GooglerpcStatus 10.1.6. Samples 10.1.7. Common object reference 10.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 10.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 10.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 10.1.7.3. V1GetCAConfigResponse Field Name Required Nullable Type Description Format helmValuesBundle byte[] byte 10.2. GetCRSs GET /v1/cluster-init/crs 10.2.1. Description 10.2.2. Parameters 10.2.3. Return Type V1CRSMetasResponse 10.2.4. Content Type application/json 10.2.5. Responses Table 10.2. 
HTTP Response Codes Code Message Datatype 200 A successful response. V1CRSMetasResponse 0 An unexpected error response. GooglerpcStatus 10.2.6. Samples 10.2.7. Common object reference 10.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 10.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 10.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 10.2.7.3. StorageUser User is an object that allows us to track the roles a user is tied to, and how they logged in. Field Name Required Nullable Type Description Format id String authProviderId String attributes List of StorageUserAttribute idpToken String 10.2.7.4. StorageUserAttribute Field Name Required Nullable Type Description Format key String value String 10.2.7.5. V1CRSMeta Field Name Required Nullable Type Description Format id String name String createdAt Date date-time createdBy StorageUser expiresAt Date date-time 10.2.7.6. 
V1CRSMetasResponse Field Name Required Nullable Type Description Format items List of V1CRSMeta 10.3. GenerateCRS POST /v1/cluster-init/crs 10.3.1. Description 10.3.2. Parameters 10.3.2.1. Body Parameter Name Description Required Default Pattern body V1CRSGenRequest X 10.3.3. Return Type V1CRSGenResponse 10.3.4. Content Type application/json 10.3.5. Responses Table 10.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1CRSGenResponse 0 An unexpected error response. GooglerpcStatus 10.3.6. Samples 10.3.7. Common object reference 10.3.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 10.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 10.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 10.3.7.3. StorageUser User is an object that allows us to track the roles a user is tied to, and how they logged in. 
Field Name Required Nullable Type Description Format id String authProviderId String attributes List of StorageUserAttribute idpToken String 10.3.7.4. StorageUserAttribute Field Name Required Nullable Type Description Format key String value String 10.3.7.5. V1CRSGenRequest Field Name Required Nullable Type Description Format name String 10.3.7.6. V1CRSGenResponse Field Name Required Nullable Type Description Format meta V1CRSMeta crs byte[] byte 10.3.7.7. V1CRSMeta Field Name Required Nullable Type Description Format id String name String createdAt Date date-time createdBy StorageUser expiresAt Date date-time 10.4. RevokeCRS PATCH /v1/cluster-init/crs/revoke RevokeCRSBundle deletes cluster registration secrets. 10.4.1. Description 10.4.2. Parameters 10.4.2.1. Body Parameter Name Description Required Default Pattern body V1CRSRevokeRequest X 10.4.3. Return Type V1CRSRevokeResponse 10.4.4. Content Type application/json 10.4.5. Responses Table 10.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1CRSRevokeResponse 0 An unexpected error response. GooglerpcStatus 10.4.6. Samples 10.4.7. Common object reference 10.4.7.1. CRSRevokeResponseCRSRevocationError Field Name Required Nullable Type Description Format id String error String 10.4.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 10.4.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 10.4.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. 
Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 10.4.7.4. V1CRSRevokeRequest Field Name Required Nullable Type Description Format ids List of string 10.4.7.5. V1CRSRevokeResponse Field Name Required Nullable Type Description Format crsRevocationErrors List of CRSRevokeResponseCRSRevocationError revokedIds List of string 10.5. GetInitBundles GET /v1/cluster-init/init-bundles 10.5.1. Description 10.5.2. Parameters 10.5.3. Return Type V1InitBundleMetasResponse 10.5.4. Content Type application/json 10.5.5. Responses Table 10.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1InitBundleMetasResponse 0 An unexpected error response. GooglerpcStatus 10.5.6. Samples 10.5.7. Common object reference 10.5.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 10.5.7.2. InitBundleMetaImpactedCluster Field Name Required Nullable Type Description Format name String id String 10.5.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 10.5.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. 
Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 10.5.7.4. StorageUser User is an object that allows us to track the roles a user is tied to, and how they logged in. Field Name Required Nullable Type Description Format id String authProviderId String attributes List of StorageUserAttribute idpToken String 10.5.7.5. StorageUserAttribute Field Name Required Nullable Type Description Format key String value String 10.5.7.6. V1InitBundleMeta Field Name Required Nullable Type Description Format id String name String impactedClusters List of InitBundleMetaImpactedCluster createdAt Date date-time createdBy StorageUser expiresAt Date date-time 10.5.7.7. V1InitBundleMetasResponse Field Name Required Nullable Type Description Format items List of V1InitBundleMeta 10.6. GenerateInitBundle POST /v1/cluster-init/init-bundles 10.6.1. Description 10.6.2. Parameters 10.6.2.1. Body Parameter Name Description Required Default Pattern body V1InitBundleGenRequest X 10.6.3. Return Type V1InitBundleGenResponse 10.6.4. Content Type application/json 10.6.5. Responses Table 10.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1InitBundleGenResponse 0 An unexpected error response. GooglerpcStatus 10.6.6. Samples 10.6.7. Common object reference 10.6.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 10.6.7.2. InitBundleMetaImpactedCluster Field Name Required Nullable Type Description Format name String id String 10.6.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 10.6.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). 
In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 10.6.7.4. StorageUser User is an object that allows us to track the roles a user is tied to, and how they logged in. Field Name Required Nullable Type Description Format id String authProviderId String attributes List of StorageUserAttribute idpToken String 10.6.7.5. StorageUserAttribute Field Name Required Nullable Type Description Format key String value String 10.6.7.6. V1InitBundleGenRequest Field Name Required Nullable Type Description Format name String 10.6.7.7. V1InitBundleGenResponse Field Name Required Nullable Type Description Format meta V1InitBundleMeta helmValuesBundle byte[] byte kubectlBundle byte[] byte 10.6.7.8. V1InitBundleMeta Field Name Required Nullable Type Description Format id String name String impactedClusters List of InitBundleMetaImpactedCluster createdAt Date date-time createdBy StorageUser expiresAt Date date-time 10.7. RevokeInitBundle PATCH /v1/cluster-init/init-bundles/revoke RevokeInitBundle deletes cluster init bundle. If this operation impacts any cluster then its ID should be included in request. If confirm_impacted_clusters_ids does not match with current impacted clusters then request will fail with error that includes all impacted clusters. 10.7.1. Description 10.7.2. Parameters 10.7.2.1. Body Parameter Name Description Required Default Pattern body V1InitBundleRevokeRequest X 10.7.3. Return Type V1InitBundleRevokeResponse 10.7.4. Content Type application/json 10.7.5. Responses Table 10.7. HTTP Response Codes Code Message Datatype 200 A successful response. V1InitBundleRevokeResponse 0 An unexpected error response. GooglerpcStatus 10.7.6. Samples 10.7.7. Common object reference 10.7.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 10.7.7.2. InitBundleMetaImpactedCluster Field Name Required Nullable Type Description Format name String id String 10.7.7.3. InitBundleRevokeResponseInitBundleRevocationError Field Name Required Nullable Type Description Format id String error String impactedClusters List of InitBundleMetaImpactedCluster 10.7.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by the protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL, and the unpack methods only use the fully qualified type name after the last '/' in the type URL; for example, "foo.bar.com/x/y.z" will yield type name "y.z".

10.7.7.4.1. JSON representation

The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example:

If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded, adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]):

Field Name | Required | Nullable | Type | Description | Format
@type | | | String | A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one "/" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration). The name should be in a canonical form (for example, a leading "." is not accepted). |

In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http, https, or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows:

* If no scheme is provided, https is assumed.
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error.
* Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.)

Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one.

Schemes other than http, https (or the empty scheme) might be used with implementation-specific semantics.

10.7.7.5. V1InitBundleRevokeRequest

Field Name | Required | Nullable | Type | Description | Format
ids | | | List of string | |
confirmImpactedClustersIds | | | List of string | |

10.7.7.6. V1InitBundleRevokeResponse

Field Name | Required | Nullable | Type | Description | Format
initBundleRevocationErrors | | | List of InitBundleRevokeResponseInitBundleRevocationError | |
initBundleRevokedIds | | | List of string | |

| [
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/clusterinitservice |
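As a rough illustration of how the GenerateInitBundle and RevokeInitBundle endpoints documented above might be called, here is a minimal Python sketch using the requests library. The endpoint paths, request fields, and response fields come from the tables above; the Central hostname, the API token environment variables, the timeout values, and the output file name are assumptions for the example, not part of the API reference.

```python
# Hypothetical client sketch for the ClusterInitService endpoints documented above.
# ROX_ENDPOINT and ROX_API_TOKEN are assumed environment variables, not part of the API.
import base64
import os

import requests

CENTRAL = os.environ["ROX_ENDPOINT"]   # e.g. "https://central.example.com" (assumption)
TOKEN = os.environ["ROX_API_TOKEN"]    # API token with the required permissions (assumption)
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def generate_init_bundle(name):
    """POST /v1/cluster-init/init-bundles with a V1InitBundleGenRequest body."""
    resp = requests.post(
        f"{CENTRAL}/v1/cluster-init/init-bundles",
        json={"name": name},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    # V1InitBundleGenResponse: meta, helmValuesBundle, kubectlBundle
    return resp.json()


def revoke_init_bundles(ids, confirm_impacted_cluster_ids=()):
    """PATCH /v1/cluster-init/init-bundles/revoke with a V1InitBundleRevokeRequest body."""
    resp = requests.patch(
        f"{CENTRAL}/v1/cluster-init/init-bundles/revoke",
        json={
            "ids": list(ids),
            "confirmImpactedClustersIds": list(confirm_impacted_cluster_ids),
        },
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    # V1InitBundleRevokeResponse: initBundleRevocationErrors, initBundleRevokedIds
    return resp.json()


if __name__ == "__main__":
    bundle = generate_init_bundle("my-new-secured-cluster")
    # helmValuesBundle and kubectlBundle have format "byte", which the protobuf JSON
    # mapping encodes as base64, so decode before writing the bundle to a file.
    with open("init-bundle-values.yaml", "wb") as f:
        f.write(base64.b64decode(bundle["helmValuesBundle"]))
    print("generated bundle id:", bundle["meta"]["id"])
```

The revoke call mirrors the confirmation flow described in section 10.7: if the IDs passed in confirmImpactedClustersIds do not match the currently impacted clusters, the response lists the affected clusters in initBundleRevocationErrors instead of revoking the bundle.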
Chapter 15. Types of assets | Chapter 15. Types of assets

Anything that can be versioned in the Business Central repository is an asset. A project can contain rules, packages, business processes, decision tables, fact models, domain specific languages (DSLs), or any other assets that are specific to your project's requirements. The following image shows the available assets in Red Hat Process Automation Manager 7.13.

Note: Case Management (Preview) and Case Definition asset types are only available in case projects.

The following sections describe each asset type in Red Hat Process Automation Manager 7.13.

Business Process
Business processes are diagrams that describe the steps necessary to achieve business goals.

Case Management (Preview)
Case management is an extension of Business Process Management (BPM) that enables you to manage adaptable business processes. Case management provides problem resolution for non-repeatable, unpredictable processes, as opposed to the efficiency-oriented approach of BPM for routine, predictable tasks. It manages one-off situations when the process cannot be predicted in advance.

Important: The business process application example includes features that are Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and are not recommended for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

Case Definition
Cases are designed using the Case definition process designer in Business Central. The case design is the basis of case management and sets out the specific goals and tasks for each case. The case flow can be modified dynamically during run time by adding dynamic tasks or processes.

Data Object
Data objects are the building blocks for the rule assets that you create. Data objects are custom data types implemented as Java objects in specified packages of your project. For example, you might create a Person object with the data fields Name, Address, and Date of Birth to specify personal details for loan application rules. These custom data types determine what data your assets and your decision service are based on.

Decision Table (Spreadsheet)
Decision tables are collections of rules stored either in a spreadsheet or in the Red Hat Decision Manager user interface as guided decision tables. After you define your rules in an external XLS or XLSX file, you can upload the file as a decision table in your project in Business Central.

Important: You should typically upload only one spreadsheet of decision tables, containing all necessary RuleTable definitions, per rule package in Business Central. You can upload separate decision table spreadsheets for separate packages, but uploading multiple spreadsheets in the same package can cause compilation errors from conflicting RuleSet or RuleTable attributes and is therefore not recommended.

DMN
Decision Model and Notation (DMN) creates a standardized bridge for the gap between business decision design and decision implementation. You can use the DMN designer in Business Central to design DMN decision requirements diagrams (DRDs) and define decision logic for a complete and functional DMN decision model.

DRL file
A rule file is typically a file with a .drl extension.
In a DRL file you can have multiple rules, queries, and functions, as well as some resource declarations like imports, globals, and attributes that are assigned and used by your rules and queries. However, you are also able to spread your rules across multiple rule files (in that case, the extension .rule is suggested, but not required); spreading rules across files can help with managing large numbers of rules. A DRL file is simply a text file.

DSL definition
Domain Specific Languages (DSLs) are a way of creating a rule language that is dedicated to your problem domains. A set of DSL definitions consists of transformations from DSL "sentences" to DRL constructs, which lets you use all the underlying rule language and decision engine features.

Enumeration
Data enumerations are an optional asset type that can be configured to provide drop-down lists for the guided designer. They are stored and edited just like any other asset, and apply to the package that they belong to.

Form
Forms are used for collecting user data for business processes. Business Central provides the option to automatically generate forms, which can then be edited to meet specific business process requirements.

Global Variable(s)
Global variables are used to make application objects available to the rules. Typically, they are used to provide data or services that the rules use, especially application services used in rule consequences, and to return data from the rules, like logs or values added in rule consequences, or for the rules to interact with the application, for example by doing callbacks.

Guided Decision Table
Decision tables are collections of rules stored either in a spreadsheet or in the Red Hat Decision Manager user interface as guided decision tables.

Guided Decision Table Graph
A Guided Decision Table Graph is a collection of related guided decision tables that are displayed within a single designer. You can use this designer to better visualize and work with various related decision tables in one location. Additionally, when a condition or an action in one table uses the same data type as a condition or an action in another table, the tables will be physically linked with a line in the table graph designer. For example, if one decision table determines a loan application rate and another table uses the application rate to determine some other action, then the two decision tables are linked in a guided decision table graph.

Guided Rule
Rules provide the logic for the decision engine to execute against. A rule includes a name, attributes, a when statement on the left hand side of the rule, and a then statement on the right hand side of the rule.

Guided Rule Template
Guided rule templates provide a reusable rule structure for multiple rules that are compiled into Drools Rule Language (DRL) and form the core of the decision service for your project.

Package
All assets are contained in packages in Business Central. A package is a folder for rules and also serves as a "namespace".

Solver configuration
A Solver configuration is created by the Solver designer and can be run in the Execution Solver or in plain Java code after the KJAR is deployed. You can edit and create Solver configurations in Business Central.

Test Scenario
Test scenarios in Red Hat Process Automation Manager enable you to validate the functionality of rules, models, and events before deploying them into production. A test scenario uses data for conditions that resemble an instance of your fact or project model.
This data is matched against a given set of rules, and if the expected results match the actual results, the test is successful. If the expected results do not match the actual results, the test fails.

Test Scenario (Legacy)
Red Hat Process Automation Manager 7.13 includes support for the legacy Test Scenario because the default Test Scenario asset is still in development.

Work Item definition
A work item definition defines how a custom task is presented, for example, the task name, icon, parameters, and similar attributes. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_and_managing_red_hat_process_automation_manager_services/assets_types_ref
Chapter 4. CSIStorageCapacity [storage.k8s.io/v1] | Chapter 4. CSIStorageCapacity [storage.k8s.io/v1] Description CSIStorageCapacity stores the result of one CSI GetCapacity call. For a given StorageClass, this describes the available capacity in a particular topology segment. This can be used when considering where to instantiate new PersistentVolumes. For example this can express things like: - StorageClass "standard" has "1234 GiB" available in "topology.kubernetes.io/zone=us-east1" - StorageClass "localssd" has "10 GiB" available in "kubernetes.io/hostname=knode-abc123" The following three cases all imply that no capacity is available for a certain combination: - no object exists with suitable topology and storage class name - such an object exists, but the capacity is unset - such an object exists, but the capacity is zero The producer of these objects can decide which approach is more suitable. They are consumed by the kube-scheduler when a CSI driver opts into capacity-aware scheduling with CSIDriverSpec.StorageCapacity. The scheduler compares the MaximumVolumeSize against the requested size of pending volumes to filter out unsuitable nodes. If MaximumVolumeSize is unset, it falls back to a comparison against the less precise Capacity. If that is also unset, the scheduler assumes that capacity is insufficient and tries some other node. Type object Required storageClassName 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources capacity Quantity Capacity is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the fields. The semantic is currently (CSI spec 1.2) defined as: The available capacity, in bytes, of the storage that can be used to provision volumes. If not set, that information is currently unavailable. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds maximumVolumeSize Quantity MaximumVolumeSize is the value reported by the CSI driver in its GetCapacityResponse for a GetCapacityRequest with topology and parameters that match the fields. This is defined since CSI spec 1.4.0 as the largest size that may be used in a CreateVolumeRequest.capacity_range.required_bytes field to create a volume with the same parameters as those in GetCapacityRequest. The corresponding value in the Kubernetes API is ResourceRequirements.Requests in a volume claim. metadata ObjectMeta Standard object's metadata. The name has no particular meaning. It must be be a DNS subdomain (dots allowed, 253 characters). To ensure that there are no conflicts with other CSI drivers on the cluster, the recommendation is to use csisc-<uuid>, a generated name, or a reverse-domain name which ends with the unique CSI driver name. Objects are namespaced. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata nodeTopology LabelSelector NodeTopology defines which nodes have access to the storage for which capacity was reported. 
If not set, the storage is not accessible from any node in the cluster. If empty, the storage is accessible from all nodes. This field is immutable. storageClassName string The name of the StorageClass that the reported capacity applies to. It must meet the same requirements as the name of a StorageClass object (non-empty, DNS subdomain). If that object no longer exists, the CSIStorageCapacity object is obsolete and should be removed by its creator. This field is immutable. 4.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csistoragecapacities GET : list or watch objects of kind CSIStorageCapacity /apis/storage.k8s.io/v1/watch/csistoragecapacities GET : watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities DELETE : delete collection of CSIStorageCapacity GET : list or watch objects of kind CSIStorageCapacity POST : create a CSIStorageCapacity /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities GET : watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities/{name} DELETE : delete a CSIStorageCapacity GET : read the specified CSIStorageCapacity PATCH : partially update the specified CSIStorageCapacity PUT : replace the specified CSIStorageCapacity /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities/{name} GET : watch changes to an object of kind CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 4.2.1. /apis/storage.k8s.io/v1/csistoragecapacities Table 4.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind CSIStorageCapacity Table 4.2. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacityList schema 401 - Unauthorized Empty 4.2.2. /apis/storage.k8s.io/v1/watch/csistoragecapacities Table 4.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. Table 4.4. 
HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.3. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities Table 4.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CSIStorageCapacity Table 4.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 4.8. Body parameters Parameter Type Description body DeleteOptions schema Table 4.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSIStorageCapacity Table 4.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.11. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacityList schema 401 - Unauthorized Empty HTTP method POST Description create a CSIStorageCapacity Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body CSIStorageCapacity schema Table 4.14. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 202 - Accepted CSIStorageCapacity schema 401 - Unauthorized Empty 4.2.4. /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities Table 4.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 4.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead. Table 4.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 4.2.5. /apis/storage.k8s.io/v1/namespaces/{namespace}/csistoragecapacities/{name} Table 4.18. Global path parameters Parameter Type Description name string name of the CSIStorageCapacity namespace string object name and auth scope, such as for teams and projects Table 4.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CSIStorageCapacity Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.21. Body parameters Parameter Type Description body DeleteOptions schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSIStorageCapacity Table 4.23. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSIStorageCapacity Table 4.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 4.25. Body parameters Parameter Type Description body Patch schema Table 4.26. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSIStorageCapacity Table 4.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.28. Body parameters Parameter Type Description body CSIStorageCapacity schema Table 4.29. HTTP responses HTTP code Reponse body 200 - OK CSIStorageCapacity schema 201 - Created CSIStorageCapacity schema 401 - Unauthorized Empty 4.2.6. /apis/storage.k8s.io/v1/watch/namespaces/{namespace}/csistoragecapacities/{name} Table 4.30. Global path parameters Parameter Type Description name string name of the CSIStorageCapacity namespace string object name and auth scope, such as for teams and projects Table 4.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CSIStorageCapacity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 4.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/storage_apis/csistoragecapacity-storage-k8s-io-v1 |
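As a rough illustration of how the query parameters documented above fit together, the following curl sketch replaces a CSIStorageCapacity object through the non-watch resource path from which the watch path in this section is derived. The API server address, bearer token, namespace, object name, and body.json file are placeholders rather than values from this reference; dryRun=All and fieldValidation=Strict correspond to the values described in Table 4.27.

# Dry-run replace of a CSIStorageCapacity object with strict field validation.
# <api-server>, <token>, <namespace>, <name>, and body.json are assumed placeholders.
curl -X PUT \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  --data @body.json \
  "https://<api-server>/apis/storage.k8s.io/v1/namespaces/<namespace>/csistoragecapacities/<name>?dryRun=All&fieldValidation=Strict"

A 200 response returns the CSIStorageCapacity schema listed in Table 4.29; because dryRun=All is set, nothing is persisted on the server.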
8.64. gcc-libraries | 8.64. gcc-libraries 8.64.1. RHBA-2014:1438 - gcc-libraries bug fix and enhancement update Updated gcc-libraries packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The gcc-libraries packages contain various GNU Compiler Collection (GCC) runtime libraries, such as libatomic and libitm. Note The gcc-libraries packages have been upgraded to upstream version 4.9.0, which provides a number of bug fixes and enhancements over the previous version to match the features in Red Hat Developer Toolset 3.0. Among others, this update adds the libcilkrts library to gcc-libraries. (BZ# 1062230 , BZ# 1097800 ) Users of gcc-libraries are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/gcc-libraries |
5.9. augeas | 5.9. augeas 5.9.1. RHBA-2012:0967 - augeas bug fix and enhancement update Updated augeas packages that fix three bugs and add two enhancements are now available for Red Hat Enterprise Linux 6. Augeas is a configuration editing tool. Augeas parses configuration files in their native formats and transforms them into a tree. Configuration changes are made by manipulating this tree and saving it back into native configuration files. Bug Fixes BZ# 759311 Previously, the "--autosave" option did not work correctly when using Augeas in batch mode, which caused that configuration changes were not saved. As a consequence, configuration changes could be saved only in interactive mode. This update ensures that the "--autosave" option functions in batch mode as expected. BZ# 781690 Prior to this update, when parsing GRUB configuration files, Augeas did not parse the "--encrypted" option of the "password" command correctly. Instead, it parsed the "--encrypted" part as the password, and the password hash as a second "menu.lst" filename. This update ensures that the "--encrypted" option of the password command is parsed correctly when parsing GRUB configuration files. BZ# 820864 Previously, Augeas was not able to parse the /etc/fstab file containing mount options with an equals sign but no value. This update fixes the fstab lens so that it can handle such mount options. As a result, Augeas can now parse an /etc/fstab file containing mount options with an equals sign but no value correctly. Enhancements BZ# 628507 Previously, the finite-automata-DOT graph tool (fadot) did not support the -h option. Consequently, when fadot was launched with the -h option the "Unknown option" message was displayed. This update adds support for the -h option and ensures that a help message is displayed when fadot is launched with the option. BZ# 808662 Previously, Augeas did not have a lens to parse the /etc/mdadm.conf file. Consequently, the tool for conversion of physical servers to virtual guests, Virt-P2V, could not convert physical hosts on MD devices. This update adds a new lens to parse the /etc/mdadm.conf file, enabling Virt-P2V to convert physical hosts on MD devices as expected. All users of Augeas are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/augeas |
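To make the batch-mode fix described in BZ# 759311 more concrete, the following is a minimal sketch of running augtool non-interactively with automatic saving. The configuration path and value are illustrative assumptions only and do not come from the advisory; adapt them to the file you actually manage with Augeas.

# Pipe a single command into augtool in batch mode; --autosave (-s) writes the change
# back to the native configuration file before augtool exits.
echo "set /files/etc/ssh/sshd_config/PermitRootLogin no" | augtool --autosave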
Chapter 4. External storage services | Chapter 4. External storage services Red Hat OpenShift Data Foundation can use IBM FlashSystems or make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare Metal Red Hat OpenStack platform (Technology preview) The OpenShift Data Foundation operators create and manage services to satisfy persistent volume and object bucket claims against external services. An external cluster can serve Block, File, and Object storage classes for applications running on OpenShift Container Platform. External clusters are not deployed or managed by operators. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/planning_your_deployment/external-storage-services_rhodf |
Chapter 1. Red Hat OpenShift support for Windows Containers overview | Chapter 1. Red Hat OpenShift support for Windows Containers overview Red Hat OpenShift support for Windows Containers is a feature providing the ability to run Windows compute nodes in an OpenShift Container Platform cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With a Red Hat subscription, you can get support for running Windows workloads in OpenShift Container Platform. Windows instances deployed by the WMCO are configured with the containerd container runtime. For more information, see the release notes . You can add Windows nodes either by creating a compute machine set or by specifying existing Bring-Your-Own-Host (BYOH) Windows instances through a configuration map . Note Compute machine sets are not supported for bare metal or provider agnostic clusters. For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads . You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads . You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms. You can schedule Windows workloads to Windows compute nodes. You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates. You can remove a Windows node by deleting a specific machine. You can use Bring-Your-Own-Host (BYOH) Windows instances to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users who are looking to mitigate major disruptions in the event that a Windows server goes offline. You can use BYOH Windows instances as nodes on OpenShift Container Platform 4.8 and later versions. You can disable Windows container workloads by performing the following: Uninstalling the Windows Machine Config Operator Deleting the Windows Machine Config Operator namespace | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/windows_container_support_for_openshift/windows-container-overview |
Chapter 10. Set up GitHub build trigger tags | Chapter 10. Set up GitHub build trigger tags Red Hat Quay supports using GitHub or GitHub Enterprise as a trigger to building images. If you have not yet done so, go ahead and enable build support in Red Hat Quay . 10.1. Understanding tag naming for build triggers Prior to Red Hat Quay 3.3, how images created from build triggers were named was limited. Images built by build triggers were named: With the branch or tag whose change invoked the trigger With a latest tag for images that used the default branch As of Red Hat Quay 3.3 and later, you have more flexibility in how you set image tags. The first thing you can do is enter custom tags, to have any string of characters assigned as a tag for each built image. However, as an alternative, you could use the following tag templates to tag images with information from each commit: ${commit_info.short_sha} : The commit's short SHA ${commit_info.date} : The timestamp for the commit ${commit_info.author} : The author from the commit ${commit_info.committer} : The committer of the commit ${parsed_ref.branch} : The branch name The following procedure describes how you set up tagging for build triggers. 10.2. Setting tag names for build triggers Follow these steps to configure custom tags for build triggers: From the repository view, select the Builds icon from the left navigation. Select the Create Build Trigger menu, and select the type of repository push you want (GitHub, Bitbucket, GitLab, or Custom Git repository push). For this example, GitHub Repository Push is chosen, as illustrated in the following figure. When the Setup Build Trigger page appears, select the repository and namespace in which you want the trigger set up. Under Configure Trigger, select either Trigger for all branches and tags or Trigger only on branches and tags matching a regular expression . Then select Continue. The Configure Tagging section appears, as shown in the following figure: Scroll down to Configure Tagging and select from the following options: Tag manifest with the branch or tag name : Check this box to use the name of the branch or tag in which the commit occurred as the tag used on the image. This is enabled by default. Add latest tag if on default branch : Check this box to use the latest tag for the image if it is on the default branch for the repository. This is enabled by default. Add custom tagging templates : Enter a custom tag or a template into the Enter a tag template box. There are multiple tag templates you can enter here, as described earlier in this section. They include ways of using short SHA, timestamps, author name, committer, and branch name from the commit as tags. Select Continue. You are prompted to select the directory build context for the Docker build. The build context directory identifies the location of the directory containing the Dockerfile, along with other files needed when the build is triggered. Enter "/" if the Dockerfile is in the root of the git repository. Select Continue. You are prompted to add an optional Robot Account. Do this if you want to pull a private base image during the build process. The robot account would need access to the build. Select Continue to complete the setup of the build trigger. If you were to return to the Repository Builds page for the repository, the build triggers you set up will be listed under the Build Triggers heading. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/use_red_hat_quay/github-build-triggers |
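To show how the template variables above can be combined, a single custom tagging template such as the one below would tag each built image with its branch name and short commit SHA; the specific combination is only an example, not a recommended convention:

${parsed_ref.branch}-${commit_info.short_sha}

With this hypothetical template, a push to a branch named master whose commit short SHA is abc1234 would produce an image tagged master-abc1234, in addition to any branch, tag, or latest tags that remain enabled in the Configure Tagging options.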
Chapter 1. GFS Overview | Chapter 1. GFS Overview Red Hat GFS is a cluster file system that is available with Red Hat Cluster Suite. Red Hat GFS nodes are configured and managed with Red Hat Cluster Suite configuration and management tools. Red Hat GFS provides data sharing among GFS nodes in a Red Hat cluster. GFS provides a single, consistent view of the file-system name space across the GFS nodes in a Red Hat cluster. GFS allows applications to install and run without much knowledge of the underlying storage infrastructure. GFS is fully compliant with the IEEE POSIX interface, allowing applications to perform file operations as if they were running on a local file system. Also, GFS provides features that are typically required in enterprise environments, such as quotas, multiple journals, and multipath support. GFS provides a versatile method of networking your storage according to the performance, scalability, and economic needs of your storage environment. This chapter provides some very basic, abbreviated information as background to help you understand GFS. It contains the following sections: Section 1.1, "Performance, Scalability, and Economy" Section 1.2, "GFS Functions" Section 1.3, "GFS Software Subsystems" Section 1.4, "Before Setting Up GFS" 1.1. Performance, Scalability, and Economy You can deploy GFS in a variety of configurations to suit your needs for performance, scalability, and economy. For superior performance and scalability, you can deploy GFS in a cluster that is connected directly to a SAN. For more economical needs, you can deploy GFS in a cluster that is connected to a LAN with servers that use GNBD (Global Network Block Device). The following sections provide examples of how GFS can be deployed to suit your needs for performance, scalability, and economy: Section 1.1.1, "Superior Performance and Scalability" Section 1.1.2, "Performance, Scalability, Moderate Price" Section 1.1.3, "Economy and Performance" Note The deployment examples in this chapter reflect basic configurations; your needs might require a combination of configurations shown in the examples. 1.1.1. Superior Performance and Scalability You can obtain the highest shared-file performance when applications access storage directly. The GFS SAN configuration in Figure 1.1, "GFS with a SAN" provides superior file performance for shared files and file systems. Linux applications run directly on GFS nodes. Without file protocols or storage servers to slow data access, performance is similar to individual Linux servers with directly connected storage; yet, each GFS application node has equal access to all data files. GFS supports up to 16 GFS nodes. Figure 1.1. GFS with a SAN 1.1.2. Performance, Scalability, Moderate Price Multiple Linux client applications on a LAN can share the same SAN-based data as shown in Figure 1.2, "GFS and GNBD with a SAN" . SAN block storage is presented to network clients as block storage devices by GNBD servers. From the perspective of a client application, storage is accessed as if it were directly attached to the server in which the application is running. Stored data is actually on the SAN. Storage devices and data can be equally shared by network client applications. File locking and sharing functions are handled by GFS for each network client. Note Clients implementing ext2 and ext3 file systems can be configured to access their own dedicated slice of SAN storage. Figure 1.2. GFS and GNBD with a SAN 1.1.3. 
Economy and Performance Figure 1.3, "GFS and GNBD with Directly Connected Storage" shows how Linux client applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices. Client data files and file systems can be shared with GFS on each client. Application failover can be fully automated with Red Hat Cluster Suite. Figure 1.3. GFS and GNBD with Directly Connected Storage | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/ch-overview-GFS |
Managing clusters | Managing clusters OpenShift Cluster Manager 1-latest Using Red Hat OpenShift Cluster Manager to work with your OpenShift clusters Red Hat Customer Content Services | [
"https://console.redhat.com/openshift/",
"oc login -u=<username> -p=<password> --server=<your-openshift-server> --insecure-skip-tls-verify",
"oc get nodes -o wide",
"oc describe nodes | egrep 'Name:|InternalIP:|cpu:'",
"oc get clusterversion <version> -o jsonpath='{.spec.clusterID}{\"\\n\"}'",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret.txt",
"oc create secret generic pull-secret -n openshift-config --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson=/path/to/downloaded/pull-secret",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret.txt",
"oc create secret generic pull-secret -n openshift-config --type=kubernetes.io/dockerconfigjson --from-file=.dockerconfigjson=/path/to/downloaded/pull-secret",
"oc get pods -n openshift-monitoring -l app.kubernetes.io/name=telemeter-client",
"oc delete pod -n openshift-monitoring -l app.kubernetes.io/name=telemeter-client oc delete pod -n openshift-insights -l app=insights-operator",
"oc get secret pull-secret -n openshift-config -o jsonpath='{.data.\\.dockerconfigjson}' | base64 -d | jq"
] | https://docs.redhat.com/en/documentation/openshift_cluster_manager/1-latest/html-single/managing_clusters/index |
Chapter 6. Custom image builds with Buildah | Chapter 6. Custom image builds with Buildah With OpenShift Container Platform 4.15, a docker socket will not be present on the host nodes. This means the mount docker socket option of a custom build is not guaranteed to provide an accessible docker socket for use within a custom build image. If you require this capability in order to build and push images, add the Buildah tool to your custom build image and use it to build and push the image within your custom build logic. The following is an example of how to run custom builds with Buildah. Note Using the custom build strategy requires permissions that normal users do not have by default because it allows the user to execute arbitrary code inside a privileged container running on the cluster. This level of access can be used to compromise the cluster and therefore should be granted only to users who are trusted with administrative privileges on the cluster. 6.1. Prerequisites Review how to grant custom build permissions . 6.2. Creating custom build artifacts You must create the image you want to use as your custom build image. Procedure Starting with an empty directory, create a file named Dockerfile with the following content: FROM registry.redhat.io/rhel8/buildah # In this example, `/tmp/build` contains the inputs that build when this # custom builder image is run. Normally the custom builder image fetches # this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh # /usr/bin/build.sh contains the actual custom build logic that will be run when # this custom builder image is run. ENTRYPOINT ["/usr/bin/build.sh"] In the same directory, create a file named dockerfile.sample . This file is included in the custom build image and defines the image that is produced by the custom build: FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build In the same directory, create a file named build.sh . This file contains the logic that is run when the custom build runs: #!/bin/sh # Note that in this case the build inputs are part of the custom builder image, but normally this # is retrieved from an external source. cd /tmp/input # OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom # build framework TAG="USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}" # performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . # buildah requires a slight modification to the push secret provided by the service # account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo "{ \"auths\": " ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo "}") > /tmp/.dockercfg # push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG} 6.3. Build custom builder image You can use OpenShift Container Platform to build and push custom builder images to use in a custom strategy. Prerequisites Define all the inputs that will go into creating your new custom builder image. Procedure Define a BuildConfig object that will build your custom builder image: USD oc new-build --binary --strategy=docker --name custom-builder-image From the directory in which you created your custom build image, run the build: USD oc start-build custom-builder-image --from-dir .
-F After the build completes, your new custom builder image is available in your project in an image stream tag that is named custom-builder-image:latest . 6.4. Use custom builder image You can define a BuildConfig object that uses the custom strategy in conjunction with your custom builder image to execute your custom build logic. Prerequisites Define all the required inputs for new custom builder image. Build your custom builder image. Procedure Create a file named buildconfig.yaml . This file defines the BuildConfig object that is created in your project and executed: kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest 1 Specify your project name. Create the BuildConfig object by entering the following command: USD oc create -f buildconfig.yaml Create a file named imagestream.yaml . This file defines the image stream to which the build will push the image: kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {} Create the image stream by entering the following command: USD oc create -f imagestream.yaml Run your custom build by entering the following command: USD oc start-build sample-custom-build -F When the build runs, it launches a pod running the custom builder image that was built earlier. The pod runs the build.sh logic that is defined as the entrypoint for the custom builder image. The build.sh logic invokes Buildah to build the dockerfile.sample that was embedded in the custom builder image, and then uses Buildah to push the new image to the sample-custom image stream . | [
"FROM registry.redhat.io/rhel8/buildah In this example, `/tmp/build` contains the inputs that build when this custom builder image is run. Normally the custom builder image fetches this content from some location at build time, by using git clone as an example. ADD dockerfile.sample /tmp/input/Dockerfile ADD build.sh /usr/bin RUN chmod a+x /usr/bin/build.sh /usr/bin/build.sh contains the actual custom build logic that will be run when this custom builder image is run. ENTRYPOINT [\"/usr/bin/build.sh\"]",
"FROM registry.access.redhat.com/ubi9/ubi RUN touch /tmp/build",
"#!/bin/sh Note that in this case the build inputs are part of the custom builder image, but normally this is retrieved from an external source. cd /tmp/input OUTPUT_REGISTRY and OUTPUT_IMAGE are env variables provided by the custom build framework TAG=\"USD{OUTPUT_REGISTRY}/USD{OUTPUT_IMAGE}\" performs the build of the new image defined by dockerfile.sample buildah --storage-driver vfs bud --isolation chroot -t USD{TAG} . buildah requires a slight modification to the push secret provided by the service account to use it for pushing the image cp /var/run/secrets/openshift.io/push/.dockercfg /tmp (echo \"{ \\\"auths\\\": \" ; cat /var/run/secrets/openshift.io/push/.dockercfg ; echo \"}\") > /tmp/.dockercfg push the new image to the target for the build buildah --storage-driver vfs push --tls-verify=false --authfile /tmp/.dockercfg USD{TAG}",
"oc new-build --binary --strategy=docker --name custom-builder-image",
"oc start-build custom-builder-image --from-dir . -F",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: sample-custom-build labels: name: sample-custom-build annotations: template.alpha.openshift.io/wait-for-ready: 'true' spec: strategy: type: Custom customStrategy: forcePull: true from: kind: ImageStreamTag name: custom-builder-image:latest namespace: <yourproject> 1 output: to: kind: ImageStreamTag name: sample-custom:latest",
"oc create -f buildconfig.yaml",
"kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: sample-custom spec: {}",
"oc create -f imagestream.yaml",
"oc start-build sample-custom-build -F"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/builds_using_buildconfig/custom-builds-buildah |
Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform | Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform Red Hat build of Quarkus 3.15 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/deploying_your_red_hat_build_of_quarkus_applications_to_openshift_container_platform/index |
Appendix B. Working with certmonger | Appendix B. Working with certmonger Part of managing machine authentication is managing machine certificates. On clients, IdM manages the certificate lifecycle with the certmonger service, which works together with the certificate authority (CA) provided by IdM. The certmonger daemon and its command-line clients simplify the process of generating public/private key pairs, creating certificate requests, and submitting requests to the CA for signing. As part of managing certificates, the certmonger daemon monitors certificates for expiration and can renew certificates that are about to expire. The certificates that certmonger monitors are tracked in files stored in a configurable directory. The default location is /var/lib/certmonger/requests . certmonger uses the IdM getcert command to manage all certificates. As covered in Section 3.4, "Examples: Installing with Different CA Configurations" , an IdM server can be configured to use different types of certificate authorities. The most common (and recommended) configuration is to use a full CA server, but it is also possible to use a much more limited, self-signed CA. The exact getcert command used by certmonger to communicate with the IdM backend depends on which type of CA is used. The ipa-getcert command is used with a full CA, while the selfsign-getcert command is used with a self-signed CA. Note Because of general security issues, self-signed certificates are not typically used in production, but can be used for development and testing. B.1. Requesting a Certificate with certmonger With the IdM CA, certmonger uses the ipa-getcert command. Certificates and keys are stored locally in plaintext files ( .pem ) or in an NSS database, identified by the certificate nickname. When requesting a certificate, then, the request should identify the location where the certificate will be stored and the nickname of the certificate. For example: The /etc/pki/nssdb file is the global NSS database, and Server-Cert is the nickname of this certificate. The certificate nickname must be unique within this database. When requesting a certificate to be used with an IdM service, the -K option is required to specify the service principal. Otherwise, certmonger assumes the certificate is for a host. The -N option must specify the certificate subject DN, and the subject base DN must match the base DN for the IdM server, or the request is rejected. Example B.1. Using certmonger for a Service The options vary depending on whether you are using a self-signed certificate ( selfsign-getcert ) and the desired configuration for the final certificate, as well as other settings. In Example B.1, "Using certmonger for a Service" , these are common options: The -r option will automatically renew the certificate if the key pair already exists. This is used by default. The -f option stores the certificate in the given file. The -k option either stores the key in the given file or, if the key file already exists, uses the key in the file. The -N option gives the subject name. The -D option gives the DNS domain name. The -U option sets the extended key usage flag. | [
"ipa-getcert request -d /etc/pki/nssdb -n Server-Cert",
"ipa-getcert request -d /etc/httpd/alias -n Server-Cert -K HTTP/client1.example.com -N 'CN=client1.example.com,O=EXAMPLE.COM'",
"ipa-getcert request -r -f /etc/httpd/conf/ssl.crt/server.crt -k /etc/httpd/conf/ssl.key/server.key -N CN=`hostname --fqdn` -D `hostname` -U id-kp-serverAuth"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/certmongerx |
11.7. Choose a Boot Method | 11.7. Choose a Boot Method Installing from a DVD requires that you have purchased a Red Hat Enterprise Linux product, you have a Red Hat Enterprise Linux 6.9 DVD, and you have a DVD drive on a system that supports booting from it. Refer to Chapter 2, Making Media for instructions to make an installation DVD. Other than booting from an installation DVD, you can also boot the Red Hat Enterprise Linux installation program from minimal boot media in the form of a bootable CD. After you boot the system with a boot CD, you complete the installation from a different installation source, such as a local hard drive or a location on a network. Refer to Section 2.2, "Making Minimal Boot Media" for instructions on making boot CDs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch11s07 |
Chapter 3. Adding the Red Hat Integration - AMQ Interconnect Operator | Chapter 3. Adding the Red Hat Integration - AMQ Interconnect Operator The Red Hat Integration - AMQ Interconnect Operator creates and manages AMQ Interconnect router networks in OpenShift Container Platform. This Operator must be installed separately for each project that uses it. The options for installing the Operator are: Section 3.1, "Installing the Operator using the CLI" Section 3.2, "Installing the Operator using the Operator Lifecycle Manager" Note Installing an Operator requires administrator-level privileges for your OpenShift cluster. 3.1. Installing the Operator using the CLI The procedures in this section show how to use the OpenShift command-line interface (CLI) to install and deploy the latest version of the Red Hat Integration - AMQ Interconnect Operator in a given OpenShift project. 3.1.1. Getting the Operator code This procedure shows how to access and prepare the code you need to install the latest version of the Operator for AMQ Interconnect 1.10. Procedure In your web browser, navigate to the Software Downloads page for AMQ Interconnect releases . Ensure that the value of the Version drop-down list is set to 1.10.7 and the Releases tab is selected. Next to AMQ Interconnect 1.10.7 Operator Installation and Example Files , click Download . Download of the amq-interconnect-operator-1.10.7-ocp-install-examples.zip compressed archive automatically begins. When the download has completed, move the archive to your chosen installation directory. The following example moves the archive to a directory called ~/router/operator . USD mkdir ~/router USD mv amq-interconnect-operator-1.10.7-ocp-install-examples.zip ~/router In your chosen installation directory, extract the contents of the archive. For example: USD cd ~/router USD unzip amq-interconnect-operator-1.10.7-ocp-install-examples.zip Switch to the directory that was created when you extracted the archive. For example: USD cd operator Log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Specify the project in which you want to install the Operator. You can create a new project or switch to an existing one. Create a new project: USD oc new-project <project-name> Or, switch to an existing project: USD oc project <project-name> Create a service account to use with the Operator. USD oc create -f deploy/service_account.yaml Create a role for the Operator. USD oc create -f deploy/role.yaml Create a role binding for the Operator. The role binding binds the previously-created service account to the Operator role, based on the names you specified. USD oc create -f deploy/role_binding.yaml In the procedure that follows, you deploy the Operator in your project. 3.1.2. Deploying the Operator using the CLI The procedure in this section shows how to use the OpenShift command-line interface (CLI) to deploy the latest version of the Operator for AMQ Interconnect 1.10 in your OpenShift project. Prerequisites You must have already prepared your OpenShift project for the Operator deployment. See Section 3.1.1, "Getting the Operator code" . Before you can follow the procedure in this section, you must first complete the steps described in Red Hat Container Registry Authentication . Procedure In the OpenShift command-line interface (CLI), log in to OpenShift Container Platform as a cluster administrator. For example: USD oc login -u system:admin Switch to the project that you previously prepared for the Operator deployment.
For example: USD oc project <project-name> Switch to the directory that was created when you previously extracted the Operator installation archive. For example: USD cd ~/router/operator/qdr-operator-1.10-ocp-install-examples Deploy the CRD that is included with the Operator. You must install the CRD in your OpenShift cluster before deploying and starting the Operator. USD oc create -f deploy/crds/interconnectedcloud_v1alpha1_interconnect_crd.yaml Link the pull secret associated with the account used for authentication in the Red Hat Ecosystem Catalog with the default , deployer , and builder service accounts for your OpenShift project. USD oc secrets link --for=pull default <secret-name> USD oc secrets link --for=pull deployer <secret-name> USD oc secrets link --for=pull builder <secret-name> Note In OpenShift Container Platform 4.1 or later, you can also use the web console to associate a pull secret with a project in which you want to deploy container images such as the AMQ Interconnect Operator. To do this, click Administration Service Accounts . Specify the pull secret associated with the account that you use for authentication in the Red Hat Container Registry. Deploy the Operator. USD oc create -f deploy/operator.yaml Verify that the Operator is running: USD oc get pods -l name=qdr-operator If the output does not report the pod is running, use the following command to determine the issue that prevented it from running: Verify that the CRD is registered in the cluster and review the CRD details: USD oc get crd USD oc describe crd interconnects.interconnectedcloud.github.io Note It is recommended that you deploy only a single instance of the AMQ Interconnect Operator in a given OpenShift project. Setting the replicas element of your Operator deployment to a value greater than 1 , or deploying the Operator more than once in the same project is not recommended. Additional resources For an alternative method of installing the AMQ Interconnect Operator that uses the OperatorHub graphical interface, see Section 3.2, "Installing the Operator using the Operator Lifecycle Manager" . 3.2. Installing the Operator using the Operator Lifecycle Manager The procedures in this section show how to use the OperatorHub to install and deploy the latest version of the Red Hat Integration - AMQ Interconnect Operator in a given OpenShift project. In OpenShift Container Platform 4.1 and later, the Operator Lifecycle Manager (OLM) helps users install, update, and generally manage the lifecycle of all Operators and their associated services running across their clusters. It is part of the Operator Framework, an open source toolkit designed to manage Kubernetes native applications (Operators) in an effective, automated, and scalable way. Prerequisites Access to an OpenShift Container Platform 4.6, 4.7, 4.8, 4.9 or 4.10 cluster using a cluster-admin account. Red Hat Integration - AMQ Certificate Manager Operator is installed in the OpenShift Container Platform cluster if required. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Choose Red Hat Integration - AMQ Interconnect Operator from the list of available Operators, and then click Install . On the Operator Installation page, select the namespace into which you want to install the Operator, and then click Install . The Installed Operators page appears displaying the status of the Operator installation. Verify that the AMQ Interconnect Operator is displayed and wait until the Status changes to Succeeded . 
If the installation is not successful, troubleshoot the error: Click Red Hat Integration - AMQ Interconnect Operator on the Installed Operators page. Select the Subscription tab and view any failures or errors. | [
"mkdir ~/router mv amq-interconnect-operator-1.10.7-ocp-install-examples.zip ~/router",
"cd ~/router unzip amq-interconnect-operator-1.10.7-ocp-install-examples.zip",
"cd operator",
"oc login -u system:admin",
"oc new-project <project-name>",
"oc project <project-name>",
"oc create -f deploy/service_account.yaml",
"oc create -f deploy/role.yaml",
"oc create -f deploy/role_binding.yaml",
"oc login -u system:admin",
"oc project <project-name>",
"cd ~/router/operator/qdr-operator-1.10-ocp-install-examples",
"oc create -f deploy/crds/interconnectedcloud_v1alpha1_interconnect_crd.yaml",
"oc secrets link --for=pull default <secret-name> oc secrets link --for=pull deployer <secret-name> oc secrets link --for=pull builder <secret-name>",
"oc create -f deploy/operator.yaml",
"oc get pods -l name=qdr-operator",
"oc describe pod -l name=qdr-operator",
"oc get crd oc describe crd interconnects.interconnectedcloud.github.io"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/deploying_amq_interconnect_on_openshift/adding-operator-router-ocp |
Chapter 6. Network connections | Chapter 6. Network connections 6.1. Automatic failover A client can receive information about all master and slave brokers, so that in the event of a connection failure, it can reconnect to the slave broker. The slave broker then automatically re-creates any sessions and consumers that existed on each connection before failover. This feature saves you from having to hand-code manual reconnection logic in your applications. When a session is recreated on the slave, it does not have any knowledge of messages already sent or acknowledged. Any in-flight sends or acknowledgements at the time of failover might also be lost. However, even without transparent failover, it is simple to guarantee once and only once delivery, even in the case of failure, by using a combination of duplicate detection and retrying of transactions. Clients detect connection failure when they have not received packets from the broker within a configurable period of time. See Section 6.3, "Detecting dead connections" for more information. You have a number of methods to configure clients to receive information about master and slave. One option is to configure clients to connect to a specific broker and then receive information about the other brokers in the cluster. See Section 6.7, "Configuring static discovery" for more information. The most common way, however, is to use broker discovery . For details on how to configure broker discovery, see Section 6.6, "Configuring dynamic discovery" . Also, you can configure the client by adding parameters to the query string of the URI used to connect to the broker, as in the example below. Procedure To configure your clients for failover through the use of a query string, ensure the following components of the URI are set properly: The host:port portion of the URI must point to a master broker that is properly configured with a backup. This host and port is used only for the initial connection. The host:port value has nothing to do with the actual connection failover between a live and a backup server. In the example above, localhost:61616 is used for the host:port . (Optional) To use more than one broker as a possible initial connection, group the host:port entries as in the following example: Include the name-value pair ha=true as part of the query string to ensure the client receives information about each master and slave broker in the cluster. Include the name-value pair reconnectAttempts=n , where n is an integer greater than 0. This parameter sets the number of times the client attempts to reconnect to a broker. Note Failover occurs only if ha=true and reconnectAttempts is greater than 0. Also, the client must make an initial connection to the master broker in order to receive information about other brokers. If the initial connection fails, the client can only retry to establish it. See Section 6.1.1, "Failing over during the initial connection" for more information. 6.1.1. Failing over during the initial connection Because the client does not receive information about every broker until after the first connection to the HA cluster, there is a window of time where the client can connect only to the broker included in the connection URI. Therefore, if a failure happens during this initial connection, the client cannot failover to other master brokers, but can only try to re-establish the initial connection. Clients can be configured for a set number of reconnection attempts. 
Once the number of attempts has been made, an exception is thrown. Setting the number of reconnection attempts The examples below shows how to set the number of reconnection attempts to 3 using the AMQ Core Protocol JMS client. The default value is 0, that is, try only once. Procedure Set the number of reconnection attempts by passing a value to ServerLocator.setInitialConnectAttempts() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setInitialConnectAttempts(3); Setting a global number of reconnection attempts Alternatively, you can apply a global value for the maximum number of reconnection attempts within the broker's configuration. The maximum is applied to all client connections. Procedure Edit <broker-instance-dir>/etc/broker.xml by adding the initial-connect-attempts configuration element and providing a value for the time-to-live, as in the example below. <configuration> <core> ... <initial-connect-attempts>3</initial-connect-attempts> 1 ... </core> </configuration> 1 All clients connecting to the broker are allowed a maximum of three attempts to reconnect. The default is -1, which allows clients unlimited attempts. 6.1.2. Handling blocking calls during failover When failover occurs and the client is waiting for a response from the broker to continue its execution, the newly created session does not have any knowledge of the call that was in progress. The initial call might otherwise hang forever, waiting for a response that never comes. To prevent this, the broker is designed to unblock any blocking calls that were in progress at the time of failover by making them throw an exception. Client code can catch these exceptions and retry any operations if desired. When using AMQ Core Protocol JMS clients, if the unblocked method is a call to commit() or prepare() , the transaction is automatically rolled back and the broker throws an exception. 6.1.3. Handling failover with transactions When using AMQ Core Protocol JMS clients, if the session is transactional and messages have already been sent or acknowledged in the current transaction, the broker cannot be sure that those messages or their acknowledgements were lost during the failover. Consequently, the transaction is marked for rollback only. Any subsequent attempt to commit it throws an javax.jms.TransactionRolledBackException . Warning The caveat to this rule is when XA is used. If a two-phase commit is used and prepare() has already been called, rolling back could cause a HeuristicMixedException . Because of this, the commit throws an XAException.XA_RETRY exception, which informs the Transaction Manager it should retry the commit at some later point. If the original commit has not occurred, it still exists and can be committed. If the commit does not exist, it is assumed to have been committed, although the transaction manager might log a warning. A side effect of this exception is that any nonpersistent messages are lost. To avoid such losses, always use persistent messages when using XA. This is not an issue with acknowledgements since they are flushed to the broker before prepare() is called. The AMQ Core Protocol JMS client code must catch the exception and perform any necessary client side rollback. There is no need to roll back the session, however, because it was already rolled back. The user can then retry the transactional operations again on the same session. 
If failover occurs when a commit call is being executed, the broker unblocks the call to prevent the AMQ Core Protocol JMS client from waiting indefinitely for a response. Consequently, the client cannot determine whether the transaction commit was actually processed on the master broker before failure occurred. To remedy this, the AMQ Core Protocol JMS client can enable duplicate detection in the transaction, and retry the transaction operations again after the call is unblocked. If the transaction was successfully committed on the master broker before failover, duplicate detection ensures that any durable messages present in the transaction when it is retried are ignored on the broker side. This prevents messages from being sent more than once. If the session is non transactional, messages or acknowledgements can be lost in case of failover. If you want to provide once and only once delivery guarantees for non transacted sessions, enable duplicate detection and catch unblock exceptions. 6.1.4. Getting notified of connection failure JMS provides a standard mechanism for getting notified asynchronously of connection failure: java.jms.ExceptionListener . Any ExceptionListener or SessionFailureListener instance is always called by the broker if a connection failure occurs, whether the connection was successfully failed over, reconnected, or reattached. You can find out if a reconnect or a reattach has happened by examining the failedOver flag passed in on the connectionFailed on SessionFailureListener . Alternatively, you can inspect the error code of the javax.jms.JMSException , which can be one of the following: Table 6.1. JMSException error codes Error code Description FAILOVER Failover has occurred and the broker has successfully reattached or reconnected DISCONNECT No failover has occurred and the broker is disconnected 6.2. Application-level failover In some cases you might not want automatic client failover, but prefer to code your own reconnection logic in a failure handler instead. This is known as application-level failover, since the failover is handled at the application level. To implement application-level failover when using JMS, set an ExceptionListener class on the JMS connection. The ExceptionListener is called by the broker in the event that a connection failure is detected. In your ExceptionListener , you should close your old JMS connections. You might also want to look up new connection factory instances from JNDI and create new connections. 6.3. Detecting dead connections As long as the it is receiving data from the broker, the client considers a connection to be alive. Configure the client to check its connection for failure by providing a value for the client-failure-check-period property. The default check period for a network connection is 30,000 milliseconds, or 30 seconds, while the default value for an in-VM connection is -1, which means the client never fails the connection from its side if no data is received. Typically, you set the check period to be much lower than the value used for the broker's connection time-to-live, which ensures that clients can reconnect in case of a temporary failure. Setting the check period for detecting dead connections The examples below show how to set the check period to 10,000 milliseconds. Procedure If you are using JNDI, set the check period within the JNDI context environment, jndi.properties , for example, as below. 
If you are not using JNDI, set the check period directly by passing a value to ActiveMQConnectionFactory.setClientFailureCheckPeriod() . ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setClientFailureCheckPeriod(10000); 6.4. Configuring time-to-live By default clients can set a time-to-live (TTL) for their own connections. The examples below show you how to set the TTL. Procedure If you are using JNDI to instantiate your connection factory, you can specify it in the xml config, using the parameter connectionTtl . If you are not using JNDI, the connection TTL is defined by the ConnectionTTL attribute on a ActiveMQConnectionFactory instance. ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConnectionTTL(30000); 6.5. Closing connections A client application must close its resources in a controlled manner before it exits to prevent dead connections from occurring. In Java, it is recommended to close connections inside a finally block: Connection jmsConnection = null; try { ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(...); jmsConnection = jmsConnectionFactory.createConnection(); ...use the connection... } finally { if (jmsConnection != null) { jmsConnection.close(); } } 6.6. Configuring dynamic discovery You can configure AMQ Core Protocol JMS to discover a list of brokers when attempting to establish a connection. If you are using JNDI on the client to look up your JMS connection factory instances, you can specify these parameters in the JNDI context environment. Typically the parameters are defined in a file named jndi.properties . The host and part in the URI for the connection factory should match the group-address and group-port from the corresponding broadcast-group inside broker's broker.xml configuration file. Below is an example of a jndi.properties file configured to connect to a broker's discovery group. When this connection factory is downloaded from JNDI by a client application and JMS connections are created from it, those connections will be load-balanced across the list of servers that the discovery group maintains by listening on the multicast address specified in the broker's discovery group configuration. As an alternative to using JNDI, you can use specify the discovery group parameters directly in your Java code when creating the JMS connection factory. The code below provides an example of how to do this. final String groupAddress = "231.7.7.7"; final int groupPort = 9876; DiscoveryGroupConfiguration discoveryGroupConfiguration = new DiscoveryGroupConfiguration(); UDPBroadcastEndpointFactory udpBroadcastEndpointFactory = new UDPBroadcastEndpointFactory(); udpBroadcastEndpointFactory.setGroupAddress(groupAddress).setGroupPort(groupPort); discoveryGroupConfiguration.setBroadcastEndpointFactory(udpBroadcastEndpointFactory); ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithHA (discoveryGroupConfiguration, JMSFactoryType.CF); Connection jmsConnection1 = jmsConnectionFactory.createConnection(); Connection jmsConnection2 = jmsConnectionFactory.createConnection(); The refresh timeout can be set directly on the DiscoveryGroupConfiguration by using the setter method setRefreshTimeout() . The default value is 10000 milliseconds. On first usage, the connection factory will make sure it waits this long since creation before creating the first connection. 
The default wait time is 10000 milliseconds, but you can change it by passing a new value to DiscoveryGroupConfiguration.setDiscoveryInitialWaitTimeout() . 6.7. Configuring static discovery Sometimes it may be impossible to use UDP on the network you are using. In this case you can configure a connection with an initial list of possible servers. The list can be just one broker that you know will always be available, or a list of brokers where at least one will be available. This does not mean that you have to know where all your servers are going to be hosted. You can configure these servers to use the reliable servers to connect to. After they are connected, their connection details will be propagated from the server to the client. If you are using JNDI on the client to look up your JMS connection factory instances, you can specify these parameters in the JNDI context environment. Typically the parameters are defined in a file named jndi.properties . Below is an example jndi.properties file that provides a static list of brokers instead of using dynamic discovery. When the above connection factory is used by a client, its connections will be load-balanced across the list of brokers defined within the parentheses () . If you are instantiating the JMS connection factory directly, you can specify the connector list explicitly when creating the JMS connection factory, as in the example below. HashMap<String, Object> map = new HashMap<String, Object>(); map.put("host", "myhost"); map.put("port", "61616"); TransportConfiguration broker1 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map); HashMap<String, Object> map2 = new HashMap<String, Object>(); map2.put("host", "myhost2"); map2.put("port", "61617"); TransportConfiguration broker2 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map2); ActiveMQConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithHA (JMSFactoryType.CF, broker1, broker2); 6.8. Configuring a broker connector Connectors define how clients can connect to the broker. You can configure them from the client using the JMS connection factory. Map<String, Object> connectionParams = new HashMap<String, Object>(); connectionParams.put(org.apache.activemq.artemis.core.remoting.impl.netty.TransportConstants.PORT_PROP_NAME, 61617); TransportConfiguration transportConfiguration = new TransportConfiguration( "org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory", connectionParams); ConnectionFactory connectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration); Connection jmsConnection = connectionFactory.createConnection(); | [
"connectionFactory.ConnectionFactory=tcp://localhost:61616?ha=true&reconnectAttempts=3",
"connectionFactory.ConnectionFactory=(tcp://host1:port,tcp://host2:port)?ha=true&reconnectAttempts=3",
"ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setInitialConnectAttempts(3);",
"<configuration> <core> <initial-connect-attempts>3</initial-connect-attempts> 1 </core> </configuration>",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?clientFailureCheckPeriod=10000",
"ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setClientFailureCheckPeriod(10000);",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=tcp://localhost:61616?connectionTtl=30000",
"ConnectionFactory cf = ActiveMQJMSClient.createConnectionFactory(...) cf.setConnectionTTL(30000);",
"Connection jmsConnection = null; try { ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(...); jmsConnection = jmsConnectionFactory.createConnection(); ...use the connection } finally { if (jmsConnection != null) { jmsConnection.close(); } }",
"java.naming.factory.initial = ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=udp://231.7.7.7:9876",
"final String groupAddress = \"231.7.7.7\"; final int groupPort = 9876; DiscoveryGroupConfiguration discoveryGroupConfiguration = new DiscoveryGroupConfiguration(); UDPBroadcastEndpointFactory udpBroadcastEndpointFactory = new UDPBroadcastEndpointFactory(); udpBroadcastEndpointFactory.setGroupAddress(groupAddress).setGroupPort(groupPort); discoveryGroupConfiguration.setBroadcastEndpointFactory(udpBroadcastEndpointFactory); ConnectionFactory jmsConnectionFactory = ActiveMQJMSClient.createConnectionFactoryWithHA (discoveryGroupConfiguration, JMSFactoryType.CF); Connection jmsConnection1 = jmsConnectionFactory.createConnection(); Connection jmsConnection2 = jmsConnectionFactory.createConnection();",
"java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory connectionFactory.myConnectionFactory=(tcp://myhost:61616,tcp://myhost2:61616)",
"HashMap<String, Object> map = new HashMap<String, Object>(); map.put(\"host\", \"myhost\"); map.put(\"port\", \"61616\"); TransportConfiguration broker1 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map); HashMap<String, Object> map2 = new HashMap<String, Object>(); map2.put(\"host\", \"myhost2\"); map2.put(\"port\", \"61617\"); TransportConfiguration broker2 = new TransportConfiguration (NettyConnectorFactory.class.getName(), map2); ActiveMQConnectionFactory cf = ActiveMQJMSClient.createConnectionFactoryWithHA (JMSFactoryType.CF, broker1, broker2);",
"Map<String, Object> connectionParams = new HashMap<String, Object>(); connectionParams.put(org.apache.activemq.artemis.core.remoting.impl.netty.TransportConstants.PORT_PROP_NAME, 61617); TransportConfiguration transportConfiguration = new TransportConfiguration( \"org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory\", connectionParams); ConnectionFactory connectionFactory = ActiveMQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transportConfiguration); Connection jmsConnection = connectionFactory.createConnection();"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_core_protocol_jms_client/network_connections |
Part IV. Administration: Managing Identities | Part IV. Administration: Managing Identities This part details how to manage user accounts, hosts, as well as user groups and host groups. In addition, it details how to assign and view unique UID and GID numbers and how user and group schema works. The following chapter deals with managing services and delegating access to hosts and services. The final chapters provide instruction on how to define Access Control for Identity Management users, how to manage Kerberos flags and principal aliases, and how to integrate with NIS domains and Netgroups . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/p.part-administration-guide-identities |
3.5. Listing Hosts | 3.5. Listing Hosts This Ruby example lists the hosts. # Get the reference to the root of the services tree: system_service = connection.system_service # Get the reference to the service that manages the # collection of hosts: host_service = system_service.hosts_service # Retrieve the list of hosts and for each one # print its name: host = host_service.list host.each do |host| puts host.name end In an environment with only one attached host ( Atlantic ) the example outputs: For more information, see HostsService:list-instance_method . | [
"Get the reference to the root of the services tree: system_service = connection.system_service Get the reference to the service that manages the collection of hosts: host_service = system_service.hosts_service Retrieve the list of hosts and for each one print its name: host = host_service.list host.each do |host| puts host.name end",
"Atlantic"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/ruby_sdk_guide/listing_hosts |
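A similar listing can be written against the engine API from Python, if the oVirt Python SDK (ovirtsdk4) is available in your environment. This is only an illustrative sketch alongside the Ruby SDK example above; the engine URL, credentials, and CA file are placeholders, not values from the guide.
import ovirtsdk4 as sdk

# Connect to the engine; replace the URL, credentials, and CA file with
# values for your own environment.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='redacted',
    ca_file='ca.pem',
)

# Get the service that manages the collection of hosts and print each name,
# mirroring the Ruby example above.
hosts_service = connection.system_service().hosts_service()
for host in hosts_service.list():
    print(host.name)

connection.close()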
Chapter 5. Configuring VLAN tagging | Chapter 5. Configuring VLAN tagging A Virtual Local Area Network (VLAN) is a logical network within a physical network. The VLAN interface tags packets with the VLAN ID as they pass through the interface, and removes tags of returning packets. You create VLAN interfaces on top of another interface, such as Ethernet, bond, team, or bridge devices. These interfaces are called the parent interface . Red Hat Enterprise Linux provides administrators with different options to configure VLAN devices. For example: Use nmcli to configure VLAN tagging using the command line. Use the RHEL web console to configure VLAN tagging using a web browser. Use nmtui to configure VLAN tagging in a text-based user interface. Use the nm-connection-editor application to configure connections in a graphical interface. Use nmstatectl to configure connections through the Nmstate API. Use RHEL system roles to automate the VLAN configuration on one or multiple hosts. 5.1. Configuring VLAN tagging by using nmcli You can configure Virtual Local Area Network (VLAN) tagging on the command line using the nmcli utility. Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option. A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The bond is usually not expected to get IP addresses from a DHCP server or IPv6 auto-configuration. Ensure it by setting the ipv4.method=disable and ipv6.method=ignore options while creating the bond. Otherwise, if DHCP or IPv6 auto-configuration fails after some time, the interface might be brought down. The switch that the host is connected to is configured to support VLAN tags. For details, see the documentation of your switch. Procedure Display the network interfaces: Create the VLAN interface. For example, to create a VLAN interface named vlan10 that uses enp1s0 as its parent interface and that tags packets with VLAN ID 10 , enter: Note that the VLAN must be within the range from 0 to 4094 . By default, the VLAN connection inherits the maximum transmission unit (MTU) from the parent interface. Optionally, set a different MTU value: Configure the IPv4 settings: If you plan to use this VLAN device as a port of other devices, enter: To use DHCP, no action is required. To set a static IPv4 address, network mask, default gateway, and DNS server to the vlan10 connection, enter: Configure the IPv6 settings: If you plan to use this VLAN device as a port of other devices, enter: To use stateless address autoconfiguration (SLAAC), no action is required. To set a static IPv6 address, network mask, default gateway, and DNS server to the vlan10 connection, enter: Activate the connection: Verification Verify the settings: Additional resources nm-settings(5) man page on your system 5.2. Configuring VLAN tagging by using the RHEL web console You can configure VLAN tagging if you prefer to manage network settings using a web browser-based interface in the RHEL web console. Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option.
A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The bond is usually not expected to get IP addresses from a DHCP server or IPv6 auto-configuration. Ensure this by disabling the IPv4 and IPv6 protocols while creating the bond. Otherwise, if DHCP or IPv6 auto-configuration fails after some time, the interface might be brought down. The switch that the host is connected to is configured to support VLAN tags. For details, see the documentation of your switch. You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Select the Networking tab in the navigation on the left side of the screen. Click Add VLAN in the Interfaces section. Select the parent device. Enter the VLAN ID. Enter the name of the VLAN device or keep the automatically-generated name. Click Apply . By default, the VLAN device uses a dynamic IP address. If you want to set a static IP address: Click the name of the VLAN device in the Interfaces section. Click Edit next to the protocol you want to configure. Select Manual next to Addresses , and enter the IP address, prefix, and default gateway. In the DNS section, click the + button, and enter the IP address of the DNS server. Repeat this step to set multiple DNS servers. In the DNS search domains section, click the + button, and enter the search domain. If the interface requires static routes, configure them in the Routes section. Click Apply . Verification Select the Networking tab in the navigation on the left side of the screen, and check if there is incoming and outgoing traffic on the interface: 5.3. Configuring VLAN tagging by using nmtui The nmtui application provides a text-based user interface for NetworkManager. You can use nmtui to configure VLAN tagging on a host without a graphical interface. Note In nmtui : Navigate by using the cursor keys. Press a button by selecting it and hitting Enter . Select and clear checkboxes by using Space . To return to the previous screen, use ESC . Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option. A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The bond is usually not expected to get IP addresses from a DHCP server or IPv6 auto-configuration. Ensure it by setting the ipv4.method=disable and ipv6.method=ignore options while creating the bond. Otherwise, if DHCP or IPv6 auto-configuration fails after some time, the interface might be brought down. The switch that the host is connected to is configured to support VLAN tags. For details, see the documentation of your switch. Procedure If you do not know the network device name on which you want to configure VLAN tagging, display the available devices: Start nmtui : Select Edit a connection , and press Enter . Press Add . Select VLAN from the list of network types, and press Enter . Optional: Enter a name for the NetworkManager profile to be created.
On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Enter the VLAN device name to be created into the Device field. Enter the name of the device on which you want to configure VLAN tagging into the Parent field. Enter the VLAN ID. The ID must be within the range from 0 to 4094 . Depending on your environment, configure the IP address settings in the IPv4 configuration and IPv6 configuration areas accordingly. For this, press the button next to these areas, and select: Disabled , if this VLAN device does not require an IP address or you want to use it as a port of other devices. Automatic , if a DHCP server or stateless address autoconfiguration (SLAAC) dynamically assigns an IP address to the VLAN device. Manual , if the network requires static IP address settings. In this case, you must fill further fields: Press Show next to the protocol you want to configure to display additional fields. Press Add next to Addresses , and enter the IP address and the subnet mask in Classless Inter-Domain Routing (CIDR) format. If you do not specify a subnet mask, NetworkManager sets a /32 subnet mask for IPv4 addresses and /64 for IPv6 addresses. Enter the address of the default gateway. Press Add next to DNS servers , and enter the DNS server address. Press Add next to Search domains , and enter the DNS search domain. Figure 5.1. Example of a VLAN connection with static IP address settings Press OK to create and automatically activate the new connection. Press Back to return to the main menu. Select Quit , and press Enter to close the nmtui application. Verification Verify the settings: 5.4. Configuring VLAN tagging by using nm-connection-editor You can configure Virtual Local Area Network (VLAN) tagging in a graphical interface using the nm-connection-editor application. Prerequisites The interface you plan to use as a parent to the virtual VLAN interface supports VLAN tags. If you configure the VLAN on top of a bond interface: The ports of the bond are up. The bond is not configured with the fail_over_mac=follow option. A VLAN virtual device cannot change its MAC address to match the parent's new MAC address. In such a case, the traffic would still be sent with the incorrect source MAC address. The switch that the host is connected to is configured to support VLAN tags. For details, see the documentation of your switch. Procedure Open a terminal, and enter nm-connection-editor : Click the + button to add a new connection. Select the VLAN connection type, and click Create . On the VLAN tab: Select the parent interface. Select the VLAN id. Note that the VLAN must be within the range from 0 to 4094 . By default, the VLAN connection inherits the maximum transmission unit (MTU) from the parent interface. Optionally, set a different MTU value. Optional: Set the name of the VLAN interface and further VLAN-specific options. Configure the IP address settings on both the IPv4 Settings and IPv6 Settings tabs: If you plan to use this VLAN device as a port of other devices, set the Method field to Disabled . To use DHCP, leave the Method field at its default, Automatic (DHCP) . To use static IP settings, set the Method field to Manual and fill the fields accordingly: Click Save . Close nm-connection-editor . Verification Verify the settings: Additional resources Configuring NetworkManager to avoid using a specific profile to provide a default gateway 5.5.
Configuring VLAN tagging by using nmstatectl Use the nmstatectl utility to configure a Virtual Local Area Network (VLAN) through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Depending on your environment, adjust the YAML file accordingly. For example, to use different devices than Ethernet adapters in the VLAN, adapt the base-iface attribute and type attributes of the ports you use in the VLAN. Prerequisites To use Ethernet devices as ports in the VLAN, the physical or virtual Ethernet devices must be installed on the server. The nmstate package is installed. Procedure Create a YAML file, for example ~/create-vlan.yml , with the following content: --- interfaces: - name: vlan10 type: vlan state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false vlan: base-iface: enp1s0 id: 10 - name: enp1s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: vlan10 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: vlan10 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb These settings define a VLAN with ID 10 that uses the enp1s0 device. As the child device, the VLAN connection has the following settings: A static IPv4 address - 192.0.2.1 with the /24 subnet mask A static IPv6 address - 2001:db8:1::1 with the /64 subnet mask An IPv4 default gateway - 192.0.2.254 An IPv6 default gateway - 2001:db8:1::fffe An IPv4 DNS server - 192.0.2.200 An IPv6 DNS server - 2001:db8:1::ffbb A DNS search domain - example.com Apply the settings to the system: Verification Display the status of the devices and connections: Display all settings of the connection profile: Display the connection settings in YAML format: Additional resources nmstatectl(8) man page on your system /usr/share/doc/nmstate/examples/ directory 5.6. Configuring VLAN tagging by using the network RHEL system role If your network uses Virtual Local Area Networks (VLANs) to separate network traffic into logical networks, create a NetworkManager connection profile to configure VLAN tagging. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use the network RHEL system role to configure VLAN tagging and, if a connection profile for the VLAN's parent device does not exist, the role can create it as well. Note If the VLAN device requires an IP address, default gateway, and DNS settings, configure them on the VLAN device and not on the parent device. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them.
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up The settings specified in the example playbook include the following: type: <profile_type> Sets the type of the profile to create. The example playbook creates two connection profiles: One for the parent Ethernet device and one for the VLAN device. dhcp4: <value> If set to yes , automatic IPv4 address assignment from DHCP, PPP, or similar services is enabled. Disable the IP address configuration on the parent device. auto6: <value> If set to yes , IPv6 auto-configuration is enabled. In this case, by default, NetworkManager uses Router Advertisements and, if the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. Disable the IP address configuration on the parent device. parent: <parent_device> Sets the parent device of the VLAN connection profile. In the example, the parent is the Ethernet interface. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Verify the VLAN settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory | [
"nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet disconnected enp1s0 bridge0 bridge connected bridge0 bond0 bond connected bond0",
"nmcli connection add type vlan con-name vlan10 ifname vlan10 vlan.parent enp1s0 vlan.id 10",
"nmcli connection modify vlan10 ethernet.mtu 2000",
"nmcli connection modify vlan10 ipv4.method disabled",
"nmcli connection modify vlan10 ipv4.addresses '192.0.2.1/24' ipv4.gateway '192.0.2.254' ipv4.dns '192.0.2.253' ipv4.method manual",
"nmcli connection modify vlan10 ipv6.method disabled",
"nmcli connection modify vlan10 ipv6.addresses '2001:db8:1::1/32' ipv6.gateway '2001:db8:1::fffe' ipv6.dns '2001:db8:1::fffd' ipv6.method manual",
"nmcli connection up vlan10",
"ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever",
"nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --",
"nmtui",
"ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever",
"nm-connection-editor",
"ip -d addr show vlan10 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:d5:e0:fb brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute vlan10 valid_lft forever preferred_lft forever inet6 2001:db8:1::1/32 scope global noprefixroute valid_lft forever preferred_lft forever inet6 fe80::8dd7:9030:6f8e:89e6/64 scope link noprefixroute valid_lft forever preferred_lft forever",
"--- interfaces: - name: vlan10 type: vlan state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false vlan: base-iface: enp1s0 id: 10 - name: enp1s0 type: ethernet state: up routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: vlan10 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: vlan10 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb",
"nmstatectl apply ~/create-vlan.yml",
"nmcli device status DEVICE TYPE STATE CONNECTION vlan10 vlan connected vlan10",
"nmcli connection show vlan10 connection.id: vlan10 connection.uuid: 1722970f-788e-4f81-bd7d-a86bf21c9df5 connection.stable-id: -- connection.type: vlan connection.interface-name: vlan10",
"nmstatectl show vlan0",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: VLAN connection profile with Ethernet port ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: # Ethernet profile - name: enp1s0 type: ethernet interface_name: enp1s0 autoconnect: yes state: up ip: dhcp4: no auto6: no # VLAN profile - name: enp1s0.10 type: vlan vlan: id: 10 ip: dhcp4: yes auto6: yes parent: enp1s0 state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m command -a 'ip -d addr show enp1s0.10' managed-node-01.example.com | CHANGED | rc=0 >> 4: vlan10@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 52:54:00:72:2f:6e brd ff:ff:ff:ff:ff:ff promiscuity 0 vlan protocol 802.1Q id 10 <REORDER_HDR> numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_networking/configuring-vlan-tagging_configuring-and-managing-networking |
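The nmcli procedure in the chapter above can also be scripted. The following Python sketch simply wraps the chapter's own nmcli commands with the subprocess module; the parent interface, VLAN ID, and addresses are the example values from the chapter, not requirements, and the script assumes it runs with sufficient privileges on the managed host.
import subprocess

def run(args):
    # Print and execute one nmcli command, failing fast on errors.
    print('+', ' '.join(args))
    subprocess.run(args, check=True)

parent = 'enp1s0'
vlan_id = 10
conn = f'vlan{vlan_id}'

# Create the VLAN connection profile on top of the parent interface.
run(['nmcli', 'connection', 'add', 'type', 'vlan', 'con-name', conn,
     'ifname', conn, 'vlan.parent', parent, 'vlan.id', str(vlan_id)])

# Assign the static IPv4 settings used in the chapter's examples.
run(['nmcli', 'connection', 'modify', conn,
     'ipv4.addresses', '192.0.2.1/24', 'ipv4.gateway', '192.0.2.254',
     'ipv4.dns', '192.0.2.253', 'ipv4.method', 'manual'])

# Activate the connection.
run(['nmcli', 'connection', 'up', conn])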
6.4. RHEA-2013:0369 - new packages: pcs | 6.4. RHEA-2013:0369 - new packages: pcs New pcs packages are now available for Red Hat Enterprise Linux 6. The pcs packages provide a command-line tool and graphical web interface to configure and manage pacemaker and corosync. This enhancement update adds the pcs package as a Technology Preview. (BZ# 657370 ) More information about Red Hat Technology Previews is available here: https://access.redhat.com/support/offerings/techpreview/ All users who want to use the pcs Technology Preview are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/rhea-2013-0369 |
function::cputime_to_string | function::cputime_to_string Name function::cputime_to_string - Human readable string for given cputime Synopsis Arguments cputime Time to translate. Description Equivalent to calling: msec_to_string (cputime_to_msecs (cputime)). | [
"function cputime_to_string:string(cputime:long)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-cputime-to-string |
Part V. Red Hat JBoss Data Grid Quickstarts | Part V. Red Hat JBoss Data Grid Quickstarts The following lists the quickstarts included in this document and provides information about which container and mode they are used in: Table 14. Quickstarts Information Quickstart Name Container JBoss Data Grid Mode Link to Details Hello World JBoss EAP Library mode Chapter 13, The Hello World Quickstart Carmart Non-Transactional JBoss EAP and JBoss Enterprise Web Server Library mode Section 14.3, "The (Non-transactional) CarMart Quickstart Using JBoss EAP" Carmart Non-Transactional JBoss EAP and JBoss Enterprise Web Server Remote Client-Server mode Section 14.5, "The (Non-transactional) CarMart Quickstart in Remote Client-Server Mode (JBoss EAP)" and Section 14.6, "The (Non-Transactional) CarMart Quickstart in Remote Client-Server Mode (JBoss Enterprise Web Server)" Carmart Transactional JBoss EAP and JBoss Enterprise Web Server Library mode Section 14.7, "The (Transactional) CarMart Quickstart Using JBoss EAP" and Section 14.8, "The (Transactional) CarMart Quickstart Using JBoss Enterprise Web Server" Football Application No container Remote Client-Server mode Chapter 15, The Football Quickstart Endpoint Examples Rapid Stock Market No container Remote Client-Server mode Chapter 16, The Rapid Stock Market Quickstart Cluster App JBoss EAP Library mode Chapter 17, The Cluster App Quickstart camel-jbossdatagrid-fuse JBoss Fuse Library mode Chapter 18, The camel-jbossdatagrid-fuse Quickstart Report a bug | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/part-red_hat_jboss_data_grid_quickstarts |
Chapter 33. Configuring an FCoE Interface to Automatically Mount at Boot | Chapter 33. Configuring an FCoE Interface to Automatically Mount at Boot Note The instructions in this section are available in /usr/share/doc/fcoe-utils- version /README as of Red Hat Enterprise Linux 6.1. Refer to that document for any possible changes throughout minor releases. You can mount newly discovered disks via udev rules, autofs , and other similar methods. Sometimes, however, a specific service might require the FCoE disk to be mounted at boot-time. In such cases, the FCoE disk should be mounted as soon as the fcoe service runs and before the initiation of any service that requires the FCoE disk. To configure an FCoE disk to automatically mount at boot, add proper FCoE mounting code to the startup script for the fcoe service. The fcoe startup script is /etc/init.d/fcoe . The FCoE mounting code is different per system configuration, whether you are using a simple formatted FCoE disk, LVM, or multipathed device node. Example 33.1. FCoE mounting code The following is a sample FCoE mounting code for mounting file systems specified via wild cards in /etc/fstab : The mount_fcoe_disks_from_fstab function should be invoked after the fcoe service script starts the fcoemon daemon. This will mount FCoE disks specified by the following paths in /etc/fstab : Entries with fc- and _netdev sub-strings enable the mount_fcoe_disks_from_fstab function to identify FCoE disk mount entries. For more information on /etc/fstab entries, refer to man 5 fstab . Note The fcoe service does not implement a timeout for FCoE disk discovery. As such, the FCoE mounting code should implement its own timeout period. | [
"mount_fcoe_disks_from_fstab() { local timeout=20 local done=1 local fcoe_disks=(USD(egrep 'by-path\\/fc-.*_netdev' /etc/fstab | cut -d ' ' -f1)) test -z USDfcoe_disks && return 0 echo -n \"Waiting for fcoe disks . \" while [ USDtimeout -gt 0 ]; do for disk in USD{fcoe_disks[*]}; do if ! test -b USDdisk; then done=0 break fi done test USDdone -eq 1 && break; sleep 1 echo -n \". \" done=1 let timeout-- done if test USDtimeout -eq 0; then echo \"timeout!\" else echo \"done!\" fi # mount any newly discovered disk mount -a 2>/dev/null }",
"/dev/disk/by-path/fc-0xXX:0xXX /mnt/fcoe-disk1 ext3 defaults,_netdev 0 0 /dev/disk/by-path/fc-0xYY:0xYY /mnt/fcoe-disk2 ext3 defaults,_netdev 0 0"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/fcoe-config-automount |
Chapter 6. Installing a three-node cluster on OpenStack | Chapter 6. Installing a three-node cluster on OpenStack In OpenShift Container Platform version 4.18, you can install a three-node cluster on Red Hat OpenStack Platform (RHOSP). A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. You can install a three-node cluster on installer-provisioned infrastructure only. 6.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... 6.2. Next steps Installing a cluster on OpenStack with customizations | [
"apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_openstack/installing-openstack-three-node |
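If you would rather script the edit than change install-config.yaml by hand, a small Python sketch along the following lines can set every compute pool to 0 replicas; it assumes PyYAML is installed and that install-config.yaml is in the current working directory.
import yaml

# Load the existing install-config.yaml.
with open('install-config.yaml') as f:
    config = yaml.safe_load(f)

# Set replicas to 0 for each compute machine pool, making the control plane
# machines schedulable as described above.
for pool in config.get('compute', []):
    pool['replicas'] = 0

# Write the file back in place.
with open('install-config.yaml', 'w') as f:
    yaml.safe_dump(config, f, default_flow_style=False)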
Chapter 18. Configuring allowed-address-pairs | Chapter 18. Configuring allowed-address-pairs 18.1. Overview of allowed-address-pairs Use allowed-address-pairs to specify mac_address/ip_address (CIDR) pairs that pass through a port regardless of subnet. This enables the use of protocols such as VRRP, which floats an IP address between two instances to enable fast data plane failover. Note The allowed-address-pairs extension is currently supported only by the ML2 and Open vSwitch plug-ins. 18.2. Creating a port and allowing one address pair Use the following command to create a port and allow one address pair: 18.3. Adding allowed-address-pairs Use the following command to add allowed address pairs: Note You cannot set an allowed-address pair that matches the mac_address and ip_address of a port. This is because such a setting has no effect since traffic matching the mac_address and ip_address is already allowed to pass through the port. | [
"openstack port create --network net1 --allowed-address mac_address=<mac_address>,ip_address=<ip_cidr> PORT_NAME",
"openstack port set <port-uuid> --allowed-address mac_address=<mac_address>,ip_address=<ip_cidr>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/sec-allowed-address-pairs |
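The same settings can be applied with the OpenStack SDK for Python instead of the CLI. The sketch below mirrors the two openstack port commands above; the cloud name, network name, and the MAC and IP values are placeholders for illustration only.
import openstack

# Connect by using a clouds.yaml entry named 'mycloud' (placeholder).
conn = openstack.connect(cloud='mycloud')

network = conn.network.find_network('net1')

# Create a port that allows an additional MAC/IP pair to pass through it.
port = conn.network.create_port(
    network_id=network.id,
    allowed_address_pairs=[{'mac_address': 'fa:16:3e:00:00:01',
                            'ip_address': '192.0.2.10/32'}],
)

# Add another allowed address pair to the existing port.
conn.network.update_port(
    port,
    allowed_address_pairs=list(port.allowed_address_pairs or []) + [
        {'ip_address': '192.0.2.11/32'}],
)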
Chapter 1. Overview of deploying in external mode | Chapter 1. Overview of deploying in external mode Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster, or from IBM FlashSystems, available for consumption through OpenShift Container Platform clusters running on the following platforms: VMware vSphere Bare metal Red Hat OpenStack platform (Technology preview) See Planning your deployment for more information. For instructions regarding how to install a RHCS 4 cluster, see Installation guide . Follow these steps to deploy OpenShift Data Foundation in external mode: If you use Red Hat Enterprise Linux hosts for worker nodes, Enable file system access for containers . Skip this step if you use Red Hat Enterprise Linux CoreOS (RHCOS) hosts. Deploy one of the following: Deploy OpenShift Data Foundation using Red Hat Ceph Storage . Deploy OpenShift Data Foundation using IBM FlashSystem . Regional-DR requirements [Developer Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites in order to successfully implement a Disaster Recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription For detailed requirements, see Regional-DR requirements and RHACM requirements . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_in_external_mode/overview-of-deploying-in-external-mode_rhodf
Chapter 15. Replacing storage nodes | Chapter 15. Replacing storage nodes You can choose one of the following procedures to replace storage nodes: Section 15.1, "Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" Section 15.2, "Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure" 15.1. Replacing operational nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the node that you need to replace. Take a note of its Machine Name . Mark the node as unschedulable: <node_name> Specify the name of the node that you need to replace. Drain the node: Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Machines . Search for the required machine. Beside the required machine, click Action menu (...) Delete Machine . Click Delete to confirm that the machine is deleted. A new machine is automatically created. Wait for the new machine to start and transition into Running state. Important This activity might take at least 5 - 10 minutes or more. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . 15.2. Replacing failed nodes on Red Hat OpenStack Platform installer-provisioned infrastructure Procedure Log in to the OpenShift Web Console, and click Compute Nodes . Identify the faulty node, and click on its Machine Name . Click Actions Edit Annotations , and click Add More . Add machine.openshift.io/exclude-node-draining , and click Save . Click Actions Delete Machine , and click Delete . A new machine is automatically created. Wait for the new machine to start. Important This activity might take at least 5 - 10 minutes or more. Ceph errors generated during this period are temporary and are automatically resolved when you label the new node, and it is functional. Click Compute Nodes . Confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From the user interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save .
From the command-line interface Apply the OpenShift Data Foundation label to the new node: <new_node_name> Specify the name of the new node. Optional: If the failed Red Hat OpenStack Platform instance is not removed automatically, terminate the instance from the Red Hat OpenStack Platform console. Verification steps Verify that the new node is present in the output: Click Workloads Pods . Confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* Verify that all the other required OpenShift Data Foundation pods are in Running state. Verify that the new Object Storage Device (OSD) pods are running on the replacement node: Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. For each of the new nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the one or more selected hosts: Display the list of available block devices: Check for the crypt keyword beside the one or more ocs-deviceset names. If the verification steps fail, contact Red Hat Support . | [
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force --delete-emptydir-data=true --ignore-daemonsets",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get pods -o wide -n openshift-storage| egrep -i <new_node_name> | egrep osd",
"oc debug node/ <node_name>",
"chroot /host",
"lsblk"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/replacing_storage_nodes |
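Operators who automate these replacements can drive the cordon and label steps above through the Kubernetes API. The following Python sketch uses the official kubernetes client package; the node names are placeholders, and draining the node still requires oc adm drain or an eviction-based equivalent.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

old_node = 'worker-0.example.com'   # node being replaced (placeholder)
new_node = 'worker-3.example.com'   # replacement node (placeholder)

# Equivalent of "oc adm cordon <node_name>": mark the old node unschedulable.
v1.patch_node(old_node, {'spec': {'unschedulable': True}})

# Apply the OpenShift Data Foundation label to the new node, as in the
# command-line step above.
v1.patch_node(new_node, {
    'metadata': {
        'labels': {'cluster.ocs.openshift.io/openshift-storage': ''}
    }
})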
Chapter 10. Monitoring project and application metrics using the Developer perspective | Chapter 10. Monitoring project and application metrics using the Developer perspective The Observe view in the Developer perspective provides options to monitor your project or application metrics, such as CPU, memory, and bandwidth usage, and network related information. 10.1. Prerequisites You have created and deployed applications on OpenShift Container Platform . You have logged in to the web console and have switched to the Developer perspective . 10.2. Monitoring your project metrics After you create applications in your project and deploy them, you can use the Developer perspective in the web console to see the metrics for your project. Procedure Go to Observe to see the Dashboard , Metrics , Alerts , and Events for your project. Optional: Use the Dashboard tab to see graphs depicting the following application metrics: CPU usage Memory usage Bandwidth consumption Network-related information such as the rate of transmitted and received packets and the rate of dropped packets. In the Dashboard tab, you can access the Kubernetes compute resources dashboards. Note In the Dashboard list, the Kubernetes / Compute Resources / Namespace (Pods) dashboard is selected by default. Use the following options to see further details: Select a dashboard from the Dashboard list to see the filtered metrics. All dashboards produce additional sub-menus when selected, except Kubernetes / Compute Resources / Namespace (Pods) . Select an option from the Time Range list to determine the time frame for the data being captured. Set a custom time range by selecting Custom time range from the Time Range list. You can input or select the From and To dates and times. Click Save to save the custom time range. Select an option from the Refresh Interval list to determine the time period after which the data is refreshed. Hover your cursor over the graphs to see specific details for your pod. Click Inspect located in the upper-right corner of every graph to see any particular graph details. The graph details appear in the Metrics tab. Optional: Use the Metrics tab to query for the required project metric. Figure 10.1. Monitoring metrics In the Select Query list, select an option to filter the required details for your project. The filtered metrics for all the application pods in your project are displayed in the graph. The pods in your project are also listed below. From the list of pods, clear the colored square boxes to remove the metrics for specific pods to further filter your query result. Click Show PromQL to see the Prometheus query. You can further modify this query with the help of prompts to customize the query and filter the metrics you want to see for that namespace. Use the drop-down list to set a time range for the data being displayed. You can click Reset Zoom to reset it to the default time range. Optional: In the Select Query list, select Custom Query to create a custom Prometheus query and filter relevant metrics. Optional: Use the Alerts tab to do the following tasks: See the rules that trigger alerts for the applications in your project. Identify the alerts firing in the project. Silence such alerts if required. Figure 10.2. Monitoring alerts Use the following options to see further details: Use the Filter list to filter the alerts by their Alert State and Severity . Click on an alert to go to the details page for that alert. 
In the Alerts Details page, you can click View Metrics to see the metrics for the alert. Use the Notifications toggle adjoining an alert rule to silence all the alerts for that rule, and then select the duration for which the alerts will be silenced from the Silence for list. You must have the permissions to edit alerts to see the Notifications toggle. Use the Options menu adjoining an alert rule to see the details of the alerting rule. Optional: Use the Events tab to see the events for your project. Figure 10.3. Monitoring events You can filter the displayed events using the following options: In the Resources list, select a resource to see events for that resource. In the All Types list, select a type of event to see events relevant to that type. Search for specific events using the Filter events by names or messages field. 10.3. Monitoring your application metrics After you create applications in your project and deploy them, you can use the Topology view in the Developer perspective to see the alerts and metrics for your application. Critical and warning alerts for your application are indicated on the workload node in the Topology view. Procedure To see the alerts for your workload: In the Topology view, click the workload to see the workload details in the right panel. Click the Observe tab to see the critical and warning alerts for the application; graphs for metrics, such as CPU, memory, and bandwidth usage; and all the events for the application. Note Only critical and warning alerts in the Firing state are displayed in the Topology view. Alerts in the Silenced , Pending and Not Firing states are not displayed. Figure 10.4. Monitoring application metrics Click the alert listed in the right panel to see the alert details in the Alert Details page. Click any of the charts to go to the Metrics tab to see the detailed metrics for the application. Click View monitoring dashboard to see the monitoring dashboard for that application. 10.4. Image vulnerabilities breakdown In the Developer perspective, the project dashboard shows the Image Vulnerabilities link in the Status section. Using this link, you can view the Image Vulnerabilities breakdown window, which includes details regarding vulnerable container images and fixable container images. The icon color indicates severity: Red: High priority. Fix immediately. Orange: Medium priority. Can be fixed after high-priority vulnerabilities. Yellow: Low priority. Can be fixed after high and medium-priority vulnerabilities. Based on the severity level, you can prioritize vulnerabilities and fix them in an organized manner. Figure 10.5. Viewing image vulnerabilities 10.5. Monitoring your application and image vulnerabilities metrics After you create applications in your project and deploy them, use the Developer perspective in the web console to see the metrics for your application dependency vulnerabilities across your cluster. The metrics help you to analyze the following image vulnerabilities in detail: Total count of vulnerable images in a selected project Severity-based counts of all vulnerable images in a selected project Drilldown into severity to obtain the details, such as count of vulnerabilities, count of fixable vulnerabilities, and number of affected pods for each vulnerable image Prerequisites You have installed the Red Hat Quay Container Security operator from the Operator Hub. Note The Red Hat Quay Container Security operator detects vulnerabilities by scanning the images that are in the quay registry. 
Procedure For a general overview of the image vulnerabilities, on the navigation panel of the Developer perspective, click Project to see the project dashboard. Click Image Vulnerabilities in the Status section. The window that opens displays details such as Vulnerable Container Images and Fixable Container Images . For a detailed vulnerabilities overview, click the Vulnerabilities tab on the project dashboard. To get more detail about an image, click its name. View the default graph with all types of vulnerabilities in the Details tab. Optional: Click the toggle button to view a specific type of vulnerability. For example, click App dependency to see vulnerabilities specific to application dependency. Optional: You can filter the list of vulnerabilities based on their Severity and Type or sort them by Severity , Package , Type , Source , Current Version , and Fixed in Version . Click a Vulnerability to get its associated details: Base image vulnerabilities display information from a Red Hat Security Advisory (RHSA). App dependency vulnerabilities display information from the Snyk security application. 10.6. Additional resources About OpenShift Container Platform monitoring | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/building_applications/odc-monitoring-project-and-application-metrics-using-developer-perspective |
Appendix A. Encryption Standards | Appendix A. Encryption Standards A.1. Synchronous Encryption A.1.1. Advanced Encryption Standard - AES In cryptography, the Advanced Encryption Standard (AES) is an encryption standard adopted by the U.S. Government. The standard comprises three block ciphers, AES-128, AES-192 and AES-256, adopted from a larger collection originally published as Rijndael. Each AES cipher has a 128-bit block size, with key sizes of 128, 192 and 256 bits, respectively. The AES ciphers have been analyzed extensively and are now used worldwide, as was the case with its predecessor, the Data Encryption Standard (DES). [5] A.1.1.1. AES History AES was announced by National Institute of Standards and Technology (NIST) as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001 after a 5-year standardization process. Fifteen competing designs were presented and evaluated before Rijndael was selected as the most suitable. It became effective as a standard May 26, 2002. It is available in many different encryption packages. AES is the first publicly accessible and open cipher approved by the NSA for top secret information. The Rijndael cipher was developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, and submitted by them to the AES selection process. Rijndael is a portmanteau of the names of the two inventors. [6] A.1.2. Data Encryption Standard - DES The Data Encryption Standard (DES) is a block cipher (a form of shared secret encryption) that was selected by the National Bureau of Standards as an official Federal Information Processing Standard (FIPS) for the United States in 1976 and which has subsequently enjoyed widespread use internationally. It is based on a symmetric-key algorithm that uses a 56-bit key. The algorithm was initially controversial with classified design elements, a relatively short key length, and suspicions about a National Security Agency (NSA) backdoor. DES consequently came under intense academic scrutiny which motivated the modern understanding of block ciphers and their cryptanalysis. [7] A.1.2.1. DES History DES is now considered to be insecure for many applications. This is chiefly due to the 56-bit key size being too small; in January, 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes. There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are unfeasible to mount in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. In recent years, the cipher has been superseded by the Advanced Encryption Standard (AES). [8] In some documentation, a distinction is made between DES as a standard and DES the algorithm which is referred to as the DEA (the Data Encryption Algorithm). [9] [5] "Advanced Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Advanced_Encryption_Standard [6] "Advanced Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Advanced_Encryption_Standard [7] "Data Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Data_Encryption_Standard [8] "Data Encryption Standard." Wikipedia. 14 November 2009 http://en.wikipedia.org/wiki/Data_Encryption_Standard [9] "Data Encryption Standard." Wikipedia. 
14 November 2009 http://en.wikipedia.org/wiki/Data_Encryption_Standard | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/chap-security_guide-encryption_standards |
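As a concrete illustration of the AES key sizes described above, the following short Python sketch encrypts and decrypts a message with AES-256 in an authenticated mode; it assumes the third-party cryptography package is installed and is not part of the original appendix.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a 256-bit key (AES-256); AES always uses a 128-bit block size.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # 96-bit nonce, must be unique per message
ciphertext = aesgcm.encrypt(nonce, b'example plaintext', None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b'example plaintext'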
Federate with Identity Service | Federate with Identity Service Red Hat OpenStack Platform 16.2 Federate with Identity Service using Red Hat Single Sign-On OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/federate_with_identity_service/index |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/monitoring_openshift_data_foundation/making-open-source-more-inclusive |
Appendix C. Versioning information | Appendix C. Versioning information Documentation last updated on Thursday, March 14th, 2024. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/versioning-information |
22.2.4. Starting and Stopping the Server | 22.2.4. Starting and Stopping the Server On the server that is sharing directories via Samba, the smb service must be running. View the status of the Samba daemon with the following command: Start the daemon with the following command: Stop the daemon with the following command: To start the smb service at boot time, use the command: You can also use chkconfig , ntsysv , or the Services Configuration Tool to configure which services start at boot time. Refer to Chapter 19, Controlling Access to Services for details. Note To view active connections to the system, execute the command smbstatus . | [
"/sbin/service smb status",
"/sbin/service smb start",
"/sbin/service smb stop",
"/sbin/chkconfig --level 345 smb on"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Configuring_a_Samba_Server-Starting_and_Stopping_the_Server |
3. We Need Feedback! | 3. We Need Feedback! If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you! Please submit a report in Bugzilla: http://bugzilla.redhat.com/ against the product Red Hat Enterprise Linux 6 and the component doc-Global_File_System_2 . When submitting a bug report, be sure to mention the manual's identifier: If you have a suggestion for improving the documentation, try to be as specific as possible when describing it. If you have found an error, include the section number and some of the surrounding text so we can find it easily. | [
"rh-gfs2(EN)-6 (2017-3-8T15:15)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/sect-redhat-we_need_feedback |
Chapter 8. Understanding ingress | Chapter 8. Understanding ingress When you create a site that can be linked to, you need to enable ingress on that site. By default, ingress is enabled; however, you can disable it or set it to use a specific ingress type. By default, the ingress type is set to: route if available (OpenShift) loadbalancer Other options include: none useful if you do not need to link to the current site. nodeport nginx-ingress-v1 contour-http-proxy You can set the ingress type using the CLI when creating the site skupper init --ingress <type> or by setting the type in your site YAML, for example to disable ingress: apiVersion: v1 kind: ConfigMap metadata: name: skupper-site data: name: my-site ingress: "none" If the default ingress is not suitable, an alternative is nginx-ingress-v1 . Nginx uses Server Name Indication (SNI) to identify connection targets, which eliminates the need for assigning separate IP addresses as required by loadbalancer . Note When using nginx-ingress-v1 you must enable SSL Passthrough as described in Ingress-Nginx Controller documentation . 8.1. CLI options For a full list of options, see the Skupper Kubernetes CLI reference and Skupper Podman CLI reference documentation. Warning When you create a site and set the logging level to trace , you can inadvertently log sensitive information from HTTP headers. $ skupper init --router-logging trace By default, all skupper commands apply to the cluster you are logged into and the current namespace. The following skupper options allow you to override that behavior and apply to all commands: --namespace <namespace-name> Apply the command to <namespace-name> . For example, if you are currently working in the frontend namespace and want to initialize a site in the backend namespace: $ skupper init --namespace backend --kubeconfig <kubeconfig-path> Path to the kubeconfig file - This allows you to run multiple sessions to a cluster from the same client. An alternative is to set the KUBECONFIG environment variable. --context <context-name> The kubeconfig file can contain defined contexts, and this option allows you to use those contexts. | [
"apiVersion: v1 kind: ConfigMap metadata: name: skupper-site data: name: my-site ingress: \"none\"",
"skupper init --router-logging trace",
"skupper init --namespace backend"
] | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/using_service_interconnect/understanding-ingress |
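The skupper-site ConfigMap shown above can also be created through the Kubernetes API rather than by applying YAML. This Python sketch uses the official kubernetes client; the namespace is a placeholder, and it assumes the Skupper site controller is watching that namespace, otherwise the ConfigMap has no effect.
from kubernetes import client, config

config.load_kube_config()

# Build the same ConfigMap as the YAML example above, with ingress disabled.
site_cm = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name='skupper-site'),
    data={'name': 'my-site', 'ingress': 'none'},
)

client.CoreV1Api().create_namespaced_config_map(
    namespace='my-namespace', body=site_cm)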
Chapter 7. Kafka breaking changes | Chapter 7. Kafka breaking changes This section describes any changes to Kafka that required a corresponding change to AMQ Streams to continue to work. 7.1. Using Kafka's example file connectors Kafka no longer includes the example file connectors FileStreamSourceConnector and FileStreamSinkConnector in its CLASSPATH and plugin.path by default. AMQ Streams has been updated so that you can still use these example connectors. The examples now have to be added to the plugin path like any connector. Two example connector configuration files are provided: examples/connect/kafka-connect-build.yaml provides a Kafka Connect build configuration, which you can deploy to build a new Kafka Connect image with the file connectors. examples/connect/source-connector.yaml provides the configuration required to deploy the file connectors as KafkaConnector resources. See the following: Deploying example KafkaConnector resources Extending Kafka Connect with connector plugins | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/release_notes_for_amq_streams_2.5_on_openshift/kafka-change-str |
Chapter 5. Creating and managing resources for bare-metal instances | Chapter 5. Creating and managing resources for bare-metal instances As a cloud operator you can create and manage resources for bare-metal workloads and enable your cloud users to create bare-metal instances. You can create the following resources for bare-metal workloads: Bare-metal instances Images for bare-metal instances Virtual network interfaces (VIFs) for bare-metal nodes Port groups You can perform the following resource management tasks: Manual node cleaning Attach a virtual network interface (VIF) to a bare-metal instance 5.1. Prerequisites The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic) . You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. The oc command line tool is installed on the workstation. 5.2. Launching bare-metal instances You can launch a bare-metal instance by using the OpenStack Client CLI. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create the bare-metal instance: Replace <network_uuid> with the unique identifier for the network that you created to use with the Bare Metal Provisioning service. Replace <image_uuid> with the unique identifier for the image that has the software profile that your instance requires. Check the status of the instance: Exit the openstackclient pod: 5.3. Images for launching bare-metal instances A Red Hat OpenStack Services on OpenShift (RHOSO) environment that includes the Bare Metal Provisioning service (ironic) requires two sets of images: Deploy images: The deploy images are the agent.ramdisk and agent.kernel images that the Bare Metal Provisioning agent ( ironic-python-agent ) requires to boot the RAM disk over the network and copy the user image to the disk. User images: The images the cloud user uses to provision their bare-metal instances. The user image consists of a kernel image, a ramdisk image, and a main image. The main image is either a root partition, or a whole-disk image: Whole-disk image: An image that contains the partition table and boot loader. Root partition image: Contains only the root partition of the operating system. Compatible whole-disk RHEL guest images should work without modification. To create your own custom disk image, see Creating RHEL KVM or RHOSP-compatible images in Creating and managing images . 5.4. Booting an ISO image directly for use as a RAM disk You can boot a bare-metal instance from a RAM disk or an ISO image if you want to boot an instance with PXE, iPXE, or Virtual Media, and use the instance memory for local storage. This is useful for advanced scientific and ephemeral workloads where writing an image to the local storage is not required or desired. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Specify ramdisk as the deploy interface for the bare-metal node that boots from an ISO image: Tip You can configure the deploy interface when you create the bare-metal node by adding --deploy-interface ramdisk to the openstack baremetal node create command. For information on how to create a bare-metal node, see Enrolling a bare-metal node manually . Update the bare-metal node to boot an ISO image: Replace <node_UUID> with the UUID of the bare-metal node that you want to boot from an ISO image. Replace <boot_iso_url> with the URL of the boot ISO file. 
You can specify the boot ISO file URL by using one of the following methods: HTTP or HTTPS URL File path URL Image service (glance) object UUID Deploy the bare-metal node as an ISO image: Exit the openstackclient pod: 5.5. Creating the virtual network interfaces (VIFs) for bare-metal instances Cloud users can attach their bare-metal instances to the network interfaces you create for the bare-metal workloads. You must create the virtual network interfaces (VIFs) for the cloud user to select for attachment. 5.5.1. Bare Metal Provisioning service virtual network interfaces (VIFs) The Bare Metal Provisioning service (ironic) uses the Networking service (neutron) to manage the attachment state of the virtual network interfaces (VIFs). A VIF is a Networking service port, referred to by the port ID, which is a UUID value. A VIF can be available across a limited number of physical networks, dependent upon the cloud's operating configuration and operating constraints. The Bare Metal Provisioning service can also attach the bare-metal instance to a separate provider network to improve the overall operational security. Each VIF must be attached to a port or port group, therefore the maximum number of VIFs is determined by the number of configured and available ports represented in the Bare Metal Provisioning service. The network interface is one of the driver interfaces that manages the network switching for bare-metal instances. The type of network interface you create influences the operation of your bare-metal workloads. The following network interfaces are available to use with the Bare Metal Provisioning service: noop : Used for standalone deployments, and does not perform any network switching. flat : Places all nodes into a single provider network that is pre-configured on the Networking service and physical equipment. Nodes remain physically connected to this network during their entire life cycle. The supplied VIF attachment record is updated with new DHCP records as needed. When using this network interface, the VIF needs to be created on the same network that the bare-metal node is physically attached to. neutron : Provides tenant-defined networking through the Networking service, separating tenant networks from each other and from the provisioning and cleaning provider networks. Nodes move between these networks during their life cycle. This interface requires Networking service support for the switches attached to the bare-metal instances so they can be programmed. This interface requires the ML2 plugin OVN mechanism driver or other SDN integrations to facilitate port configuration on the network. Use the neutron interface when your environment uses IPv6. 5.5.2. How the Bare Metal Provisioning service manages VIFs when provisioning a bare-metal node When provisioning, by default the Bare Metal Provisioning service (ironic) attempts to attach all PXE-enabled ports to the provisioning network. If you have neutron.add_all_ports enabled, then the Bare Metal Provisioning service attempts to bind all ports to the required service network beyond the Bare Metal Provisioning service ports with pxe_enabled set to True . After the bare-metal nodes are provisioned, and before the bare-metal nodes are moved to the ACTIVE provisioning state, the previously attached ports are unbound. The process for unbinding is dependent on the network interface: flat : All the requested VIFs with all binding configurations in all states are unbound. 
neutron : The VIFs requested by the cloud user are attached to the bare-metal node for the first time, because the VIFs that the Bare Metal Provisioning service created were being deleted during the provisioning process. The same flow and logic applies to the cleaning, service, and rescue processes. 5.5.3. Creating a virtual network interface (VIF) for bare-metal nodes Use the Networking service (neutron) to create the port that serves as the virtual network interface (VIF). If you are using the neutron network interface, then you must also create a physical connection to the underlying physical network by creating a Bare Metal Provisioning service (ironic) port with a binding profile. The binding profile is required by the Networking service's ML2 mechanism driver when a VIF is attached to a bare-metal instance. The binding profile includes the VNIC_BAREMETAL port type, the bare-metal node UUID, and local link connection information that identifies the tenant network that the ML2 mechanism driver must attach to the physical bare-metal port. The binding profile information is populated through the introspection process by using LLDP data that is broadcast from the switches, therefore the switches must have LLDP enabled. You need to manually set or update the binding profile when there is a physical networking change, for example, when a bare-metal port's cable has been moved to a different port on a switch, or the switch has been replaced. Note Decoding LLDP data is performed as a best effort action. Some switch vendors, or changes in switch vendor firmware might impact field decoding. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create the virtual network interface (VIF): If you are using the neutron network interface, then create a Bare Metal Provisioning service port with the binding profile information: Replace <switch_mac_address> with the MAC address or OpenFlow-based datapath_id of the switch. Replace <switch_hostname> with the name of the bare-metal node that hosts the switch. Replace <switch_port_for_connection> with the port ID on the switch, for example, Gig0/1 , or rep0-0 . Replace <phys_net> with the name of the physical network you want to associate with the bare-metal port. The Bare Metal Provisioning service uses the physical network to map the Networking service virtual ports to physical ports and port groups. If not set then any VIF is mapped to that port when there no bare-metal port with a suitable physical network assignment exists. Exit the openstackclient pod: 5.6. Configuring port groups in the Bare Metal Provisioning service Note Port group functionality for bare-metal nodes is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should be used only for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details . Port groups (bonds) provide a method to aggregate multiple network interfaces into a single "bonded" interface. Port group configuration always takes precedence over an individual port configuration. During interface attachment, port groups have a higher priority than the ports, so they are used first. Currently, it is not possible to specify preference for port or port group in an interface attachment request. If a port group is available, the interface attachment will use it. Port groups that do not have any ports are ignored. 
If a port group has a physical network, then all the ports in that port group must have the same physical network. The Bare Metal Provisioning service uses configdrive to support configuration of port groups in the instances. Note Bare Metal Provisioning service API version 1.26 supports port group configuration. To configure port groups in a bare metal deployment, you must configure the port groups on the switches manually. You must ensure that the mode and properties on the switch correspond to the mode and properties on the bare metal side as the naming can vary on the switch. Note You cannot use port groups for provisioning and cleaning if you need to boot a deployment using iPXE. With port group fallback, all the ports in a port group can fallback to individual switch ports when a connection fails. Based on whether a switch supports port group fallback or not, you can use the --support-standalone-ports and --unsupport-standalone-ports options. 5.6.1. Prerequisites The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic) . You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. The oc command line tool is installed on the workstation. 5.6.2. Configuring port groups in the Bare Metal Provisioning service Create a port group to aggregate multiple network interfaces into a single bonded interface. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a port group: Replace <node_uuid> with the UUID of the node that this port group belongs to. Replace <group_name> with the name for this port group. Optional: Replace <mac_address> with the MAC address for the port group. If you do not specify an address, the deployed instance port group address is the same as the Networking service port. If you do not attach the Networking service port, the port group configuration fails. Optional: Replace <mode> with mode of the port group. Specify if the group supports fallback to standalone ports. Note You must configure port groups manually in standalone mode either in the image or by generating the configdrive and adding it to the node's instance_info . Ensure that you have cloud-init version 0.7.7 or later for the port group configuration to work. Associate a port with a port group: During port creation: During port update: Boot an instance by providing an image that has cloud-init or supports bonding. To check if the port group is configured properly, run the following command: Here, X is a number that cloud-init generates automatically for each configured port group, starting with a 0 and incremented by one for each configured port group. Exit the openstackclient pod: 5.7. Cleaning nodes manually The Bare Metal Provisioning service (ironic) cleans nodes automatically when they are unprovisioned to prepare them for provisioning. You can perform manual cleaning on specific nodes as required. Node cleaning has two modes: Metadata only clean: Removes partitions from all disks on the node. The metadata only mode of cleaning is faster than a full clean, but less secure because it erases only partition tables. Use this mode only on trusted tenant environments. Full clean: Removes all data from all disks, using either ATA secure erase or by shredding. A full clean can take several hours to complete. 
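For quick reference before the procedure that follows, the two cleaning modes correspond to the following clean-step requests. This is a sketch only: node0 is a placeholder node name, and the command syntax matches the openstack baremetal node clean call used in the procedure.

# Metadata only clean: erases partition tables only (faster, less secure)
openstack baremetal node clean node0 \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices_metadata"}]'

# Full clean: erases all data on all disks (can take several hours)
openstack baremetal node clean node0 \
    --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]'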
Procedure Access the remote shell for the OpenStackClient pod from your workstation: Check the current state of the node: Replace <node> with the name or UUID of the node to clean. If the node is not in the manageable state, then set it to manageable : Clean the node: Replace <node> with the name or UUID of the node to clean. Replace <clean_mode> with the type of cleaning to perform on the node: erase_devices : Performs a full clean. erase_devices_metadata : Performs a metadata only clean. Wait for the clean to complete, then check the status of the node: manageable : The clean was successful, and the node is ready to provision. clean failed : The clean was unsuccessful. Inspect the last_error field for the cause of failure. Exit the openstackclient pod: 5.8. Attaching a virtual network interface (VIF) to a bare-metal instance To attach a bare-metal instance to the bare-metal network interface, the cloud user can use the Compute service (nova) or the Bare Metal Provisioning service (ironic). Compute service: Cloud users use the openstack server add network command. For more information, see Attaching a network to an instance . Note === When using the Compute service you must explicitly declare the port when creating the instance. When the Compute service makes a request to the Bare Metal Provisioning service to create an instance, the Compute service attempts to record all the VIFs the user requested to be attached in the Bare Metal Provisioning service to generate the metadata. You cannot specify which physical port to attach a VIF to when using the Compute service.If you want to explicitly declare which port to map to, then instead use the Bare Metal Provisioning service to create the attachment. === Bare Metal Provisioning service: Cloud users use the openstack baremetal node vif attach command to attach a VIF to a bare-metal instance. For more information about virtual network interfaces (VIFs), see Bare Metal Provisioning service virtual network interfaces (VIFs) . The following procedure uses the Bare Metal Provisioning service to attach a bare-metal instance to a network. The Bare Metal Provisioning service creates the VIF attachment by using the UUID of the port you created with the Networking service . Procedure Access the remote shell for the OpenStackClient pod from your workstation: Retrieve the UUID of the bare-metal instance you want to attach the VIF to: Retrieve the UUID of the VIF you want to attach to your node: Optional: Retrieve the UUID of the bare-metal port you want to map the VIF to: Attach the VIF to your bare-metal instance: Optional: Replace <port_uuid> with the UUID of the bare-metal port to attach the VIF to. Replace <node> with the name or UUID of the bare-metal instance you want to attach the VIF to. Replace <vif_id> with the name or UUID of the VIF to attach to the bare-metal instance. Exit the openstackclient pod: 5.8.1. How the Bare Metal Provisioning service attaches the VIF to a bare-metal instance When a cloud user requests that a virtual network interface (VIF) is attached to their bare-metal instance by using the openstack baremetal node vif attach command without a declared port or port group preference, the Bare Metal Provisioning service (ironic) selects a suitable unattached port or port group by evaluating the following criteria in order: Ports or port groups do not have a physical network or have a physical network that matches one of the VIF's available physical networks. 
Prefer ports and port groups that have a physical network to ports and port groups that do not have a physical network. Prefer port groups to ports. Prefer ports with PXE enabled. When the Bare Metal Provisioning service attaches any VIF to a bare-metal instance it explicitly sets the MAC address for the physical port to which the VIF is bound. If a node is already in an ACTIVE state, then the Networking service (neutron) updates the VIF attachment. When the Bare Metal Provisioning service unbinds the VIF, it makes a request to the Networking service to reset the assigned MAC address to avoid conflicts with the Networking service's unique hardware MAC address requirement. 5.8.2. Attaching and detaching virtual network interfaces The Bare Metal Provisioning service has an API that you can use to manage the mapping between virtual network interfaces. For example, the interfaces in the Networking service (neutron) and your physical interfaces (NICs). You can configure these interfaces for each bare-metal node to set the virtual network interface (VIF) to physical network interface (PIF) mapping logic. Procedure Access the remote shell for the OpenStackClient pod from your workstation: List the VIF IDs that are connected to the bare-metal node: Replace <node> with the name or UUID of the bare-metal node. After the VIF is attached, the Bare Metal Provisioning service updates the virtual port in the Networking service with the MAC address of the physical port. Check this port address: Create a new port on the network where you created the bare-metal node: Remove the port from the bare-metal instance it was attached to: Check that the IP address no longer exists on the list: Check if there are VIFs attached to the node: Add the newly created port: Verify that the new IP address shows the new port: Check if the VIF ID is the UUID of the new port: Check if the Networking service port MAC address is updated and matches one of the Bare Metal Provisioning service ports: Reboot the bare-metal node so that it recognizes the new IP address: After you detach or attach interfaces, the bare-metal OS removes, adds, or modifies the network interfaces that have changed. When you replace a port, a DHCP request obtains the new IP address, but this might take some time because the old DHCP lease is still valid. To initiate these changes immediately, reboot the bare-metal node. | [
"oc rsh -n openstack openstackclient",
"openstack server create --nic net-id=<network_uuid> --flavor baremetal --image <image_uuid> myBareMetalInstance",
"openstack server list --name myBareMetalInstance",
"exit",
"oc rsh -n openstack openstackclient",
"openstack baremetal node set --deploy-interface ramdisk",
"openstack baremetal node set <node_UUID> --instance-info boot_iso=<boot_iso_url>",
"openstack baremetal node deploy <node_UUID>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack port create --network <network> <name>",
"openstack baremetal port create <physical_mac_address> --node <node_uuid> --local-link-connection switch_id=<switch_mac_address> --local-link-connection switch_info=<switch_hostname> --local-link-connection port_id=<switch_port_for_connection> --pxe-enabled true --physical-network <phys_net>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack baremetal port group create --node <node_uuid> --name <group_name> [--address <mac_address>] [--mode <mode>] --property miimon=100 --property xmit_hash_policy=\"layer2+3\" [--support-standalone-ports]",
"openstack baremetal port create --node <node_uuid> --address <mac_address> --port-group <group_name>",
"openstack baremetal port set <port_uuid> --port-group <group_uuid>",
"cat /proc/net/bonding/bondX",
"exit",
"oc rsh -n openstack openstackclient",
"openstack baremetal node show -f value -c provision_state <node>",
"openstack baremetal node manage <node>",
"openstack baremetal node clean <node> --clean-steps '[{\"interface\": \"deploy\", \"step\": \"<clean_mode>\"}]'",
"exit",
"oc rsh -n openstack openstackclient",
"openstack server list",
"openstack port list",
"openstack baremetal port list",
"openstack baremetal node vif attach [--port-uuid <port_uuid>] <node> <vif_id>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack baremetal node vif list <node> +--------------------------------------+ | ID | +--------------------------------------+ | 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 | +--------------------------------------+",
"openstack port show 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16 -c mac_address -c fixed_ips +-------------+-----------------------------------------------------------------------------+ | Field | Value | +-------------+-----------------------------------------------------------------------------+ | fixed_ips | ip_address='192.168.24.9', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' | | mac_address | 00:2d:28:2f:8d:95 | +-------------+-----------------------------------------------------------------------------+",
"openstack port create --network baremetal --fixed-ip ip-address=192.168.24.24 <port_name>",
"openstack server remove port <instance_name> 4475bc5a-6f6e-466d-bcb6-6c2dce0fba16",
"openstack server list",
"openstack baremetal node vif list <node> openstack port list",
"openstack server add port <instance_name> <port_name>",
"openstack server list",
"openstack baremetal node vif list <node> +--------------------------------------+ | ID | +--------------------------------------+ | 6181c089-7e33-4f1c-b8fe-2523ff431ffc | +--------------------------------------+",
"openstack port show 6181c089-7e33-4f1c-b8fe-2523ff431ffc -c mac_address -c fixed_ips +-------------+------------------------------------------------------------------------------+ | Field | Value | +-------------+------------------------------------------------------------------------------+ | fixed_ips | ip_address='192.168.24.24', subnet_id='1d11c677-5946-4733-87c3-23a9e06077aa' | | mac_address | 00:2d:28:2f:8d:95 | +-------------+------------------------------------------------------------------------------+",
"openstack server reboot overcloud-baremetal-0"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_bare_metal_provisioning_service/assembly_creating-and-managing-resources-for-bare-metal-instances |
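Putting the pieces of this chapter together, the following is a minimal end-to-end sketch: create a Networking service port, attach it to a bare-metal node as a VIF, and verify the attachment. The network name baremetal, the node name node0, and running the commands inside the openstackclient pod shell are assumptions for illustration; the individual commands are the ones shown in the chapter.

# Open the remote shell; run the remaining commands inside the pod shell
oc rsh -n openstack openstackclient

# Create the Networking service port that will serve as the VIF
PORT_ID=$(openstack port create --network baremetal vif-port-0 -f value -c id)

# Attach the VIF to the bare-metal node and confirm it is listed
openstack baremetal node vif attach node0 "$PORT_ID"
openstack baremetal node vif list node0

exit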
Chapter 33. Installing an Identity Management client using an Ansible playbook | Chapter 33. Installing an Identity Management client using an Ansible playbook Learn more about how to configure a system as an Identity Management (IdM) client by using Ansible . Configuring a system as an IdM client enrolls it into an IdM domain and enables the system to use IdM services on IdM servers in the domain. The deployment is managed by the ipaclient Ansible role. By default, the role uses the autodiscovery mode for identifying the IdM servers, domain and other settings. The role can be modified to have the Ansible playbook use the settings specified, for example in the inventory file. Prerequisites You have installed the ansible-freeipa package on the Ansible control node. You are using Ansible version 2.15 or later. You understand the general Ansible and IdM concepts. 33.1. Setting the parameters of the inventory file for the autodiscovery client installation mode To install an Identity Management (IdM) client using an Ansible playbook, configure the target host parameters in an inventory file, for example inventory : The information about the host The authorization for the task The inventory file can be in one of many formats, depending on the inventory plugins you have. The INI-like format is one of Ansible's defaults and is used in the examples below. Note To use smart cards with the graphical user interface in RHEL, ensure that you include the ipaclient_mkhomedir variable in your Ansible playbook. Procedure Open your inventory file for editing. Specify the fully-qualified hostname (FQDN) of the host to become an IdM client. The fully qualified domain name must be a valid DNS name: Only numbers, alphabetic characters, and hyphens ( - ) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. No capital letters are allowed. If the SRV records are set properly in the IdM DNS zone, the script automatically discovers all the other required values. Example of a simple inventory hosts file with only the client FQDN defined Specify the credentials for enrolling the client. The following authentication methods are available: The password of a user authorized to enroll clients . This is the default option. Use the Ansible Vault to store the password, and reference the Vault file from the playbook file, for example install-client.yml , directly: Example playbook file using principal from inventory file and password from an Ansible Vault file Less securely, provide the credentials of admin using the ipaadmin_password option in the [ipaclients:vars] section of the inventory/hosts file. Alternatively, to specify a different authorized user, use the ipaadmin_principal option for the user name, and the ipaadmin_password option for the password. The inventory/hosts inventory file and the install-client.yml playbook file can then look as follows: Example inventory hosts file Example Playbook using principal and password from inventory file The client keytab from the enrollment if it is still available. This option is available if the system was previously enrolled as an Identity Management client. To use this authentication method, uncomment the #ipaclient_keytab option, specifying the path to the file storing the keytab, for example in the [ipaclient:vars] section of inventory/hosts . A random, one-time password (OTP) to be generated during the enrollment. 
To use this authentication method, use the ipaclient_use_otp=true option in your inventory file. For example, you can uncomment the ipaclient_use_otp=true option in the [ipaclients:vars] section of the inventory/hosts file. Note that with OTP you must also specify one of the following options: The password of a user authorized to enroll clients , for example by providing a value for ipaadmin_password in the [ipaclients:vars] section of the inventory/hosts file. The admin keytab , for example by providing a value for ipaadmin_keytab in the [ipaclients:vars] section of inventory/hosts . Optional: Specify the DNS resolver using the ipaclient_configure_dns_resolve and ipaclient_dns_servers options (if available) to simplify cluster deployments. This is especially useful if your IdM deployment is using integrated DNS: An inventory file snippet specifying a DNS resolver: Note The ipaclient_dns_servers list must contain only IP addresses. Host names are not allowed. Starting with RHEL 8.9, you can also specify the ipaclient_subid: true option to have subid ranges configured for IdM users on the IdM level. Additional resources /usr/share/ansible/roles/ipaclient/README.md Managing subID ranges manually 33.2. Setting the parameters of the inventory file when autodiscovery is not possible during client installation To install an Identity Management client using an Ansible playbook, configure the target host parameters in an inventory file, for example inventory/hosts : The information about the host, the IdM server and the IdM domain or the IdM realm The authorization for the task The inventory file can be in one of many formats, depending on the inventory plugins you have. The INI-like format is one of Ansible's defaults and is used in the examples below. Note To use smart cards with the graphical user interface in RHEL, ensure that you include the ipaclient_mkhomedir variable in your Ansible playbook. Procedure Specify the fully-qualified hostname (FQDN) of the host to become an IdM client. The fully qualified domain name must be a valid DNS name: Only numbers, alphabetic characters, and hyphens ( - ) are allowed. For example, underscores are not allowed and can cause DNS failures. The host name must be all lower-case. No capital letters are allowed. Specify other options in the relevant sections of the inventory/hosts file: The FQDN of the servers in the [ipaservers] section to indicate which IdM server the client will be enrolled with One of the two following options: The ipaclient_domain option in the [ipaclients:vars] section to indicate the DNS domain name of the IdM server the client will be enrolled with The ipaclient_realm option in the [ipaclients:vars] section to indicate the name of the Kerberos realm controlled by the IdM server Example of an inventory hosts file with the client FQDN, the server FQDN and the domain defined Specify the credentials for enrolling the client. The following authentication methods are available: The password of a user authorized to enroll clients . This is the default option. Use the Ansible Vault to store the password, and reference the Vault file from the playbook file, for example install-client.yml , directly: Example playbook file using principal from inventory file and password from an Ansible Vault file Less securely, the credentials of admin to be provided using the ipaadmin_password option in the [ipaclients:vars] section of the inventory/hosts file. 
Alternatively, to specify a different authorized user, use the ipaadmin_principal option for the user name, and the ipaadmin_password option for the password. The install-client.yml playbook file can then look as follows: Example inventory hosts file Example Playbook using principal and password from inventory file The client keytab from the enrollment if it is still available: This option is available if the system was previously enrolled as an Identity Management client. To use this authentication method, uncomment the ipaclient_keytab option, specifying the path to the file storing the keytab, for example in the [ipaclient:vars] section of inventory/hosts . A random, one-time password (OTP) to be generated during the enrollment. To use this authentication method, use the ipaclient_use_otp=true option in your inventory file. For example, you can uncomment the #ipaclient_use_otp=true option in the [ipaclients:vars] section of the inventory/hosts file. Note that with OTP you must also specify one of the following options: The password of a user authorized to enroll clients , for example by providing a value for ipaadmin_password in the [ipaclients:vars] section of the inventory/hosts file. The admin keytab , for example by providing a value for ipaadmin_keytab in the [ipaclients:vars] section of inventory/hosts . Starting with RHEL 8.9, you can also specify the ipaclient_subid: true option to have subid ranges configured for IdM users on the IdM level. Additional resources /usr/share/ansible/roles/ipaclient/README.md Managing subID ranges manually 33.3. Authorization options for IdM client enrollment using an Ansible playbook You can authorize IdM client enrollment by using any of the following methods: A random, one-time password (OTP) + administrator password A random, one-time password (OTP) + an admin keytab The client keytab from the enrollment The password of a user authorized to enroll a client ( admin ) stored in an inventory file The password of a user authorized to enroll a client ( admin ) stored in an Ansible vault It is possible to have the OTP generated by an IdM administrator before the IdM client installation. In that case, you do not need any credentials for the installation other than the OTP itself. The following are sample inventory files for these methods: Table 33.1. Sample inventory files Authorization option Inventory file A random, one-time password (OTP) + administrator password A random, one-time password (OTP) This scenario assumes that the OTP was already generated by an IdM admin before the installation. A random, one-time password (OTP) + an admin keytab The client keytab from the enrollment Password of an admin user stored in an inventory file Password of an admin user stored in an Ansible vault file If you are using the password of an admin user stored in an Ansible vault file, the corresponding playbook file must have an additional vars_files directive: Table 33.2. User password stored in an Ansible vault Inventory file Playbook file In all the other authorization scenarios described above, a basic playbook file could look as follows: Note As of RHEL 8.8, in the two OTP authorization scenarios described above, the requesting of the administrator's TGT by using the kinit command occurs on the first specified or discovered IdM server. Therefore, no additional modification of the Ansible control node is required. Before RHEL 8.8, the krb5-workstation package was required on the control node. 33.4. 
Deploying an IdM client using an Ansible playbook Complete this procedure to use an Ansible playbook to deploy an IdM client in your IdM environment. Prerequisites The managed node is a Red Hat Enterprise Linux 8 system with a static IP address and a working package manager. You have set the parameters of the IdM client deployment to correspond to your deployment scenario: Setting the parameters of the inventory file for the autodiscovery client installation mode Setting the parameters of the inventory file when autodiscovery is not possible during client installation Procedure Run the Ansible playbook: 33.5. Using the one-time password method in Ansible to install an IdM client You can generate a one-time password (OTP) for a new host in Identity Management (IdM) and use it to enroll a system into the IdM domain. This procedure describes how to use Ansible to install an IdM client after generating an OTP for it on another IdM host. This method of installing an IdM client is convenient if two system administrators with different privileges exist in your organisation: One that has the credentials of an IdM administrator. Another that has the required Ansible credentials, including root access to the host to become an IdM client. The IdM administrator performs the first part of the procedure in which the OTP password is generated. The Ansible administrator performs the remaining part of the procedure in which the OTP is used to install an IdM client. Prerequisites You have the IdM admin credentials or at least the Host Enrollment privilege and a permission to add DNS records in IdM. You have configured a user escalation method on the Ansible managed node to allow you to install an IdM client. If your Ansible control node is running on RHEL 8.7 or earlier, you must be able to install packages on your Ansible control node. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package on the Ansible controller. You have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The managed node is a Red Hat Enterprise Linux 8 system with a static IP address and a working package manager. Procedure SSH to an IdM host as an IdM user with a role that has the Host Enrollment privilege and a permission to add DNS records: Generate an OTP for the new client: The --ip-address= <your_host_ip_address> option adds the host to IdM DNS with the specified IP address. Exit the IdM host: On the ansible controller, update the inventory file to include the random password: If your ansible controller is running RHEL 8.7 or earlier, install the kinit utility provided by the krb5-workstation package: Run the playbook to install the client: 33.6. Testing an Identity Management client after Ansible installation The command line (CLI) informs you that the ansible-playbook command was successful, but you can also do your own test. To test that the Identity Management client can obtain information about users defined on the server, check that you are able to resolve a user defined on the server. For example, to check the default admin user: To test that authentication works correctly, su - as another already existing IdM user: 33.7. Uninstalling an IdM client using an Ansible playbook Complete this procedure to use an Ansible playbook to uninstall your host as an IdM client. Prerequisites IdM administrator credentials. 
The managed node is a Red Hat Enterprise Linux 8 system with a static IP address. Procedure Run the Ansible playbook with the instructions to uninstall the client, for example uninstall-client.yml : Important The uninstallation of the client only removes the basic IdM configuration from the host but leaves the configuration files on the host in case you decide to re-install the client. In addition, the uninstallation has the following limitations: It does not remove the client host entry from the IdM LDAP server. The uninstallation only unenrolls the host. It does not remove any services residing on the client from IdM. It does not remove the DNS entries for the client from the IdM server. It does not remove the old principals for keytabs other than /etc/krb5.keytab . Note that the uninstallation does remove all certificates that were issued for the host by the IdM CA. Additional resources Uninstalling an IdM client | [
"[ipaclients] client.idm.example.com [...]",
"- name: Playbook to configure IPA clients with username/password hosts: ipaclients become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaclient state: present",
"[...] [ipaclients:vars] ipaadmin_principal=my_admin ipaadmin_password=Secret123",
"- name: Playbook to unconfigure IPA clients hosts: ipaclients become: true roles: - role: ipaclient state: true",
"[...] [ipaclients:vars] ipaadmin_password: \"{{ ipaadmin_password }}\" ipaclient_domain=idm.example.com ipaclient_configure_dns_resolver=true ipaclient_dns_servers=192.168.100.1",
"[ipaclients] client.idm.example.com [ipaservers] server.idm.example.com [ipaclients:vars] ipaclient_domain=idm.example.com [...]",
"- name: Playbook to configure IPA clients with username/password hosts: ipaclients become: true vars_files: - playbook_sensitive_data.yml roles: - role: ipaclient state: present",
"[...] [ipaclients:vars] ipaadmin_principal=my_admin ipaadmin_password=Secret123",
"- name: Playbook to unconfigure IPA clients hosts: ipaclients become: true roles: - role: ipaclient state: true",
"[ipaclients:vars] ipaadmin_password=Secret123 ipaclient_use_otp=true",
"[ipaclients:vars] ipaclient_otp=<W5YpARl=7M.>",
"[ipaclients:vars] ipaadmin_keytab=/root/admin.keytab ipaclient_use_otp=true",
"[ipaclients:vars] ipaclient_keytab=/root/krb5.keytab",
"[ipaclients:vars] ipaadmin_password=Secret123",
"[ipaclients:vars] [...]",
"[ipaclients:vars] [...]",
"- name: Playbook to configure IPA clients hosts: ipaclients become: true vars_files: - ansible_vault_file.yml roles: - role: ipaclient state: present",
"- name: Playbook to configure IPA clients hosts: ipaclients become: true roles: - role: ipaclient state: true",
"ansible-playbook -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/install-client.yml",
"ssh [email protected]",
"[admin@server ~]USD ipa host-add client.idm.example.com --ip-address=172.25.250.11 --random -------------------------------------------------- Added host \"client.idm.example.com\" -------------------------------------------------- Host name: client.idm.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.idm.example.com",
"exit logout Connection to server.idm.example.com closed.",
"[...] [ipaclients] client.idm.example.com [ipaclients:vars] ipaclient_domain=idm.example.com ipaclient_otp=W5YpARl=7M.n [...]",
"sudo dnf install krb5-workstation",
"ansible-playbook -i inventory install-client.yml",
"[user@client1 ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client1 ~]USD su - idm_user Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0 [idm_user@client1 ~]USD",
"ansible-playbook -v -i ~/MyPlaybooks/inventory ~/MyPlaybooks/uninstall-client.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/installing-an-identity-management-client-using-an-ansible-playbook_installing-identity-management |
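As a worked illustration of the OTP flow above, the following sketch writes a minimal inventory and runs the install-client.yml playbook. The host name, domain name, and OTP value are taken from the chapter's examples (the OTP stands in for the password generated with ipa host-add --random); combining them into a here-document is an assumption for illustration.

# Write a minimal inventory for OTP-based enrollment (values are placeholders from the examples above)
cat > inventory <<'EOF'
[ipaclients]
client.idm.example.com

[ipaclients:vars]
ipaclient_domain=idm.example.com
ipaclient_otp=W5YpARl=7M.n
EOF

# Run the client installation playbook against that inventory
ansible-playbook -v -i inventory install-client.yml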
5.4. The IdM Web UI | 5.4. The IdM Web UI The Identity Management web UI is a web application for IdM administration. It has most of the capabilities of the ipa command-line utility. Therefore, the users can choose whether they want to manage IdM from the UI or from the command line. Note Management operations available to the logged-in user depend on the user's access rights. For the admin user and other users with administrative privileges, all management tasks are available. For regular users, only a limited set of operations related to their own user account is available. 5.4.1. Supported Web Browsers Identity Management supports the following browsers for connecting to the web UI: Mozilla Firefox 38 and later Google Chrome 46 and later 5.4.2. Accessing the Web UI and Authenticating The web UI can be accessed both from IdM server and client machines, as well as from machines outside of the IdM domain. However, to access the UI from a non-domain machine, you must first configure the non-IdM system to be able to connect to the IdM Kerberos domain; see Section 5.4.4, "Configuring an External System for Kerberos Authentication to the Web UI" for more details. 5.4.2.1. Accessing the Web UI To access the web UI, type the IdM server URL into the browser address bar: This opens the IdM web UI login screen in your browser. Figure 5.1. Web UI Login Screen 5.4.2.2. Available Login Methods The user can authenticate to the web UI in the following ways: With an active Kerberos ticket If the user has a valid TGT obtained with the kinit utility, clicking Login automatically authenticates the user. Note that the browser must be configured properly to support Kerberos authentication. For information on obtaining a Kerberos TGT, see Section 5.2, "Logging into IdM Using Kerberos" . For information on configuring the browser, see Section 5.4.3, "Configuring the Browser for Kerberos Authentication" . By providing user name and password To authenticate using a user name and password, enter the user name and password on the web UI login screen. IdM also supports one-time password (OTP) authentication. For more information, see Section 22.3, "One-Time Passwords" . With a smart card For more information, see Section 23.6, "Authenticating to the Identity Management Web UI with a Smart Card" . After the user authenticates successfully, the IdM management window opens. Figure 5.2. The IdM Web UI Layout 5.4.2.3. Web UI Session Length When a user logged in to the IdM web UI using a user name and password, the session length is the same as the expiration period of the Kerberos ticket obtained during the login operation. 5.4.2.4. Authenticating to the IdM Web UI as an AD User Active Directory (AD) users can log in to the IdM web UI with their user name and password. In the web UI, AD users can perform only a limited set of operations related to their own user account, unlike IdM users who can perform management operations related to their administrative privileges. To enable web UI login for AD users, the IdM administrator must define an ID override for each AD user in the Default Trust View. For example: For details on ID views in AD, see Using ID Views in Active Directory Environments in the Windows Integration Guide . 5.4.3. Configuring the Browser for Kerberos Authentication To enable authentication with Kerberos credentials, you must configure your browser to support Kerberos negotiation for accessing the IdM domain. 
Note that if your browser is not configured properly for Kerberos authentication, an error message appears after clicking Login on the IdM web UI login screen. Figure 5.3. Kerberos Authentication Error You can configure your browser for Kerberos authentication in three ways: Automatically from the IdM web UI. This option is only available for Firefox. See the section called "Automatic Firefox Configuration in the Web UI" for details. Automatically from the command line during the IdM client installation. This option is only available for Firefox. See the section called "Automatic Firefox Configuration from the Command Line" for details. Manually in the Firefox configuration settings. This option is available for all supported browsers. See the section called "Manual Browser Configuration" for details. Note The System-Level Authentication Guide includes a Troubleshooting Firefox Kerberos Configuration . If Kerberos authentication is not working as expected, see this troubleshooting guide for more advice. Automatic Firefox Configuration in the Web UI To automatically configure Firefox from the IdM web UI: Click the link for browser configuration on the web UI login screen. Figure 5.4. Link to Configuring the Browser in the Web UI Choose the link for Firefox configuration to open the Firefox configuration page. Figure 5.5. Link to the Firefox Configuration Page Follow the steps on the Firefox configuration page. Automatic Firefox Configuration from the Command Line Firefox can be configured from the command line during IdM client installation. To do this, use the --configure-firefox option when installing the IdM client with the ipa-client-install utility: The --configure-firefox option creates a global configuration file with default Firefox settings that enable Kerberos for single sign-on (SSO). Manual Browser Configuration To manually configure your browser: Click the link for browser configuration on the web UI login screen. Figure 5.6. Link to Configuring the Browser in the Web UI Choose the link for manual browser configuration. Figure 5.7. Link to the Manual Configuration Page Look for the instructions to configure your browser and follow the steps. 5.4.4. Configuring an External System for Kerberos Authentication to the Web UI To enable Kerberos authentication to the web UI from a system that is not a member of the IdM domain, you must define an IdM-specific Kerberos configuration file on the external machine. Enabling Kerberos authentication on external systems is especially useful when your infrastructure includes multiple realms or overlapping domains. To create the Kerberos configuration file: Copy the /etc/krb5.conf file from the IdM server to the external machine. For example: Warning Do not overwrite the existing krb5.conf file on the external machine. On the external machine, set the terminal session to use the copied IdM Kerberos configuration file: Configure the browser on the external machine as described in Section 5.4.3, "Configuring the Browser for Kerberos Authentication" . Users on the external system can now use the kinit utility to authenticate against the IdM server domain. 5.4.5. Proxy Servers and Port Forwarding in the Web UI Using proxy servers to access the web UI does not require any additional configuration in IdM. Port forwarding is not supported with the IdM server. However, because it is possible to use proxy servers, an operation similar to port forwarding can be configured using proxy forwarding with OpenSSH and the SOCKS option. 
This can be configured using the -D option of the ssh utility; for more information on using -D, see the ssh(1) man page. | [
"https://server.example.com",
"[admin@server ~]USD ipa idoverrideuser-add 'Default Trust View' [email protected]",
"ipa-client-install --configure-firefox",
"scp /etc/krb5.conf root@ externalmachine.example.com :/etc/krb5_ipa.conf",
"export KRB5_CONFIG=/etc/krb5_ipa.conf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/using-the-ui |
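As a compact illustration of the external-system setup in Section 5.4.4, the following sketch shows the steps end to end. The hostnames follow the chapter's examples; using the admin account for the test ticket is an assumption, and any valid IdM user works.

# On the IdM server: copy the Kerberos configuration to the external machine
scp /etc/krb5.conf root@externalmachine.example.com:/etc/krb5_ipa.conf

# On the external machine: point the session at the copied file and obtain a ticket
export KRB5_CONFIG=/etc/krb5_ipa.conf
kinit admin

# Then open https://server.example.com in a Kerberos-enabled browser and click Login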
Chapter 44. Technology Previews | Chapter 44. Technology Previews containernetworking-plugins now available The Container Network Interface (CNI) project consists of a specification and libraries for writing plug-ins for configuring network interfaces in Linux containers, along with a number of supported plug-ins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. The containernetworking-plugins package is now available as a Technology Preview. It is a dependency of the podman tool, and will remain a Technology Preview until podman becomes fully supported. LiveFS now available Previously, layering packages on Atomic Host required a reboot for the software to be available on the system. The LiveFS feature removes the need to reboot, making layered packages available instantly. See Package Layering for more information and usage instructions. Identity Management in a container Identity Management (IdM) in a container is provided as a Technology Preview. To install this new image, use the atomic install --hostname <IPA_server_hostname> rhel7/ipa-server command. In addition to --hostname, the atomic install command supports the following keywords for specifying the style of the container to be run: net-host - share the host's network with the container publish - publish all ports to the host's interfaces cap-add - add a capability to the container You can also use the atomic install rhel7/ipa-server help command to list these keywords and their usage. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/technology_previews
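For the IdM container preview described above, a minimal sketch of the install commands looks as follows. The hostname ipa.example.com is a placeholder; both command forms are taken from the chapter text.

# Install the IdM server container image, setting the container's hostname
atomic install --hostname ipa.example.com rhel7/ipa-server

# List the supported keywords (net-host, publish, cap-add) and their usage
atomic install rhel7/ipa-server help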
Chapter 25. Load balancing on RHOSP | Chapter 25. Load balancing on RHOSP 25.1. Using the Octavia OVN load balancer provider driver with Kuryr SDN If your OpenShift Container Platform cluster uses Kuryr and was installed on a Red Hat OpenStack Platform (RHOSP) 13 cloud that was later upgraded to RHOSP 16, you can configure it to use the Octavia OVN provider driver. Important Kuryr replaces existing load balancers after you change provider drivers. This process results in some downtime. Prerequisites Install the RHOSP CLI, openstack . Install the OpenShift Container Platform CLI, oc . Verify that the Octavia OVN driver on RHOSP is enabled. Tip To view a list of available Octavia drivers, on a command line, enter openstack loadbalancer provider list . The ovn driver is displayed in the command's output. Procedure To change from the Octavia Amphora provider driver to Octavia OVN: Open the kuryr-config ConfigMap. On a command line, enter: USD oc -n openshift-kuryr edit cm kuryr-config In the ConfigMap, delete the line that contains kuryr-octavia-provider: default . For example: ... kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1 ... 1 Delete this line. The cluster will regenerate it with ovn as the value. Wait for the Cluster Network Operator to detect the modification and to redeploy the kuryr-controller and kuryr-cni pods. This process might take several minutes. Verify that the kuryr-config ConfigMap annotation is present with ovn as its value. On a command line, enter: USD oc -n openshift-kuryr edit cm kuryr-config The ovn provider value is displayed in the output: ... kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn ... Verify that RHOSP recreated its load balancers. On a command line, enter: USD openstack loadbalancer list | grep amphora A single Amphora load balancer is displayed. For example: a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora Search for ovn load balancers by entering: USD openstack loadbalancer list | grep ovn The remaining load balancers of the ovn type are displayed. For example: 2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn 25.2. Scaling clusters for application traffic by using Octavia OpenShift Container Platform clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create. If your cluster uses Kuryr, the Cluster Network Operator created an internal Octavia load balancer at deployment. You can use this load balancer for application network scaling. If your cluster does not use Kuryr, you must create your own Octavia load balancer to use it for application network scaling. 25.2.1. Scaling clusters by using Octavia If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it. 
Prerequisites Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. Procedure From a command line, create an Octavia load balancer that uses the Amphora driver: USD openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet> You can use a name of your choice instead of API_OCP_CLUSTER . After the load balancer becomes active, create listeners: USD openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER Note To view the status of the load balancer, enter openstack loadbalancer list . Create a pool that uses the round robin algorithm and has session persistence enabled: USD openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS To ensure that control plane machines are available, create a health monitor: USD openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443 Add the control plane machines as members of the load balancer pool: USD for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done Optional: To reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP Your cluster now uses Octavia for load balancing. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 25.2.2. Scaling clusters that use Kuryr by using Octavia If your cluster uses Kuryr, associate the API floating IP address of your cluster with the pre-existing Octavia load balancer. Prerequisites Your OpenShift Container Platform cluster uses Kuryr. Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment. Procedure Optional: From a command line, to reuse the cluster API floating IP address, unset it: USD openstack floating ip unset USDAPI_FIP Add either the unset API_FIP or a new address to the created load balancer VIP: USD openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP Your cluster now uses Octavia for load balancing. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 25.3. Scaling for ingress traffic by using RHOSP Octavia You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr. Prerequisites Your OpenShift Container Platform cluster uses Kuryr. Octavia is available on your RHOSP deployment. Procedure To copy the current internal router service, on a command line, enter: USD oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml In the file external_router.yaml , change the values of metadata.name and spec.type to LoadBalancer . 
Example router file apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2 1 Ensure that this value is descriptive, like router-external-default . 2 Ensure that this value is LoadBalancer . Note You can delete timestamps and other information that is irrelevant to load balancing. From a command line, create a service from the external_router.yaml file: USD oc apply -f external_router.yaml Verify that the external IP address of the service is the same as the one that is associated with the load balancer: On a command line, retrieve the external IP address of the service: USD oc -n openshift-ingress get svc Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h Retrieve the IP address of the load balancer: USD openstack loadbalancer list | grep router-external Example output | 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia | Verify that the addresses you retrieved in the steps are associated with each other in the floating IP list: USD openstack floating ip list | grep 172.30.235.33 Example output | e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c | You can now use the value of EXTERNAL-IP as the new Ingress address. Note If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM). You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck. 25.4. Configuring an external load balancer You can configure an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) to use an external load balancer in place of the default load balancer. Prerequisites On your load balancer, TCP over ports 6443, 443, and 80 must be available to any users of your system. Load balance the API port, 6443, between each of the control plane nodes. Load balance the application ports, 443 and 80, between all of the compute nodes. On your load balancer, port 22623, which is used to serve ignition startup configurations to nodes, is not exposed outside of the cluster. Your load balancer must be able to access every machine in your cluster. Methods to allow this access include: Attaching the load balancer to the cluster's machine subnet. Attaching floating IP addresses to machines that use the load balancer. Procedure Enable access to the cluster from your load balancer on ports 6443, 443, and 80. As an example, note this HAProxy configuration: A section of a sample HAProxy configuration ... 
listen my-cluster-api-6443 bind 0.0.0.0:6443 mode tcp balance roundrobin server my-cluster-master-2 192.0.2.2:6443 check server my-cluster-master-0 192.0.2.3:6443 check server my-cluster-master-1 192.0.2.1:6443 check listen my-cluster-apps-443 bind 0.0.0.0:443 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.6:443 check server my-cluster-worker-1 192.0.2.5:443 check server my-cluster-worker-2 192.0.2.4:443 check listen my-cluster-apps-80 bind 0.0.0.0:80 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.7:80 check server my-cluster-worker-1 192.0.2.9:80 check server my-cluster-worker-2 192.0.2.8:80 check Add records to your DNS server for the cluster API and apps over the load balancer. For example: <load_balancer_ip_address> api.<cluster_name>.<base_domain> <load_balancer_ip_address> apps.<cluster_name>.<base_domain> From a command line, use curl to verify that the external load balancer and DNS configuration are operational. Verify that the cluster API is accessible: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that cluster applications are accessible: Note You can also verify application accessibility by opening the OpenShift Container Platform console in a web browser. USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, you receive an HTTP response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private | [
"oc -n openshift-kuryr edit cm kuryr-config",
"kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1",
"oc -n openshift-kuryr edit cm kuryr-config",
"kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn",
"openstack loadbalancer list | grep amphora",
"a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora",
"openstack loadbalancer list | grep ovn",
"2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn",
"openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>",
"openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER",
"openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS",
"openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443",
"for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP",
"openstack floating ip unset USDAPI_FIP",
"openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP",
"oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml",
"apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2",
"oc apply -f external_router.yaml",
"oc -n openshift-ingress get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h",
"openstack loadbalancer list | grep router-external",
"| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |",
"openstack floating ip list | grep 172.30.235.33",
"| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |",
"listen my-cluster-api-6443 bind 0.0.0.0:6443 mode tcp balance roundrobin server my-cluster-master-2 192.0.2.2:6443 check server my-cluster-master-0 192.0.2.3:6443 check server my-cluster-master-1 192.0.2.1:6443 check listen my-cluster-apps-443 bind 0.0.0.0:443 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.6:443 check server my-cluster-worker-1 192.0.2.5:443 check server my-cluster-worker-2 192.0.2.4:443 check listen my-cluster-apps-80 bind 0.0.0.0:80 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.7:80 check server my-cluster-worker-1 192.0.2.9:80 check server my-cluster-worker-2 192.0.2.8:80 check",
"<load_balancer_ip_address> api.<cluster_name>.<base_domain> <load_balancer_ip_address> apps.<cluster_name>.<base_domain>",
"curl https://<loadbalancer_ip_address>:6443/version --insecure",
"{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }",
"curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure",
"HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/load-balancing-openstack |
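The curl checks above can also be scripted so that both endpoints are verified in one pass. The following Python sketch is an illustration only and is not part of the original procedure: the API and console hostnames are placeholder assumptions that you would replace with your own cluster values, and certificate verification is disabled to mirror curl --insecure.

# Illustrative sketch only: verify the external load balancer endpoints the same
# way the curl commands above do. Hostnames are placeholder assumptions.
import json
import ssl
import urllib.request

API_URL = "https://api.mycluster.example.com:6443/version"                   # assumed placeholder
CONSOLE_URL = "http://console-openshift-console.apps.mycluster.example.com"  # assumed placeholder

# Mirror `curl --insecure`: skip certificate verification for HTTPS endpoints.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

def check_api() -> None:
    # A healthy API endpoint returns a JSON version object, as shown above.
    with urllib.request.urlopen(API_URL, context=insecure, timeout=10) as resp:
        body = json.load(resp)
        print("API reachable:", body.get("gitVersion", "unknown version"))

def check_console() -> None:
    # The console route answers with a 302 redirect followed by 200 OK;
    # urllib follows the redirect automatically.
    with urllib.request.urlopen(CONSOLE_URL, context=insecure, timeout=10) as resp:
        print("Console reachable, final HTTP status:", resp.status)

if __name__ == "__main__":
    check_api()
    check_console()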
Chapter 4. Hosting Virtual Machine Images on Red Hat Gluster Storage volumes | Chapter 4. Hosting Virtual Machine Images on Red Hat Gluster Storage volumes Red Hat Gluster Storage provides a POSIX-compatible file system to store virtual machine images in Red Hat Gluster Storage volumes. This chapter describes how to configure volumes using the command line interface, and how to prepare Red Hat Gluster Storage servers for virtualization using Red Hat Virtualization Manager. 4.1. Configuring Volumes Using the Command Line Interface Red Hat recommends configuring volumes before starting them. For information on creating volumes, see Red Hat Gluster Storage Volumes in the Red Hat Gluster Storage Administration Guide : https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-red_hat_storage_volumes. . Procedure 4.1. Configuring Volumes Using the Command Line Interface Configure the rhgs-random-io tuned profile Install the tuned tuning daemon and configure Red Hat Gluster Storage servers to use the rhgs-random-io profile: For more information on available tuning profiles, refer to the tuned-adm man page, or see the Red Hat Gluster Storage 3.5 Administration Guide : https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/ . Note When you upgrade, a new virt file may be created in /var/lib/glusterd/groups/virt.rpmnew . Apply the new virt file to the existing volumes by renaming the virt.rpmnew file to virt . Assign volumes to virt group Assign volumes that store virtual machine images to the virt volume group to apply the settings in the virt profile. This has the same effect as the Optimize for Virt Store option in the management console. See Appendix A, The virt group profile for more information about this configuration. Important Volumes in the virt group must only be used for storing machine images, and must only be accessed using the native FUSE client. (Recommended) Configure improved self-heal performance Run the following command to improve the performance of volume self-heal operations. Allow KVM and VDSM brick access Set the brick permissions for vdsm and kvm . If you do not set the required brick permissions, creation of virtual machines fails. Set the user and group permissions using the following commands: If you are using QEMU/KVM as a hypervisor, set the user and group permissions using the following commands: See Also: Section 5.4, "Optimizing Red Hat Gluster Storage Volumes for Virtual Machine Images" | [
"yum install tuned tuned-adm profile rhgs-random-io",
"gluster volume set VOLNAME group virt",
"gluster volume heal volname cluster.granular-entry-heal enable",
"gluster volume set VOLNAME storage.owner-uid 36 gluster volume set VOLNAME storage.owner-gid 36",
"gluster volume set VOLNAME storage.owner-uid 107 gluster volume set VOLNAME storage.owner-gid 107"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/chap-Hosting_Virtual_Machine_Images_on_Red_Hat_Storage_volumes |
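The volume options above are applied one gluster command at a time. As a hedged illustration only, not taken from this guide, the following Python sketch wraps the same CLI calls so they can be applied to a volume in one step; the volume name vmstore is a hypothetical placeholder, and the UID/GID values follow the procedure text (36 for vdsm and kvm, 107 for QEMU/KVM).

# Illustrative sketch only: apply the virt group profile and brick ownership
# settings described above by shelling out to the gluster CLI.
import subprocess

VOLNAME = "vmstore"  # hypothetical volume name; replace with your own

def gluster_volume_set(volume: str, key: str, value: str) -> None:
    # Equivalent to: gluster volume set <volume> <key> <value>
    subprocess.run(["gluster", "volume", "set", volume, key, value], check=True)

def optimize_for_virt(volume: str, owner_uid: str = "36", owner_gid: str = "36") -> None:
    # Assign the volume to the virt group (same effect as "Optimize for Virt Store").
    gluster_volume_set(volume, "group", "virt")
    # Brick ownership: 36/36 for vdsm and kvm, or 107/107 for QEMU/KVM.
    gluster_volume_set(volume, "storage.owner-uid", owner_uid)
    gluster_volume_set(volume, "storage.owner-gid", owner_gid)

if __name__ == "__main__":
    optimize_for_virt(VOLNAME)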
Chapter 4. Configuring client applications for connecting to a Kafka cluster | Chapter 4. Configuring client applications for connecting to a Kafka cluster To connect to a Kafka cluster, a client application must be configured with a minimum set of properties that identify the brokers and enable a connection. Additionally, you need to add a serializer/deserializer mechanism to convert messages into or out of the byte array format used by Kafka. When developing a client, you begin by adding an initial connection to your Kafka cluster, which is used to discover all available brokers. When you have established a connection, you can begin consuming messages from Kafka topics or producing messages to them. Although not required, a unique client ID is recommended so that you can identify your clients in logs and metrics collection. You can configure the properties in a properties file. Using a properties file means you can modify the configuration without recompiling the code. For example, you can load the properties in a Java client using the following code: Loading configuration properties into a client Properties props = new Properties(); try (InputStream propStream = Files.newInputStream(Paths.get(filename))) { props.load(propStream); } You can also add the properties directly to the code in a configuration object. For example, you can use the setProperty() method for a Java client application. Adding properties directly is a useful option when you only have a small number of properties to configure. 4.1. Basic producer client configuration When you develop a producer client, configure the following: A connection to your Kafka cluster A serializer to transform message keys into bytes for the Kafka broker A serializer to transform message values into bytes for the Kafka broker You might also add a compression type in case you want to send and store compressed messages. Basic producer client configuration properties client.id = my-producer-id 1 bootstrap.servers = my-cluster-kafka-bootstrap:9092 2 key.serializer = org.apache.kafka.common.serialization.StringSerializer 3 value.serializer = org.apache.kafka.common.serialization.StringSerializer 4 1 The logical name for the client. 2 Bootstrap address for the client to be able to make an initial connection to the Kafka cluster. 3 Serializer to transform message keys into bytes before being sent to the Kafka broker. 4 Serializer to transform message values into bytes before being sent to the Kafka broker. Adding producer client configuration directly to the code Properties props = new Properties(); props.setProperty(ProducerConfig.CLIENT_ID_CONFIG, "my-producer-id"); props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092"); props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); KafkaProducer<String, String> producer = new KafkaProducer<>(props); The KafkaProducer specifies string key and value types for the messages it sends. The serializers used must be able to convert the key and values from the specified type into bytes before sending them to Kafka. 4.2. 
Basic consumer client configuration When you develop a consumer client, configure the following: A connection to your Kafka cluster A deserializer to transform the bytes fetched from the Kafka broker into message keys that can be understood by the client application A deserializer to transform the bytes fetched from the Kafka broker into message values that can be understood by the client application Typically, you also add a consumer group ID to associate the consumer with a consumer group. A consumer group is a logical entity for distributing the processing of a large data stream from one or more topics to parallel consumers. Consumers are grouped using a group.id , allowing messages to be spread across the members. In a given consumer group, each topic partition is read by a single consumer. A single consumer can handle many partitions. For maximum parallelism, create one consumer for each partition. If there are more consumers than partitions, some consumers remain idle, ready to take over in case of failure. Basic consumer client configuration properties client.id = my-consumer-id 1 group.id = my-group-id 2 bootstrap.servers = my-cluster-kafka-bootstrap:9092 3 key.deserializer = org.apache.kafka.common.serialization.StringDeserializer 4 value.deserializer = org.apache.kafka.common.serialization.StringDeserializer 5 1 The logical name for the client. 2 A group ID for the consumer to be able to join a specific consumer group. 3 Bootstrap address for the client to be able to make an initial connection to the Kafka cluster. 4 Deserializer to transform the bytes fetched from the Kafka broker into message keys. 5 Deserializer to transform the bytes fetched from the Kafka broker into message values. Adding consumer client configuration directly to the code Properties props = new Properties(); props.setProperty(ConsumerConfig.CLIENT_ID_CONFIG, "my-consumer-id"); props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "my-group-id"); props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092"); props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props); The KafkaConsumer specifies string key and value types for the messages it receives. The deserializers used must be able to convert the bytes received from Kafka into the specified types. Note Each consumer group must have a unique group.id . If you restart a consumer with the same group.id , it resumes consuming messages from where it left off before it was stopped. | [
"Properties props = new Properties(); try (InputStream propStream = Files.newInputStream(Paths.get(filename))) { props.load(propStream); }",
"client.id = my-producer-id 1 bootstrap.servers = my-cluster-kafka-bootstrap:9092 2 key.serializer = org.apache.kafka.common.serialization.StringSerializer 3 value.serializer = org.apache.kafka.common.serialization.StringSerializer 4",
"Properties props = new Properties(); props.setProperty(ProducerConfig.CLIENT_ID_CONFIG, \"my-producer-id\"); props.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"my-cluster-kafka-bootstrap:9092\"); props.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); props.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); KafkaProducer<String, String> producer = new KafkaProducer<>(properties);",
"client.id = my-consumer-id 1 group.id = my-group-id 2 bootstrap.servers = my-cluster-kafka-bootstrap:9092 3 key.deserializer = org.apache.kafka.common.serialization.StringDeserializer 4 value.deserializer = org.apache.kafka.common.serialization.StringDeserializer 5",
"Properties props = new Properties(); props.setProperty(ConsumerConfig.CLIENT_ID_CONFIG, \"my-consumer-id\"); props.setProperty(ConsumerConfig.GROUP_ID_CONFIG, \"my-group-id\"); props.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, \"my-cluster-kafka-bootstrap:9092\"); props.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); props.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/developing_kafka_client_applications/con-client-dev-config-basics-str |
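Because the Java snippets above load configuration from a properties file, it can help to see how the same key=value format maps onto a plain dictionary. The following Python sketch is an illustration under stated assumptions only: the file name client.properties is hypothetical, the parser is deliberately minimal, and this is not part of the Streams for Apache Kafka client API.

# Illustrative sketch only: parse a Java-style .properties file, such as the
# producer or consumer examples above, into a plain dictionary.
from pathlib import Path

def load_properties(path: str) -> dict[str, str]:
    props: dict[str, str] = {}
    for raw_line in Path(path).read_text().splitlines():
        line = raw_line.strip()
        # Skip blank lines and comments (# or ! in Java properties files).
        if not line or line.startswith(("#", "!")):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

if __name__ == "__main__":
    # "client.properties" is a hypothetical file containing entries such as
    # bootstrap.servers = my-cluster-kafka-bootstrap:9092
    config = load_properties("client.properties")
    for required in ("bootstrap.servers", "key.serializer", "value.serializer"):
        if required not in config:
            print(f"Missing required producer property: {required}")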
Configuring load balancing as a service | Configuring load balancing as a service Red Hat OpenStack Platform 17.1 Managing network traffic across the data plane using the Load-balancing service (octavia) OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_load_balancing_as_a_service/index |
9.15.5. Recommended Partitioning Scheme | 9.15.5. Recommended Partitioning Scheme 9.15.5.1. x86, AMD64, and Intel 64 systems We recommend that you create the following partitions for x86, AMD64, and Intel 64 systems : A swap partition A /boot partition A / partition A home partition A /boot/efi partition (EFI System Partition) - only on systems with UEFI firmware A swap partition (at least 256 MB) - Swap partitions support virtual memory: data is written to a swap partition when there is not enough RAM to store the data your system is processing. In years past, the recommended amount of swap space increased linearly with the amount of RAM in the system. Modern systems often include hundreds of gigabytes of RAM, however. As a consequence, recommended swap space is considered a function of system memory workload, not system memory. The following table provides the recommended size of a swap partition depending on the amount of RAM in your system and whether you want sufficient memory for your system to hibernate. The recommended swap partition size is established automatically during installation. To allow for hibernation, however, you will need to edit the swap space in the custom partitioning stage. Important Recommendations in the table below are especially important on systems with low memory (1 GB and less). Failure to allocate sufficient swap space on these systems may cause issues such as instability or even render the installed system unbootable. Table 9.2. Recommended System Swap Space Amount of RAM in the system Recommended swap space Recommended swap space if allowing for hibernation ⩽ 2GB 2 times the amount of RAM 3 times the amount of RAM > 2GB - 8GB Equal to the amount of RAM 2 times the amount of RAM > 8GB - 64GB At least 4 GB 1.5 times the amount of RAM > 64GB At least 4 GB Hibernation not recommended At the border between each range listed above (for example, a system with 2GB, 8GB, or 64GB of system RAM), discretion can be exercised with regard to chosen swap space and hibernation support. If your system resources allow for it, increasing the swap space may lead to better performance. Note that distributing swap space over multiple storage devices - particularly on systems with fast drives, controllers and interfaces - also improves swap space performance. Note Swap space size recommendations issued for Red Hat Enterprise Linux 6.0, 6.1, and 6.2 differed from the current recommendations, which were first issued with the release of Red Hat Enterprise Linux 6.3 in June 2012 and did not account for hibernation space. Automatic installations of these earlier versions of Red Hat Enterprise Linux 6 still generate a swap space in line with these superseded recommendations. However, manually selecting a swap space size in line with the newer recommendations issued for Red Hat Enterprise Linux 6.3 is advisable for optimal performance. A /boot/ partition (250 MB) The partition mounted on /boot/ contains the operating system kernel (which allows your system to boot Red Hat Enterprise Linux), along with files used during the bootstrap process. For most users, a 250 MB boot partition is sufficient. Important The /boot and / (root) partition in Red Hat Enterprise Linux 6.9 can only use the ext2, ext3, and ext4 (recommended) file systems. You cannot use any other file system for this partition, such as Btrfs, XFS, or VFAT. Other partitions, such as /home , can use any supported file system, including Btrfs and XFS (if available). 
See the following article on the Red Hat Customer Portal for additional information: https://access.redhat.com/solutions/667273 . Warning Note that normally the /boot partition is created automatically by the installer. However, if the / (root) partition is larger than 2 TB and (U)EFI is used for booting, you need to create a separate /boot partition that is smaller than 2 TB to boot the machine successfully. Note If your hard drive is more than 1024 cylinders (and your system was manufactured more than two years ago), you may need to create a /boot/ partition if you want the / (root) partition to use all of the remaining space on your hard drive. Note If you have a RAID card, be aware that some BIOS types do not support booting from the RAID card. In cases such as these, the /boot/ partition must be created on a partition outside of the RAID array, such as on a separate hard drive. A root partition (3.0 GB - 5.0 GB) - this is where " / " (the root directory) is located. In this setup, all files (except those stored in /boot ) are on the root partition. A 3.0 GB partition allows you to install a minimal installation, while a 5.0 GB root partition lets you perform a full installation, choosing all package groups. Important The /boot and / (root) partition in Red Hat Enterprise Linux 6.9 can only use the ext2, ext3, and ext4 (recommended) file systems. You cannot use any other file system for this partition, such as Btrfs, XFS, or VFAT. Other partitions, such as /home , can use any supported file system, including Btrfs and XFS (if available). See the following article on the Red Hat Customer Portal for additional information: https://access.redhat.com/solutions/667273 . Important The / (or root) partition is the top of the directory structure. The /root directory (sometimes pronounced "slash-root") is the home directory of the user account for system administration. A home partition (at least 100 MB) To store user data separately from system data, create a dedicated partition within a volume group for the /home directory. This will enable you to upgrade or reinstall Red Hat Enterprise Linux without erasing user data files. Many systems have more partitions than the minimum listed above. Choose partitions based on your particular system needs. Refer to Section 9.15.5.1.1, "Advice on Partitions" for more information. If you create many partitions instead of one large / partition, upgrades become easier. Refer to the description of the Edit option in Section 9.15, " Creating a Custom Layout or Modifying the Default Layout " for more information. The following table summarizes minimum partition sizes for the partitions containing the listed directories. You do not have to make a separate partition for each of these directories. For instance, if the partition containing /foo must be at least 500 MB, and you do not make a separate /foo partition, then the / (root) partition must be at least 500 MB. Table 9.3. Minimum partition sizes Directory Minimum size / 250 MB /usr 250 MB /tmp 50 MB /var 384 MB /home 100 MB /boot 250 MB Note Leave Excess Capacity Unallocated, and only assign storage capacity to those partitions you require immediately. You may allocate free space at any time, to meet needs as they occur. To learn about a more flexible method for storage management, refer to Appendix D, Understanding LVM . If you are not sure how best to configure the partitions for your computer, accept the default partition layout. 9.15.5.1.1. 
Advice on Partitions Optimal partition setup depends on the usage for the Linux system in question. The following tips may help you decide how to allocate your disk space. Consider encrypting any partitions that might contain sensitive data. Encryption prevents unauthorized people from accessing the data on the partitions, even if they have access to the physical storage device. In most cases, you should at least encrypt the /home partition. Each kernel installed on your system requires approximately 30 MB on the /boot partition. Unless you plan to install a great many kernels, the default partition size of 250 MB for /boot should suffice. Important The /boot and / (root) partition in Red Hat Enterprise Linux 6.9 can only use the ext2, ext3, and ext4 (recommended) file systems. You cannot use any other file system for this partition, such as Btrfs, XFS, or VFAT. Other partitions, such as /home , can use any supported file system, including Btrfs and XFS (if available). See the following article on the Red Hat Customer Portal for additional information: https://access.redhat.com/solutions/667273 . The /var directory holds content for a number of applications, including the Apache web server. It also is used to store downloaded update packages on a temporary basis. Ensure that the partition containing the /var directory has enough space to download pending updates and hold your other content. Warning The PackageKit update software downloads updated packages to /var/cache/yum/ by default. If you partition the system manually, and create a separate /var/ partition, be sure to create the partition large enough (3.0 GB or more) to download package updates. The /usr directory holds the majority of software content on a Red Hat Enterprise Linux system. For an installation of the default set of software, allocate at least 4 GB of space. If you are a software developer or plan to use your Red Hat Enterprise Linux system to learn software development skills, you may want to at least double this allocation. Consider leaving a portion of the space in an LVM volume group unallocated. This unallocated space gives you flexibility if your space requirements change but you do not wish to remove data from other partitions to reallocate storage. If you separate subdirectories into partitions, you can retain content in those subdirectories if you decide to install a new version of Red Hat Enterprise Linux over your current system. For instance, if you intend to run a MySQL database in /var/lib/mysql , make a separate partition for that directory in case you need to reinstall later. UEFI systems should contain a 50-150MB /boot/efi partition with an EFI System Partition filesystem. The following table is a possible partition setup for a system with a single, new 80 GB hard disk and 1 GB of RAM. Note that approximately 10 GB of the volume group is unallocated to allow for future growth. Note This setup is an example, and is not optimal for all use cases. Example 9.1. Example partition setup Table 9.4. Example partition setup Partition Size and type /boot 250 MB ext3 partition swap 2 GB swap LVM physical volume Remaining space, as one LVM volume group The physical volume is assigned to the default volume group and divided into the following logical volumes: Table 9.5. Example partition setup: LVM physical volume Partition Size and type / 13 GB ext4 /var 4 GB ext4 /home 50 GB ext4 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-diskpartrecommend-x86 |
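The swap recommendations in Table 9.2 can be restated as a small function. The sketch below simply encodes that table as written; it is an illustration, not an installer tool, and the boundary cases (exactly 2 GB, 8 GB, or 64 GB of RAM) follow the table ranges, where the document notes that some discretion is allowed.

# Illustrative sketch only: restate Table 9.2 (recommended swap space) as code.
def recommended_swap_gb(ram_gb: float, hibernation: bool = False) -> float:
    """Return the recommended swap size in GB per Table 9.2."""
    if ram_gb <= 2:
        return ram_gb * (3 if hibernation else 2)
    if ram_gb <= 8:
        return ram_gb * (2 if hibernation else 1)
    if ram_gb <= 64:
        return ram_gb * 1.5 if hibernation else 4.0   # "at least 4 GB"
    if hibernation:
        raise ValueError("Hibernation is not recommended above 64 GB of RAM")
    return 4.0                                        # "at least 4 GB"

if __name__ == "__main__":
    for ram in (1, 2, 4, 16, 128):
        print(f"{ram} GB RAM -> {recommended_swap_gb(ram)} GB swap (no hibernation)")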
Chapter 5. About the multicluster engine for Kubernetes Operator | Chapter 5. About the multicluster engine for Kubernetes Operator One of the challenges of scaling Kubernetes environments is managing the lifecycle of a growing fleet. To meet that challenge, you can use the multicluster engine Operator. The operator delivers full lifecycle capabilities for managed OpenShift Container Platform clusters and partial lifecycle management for other Kubernetes distributions. It is available in two ways: As a standalone operator that you install as part of your OpenShift Container Platform or OpenShift Kubernetes Engine subscription As part of Red Hat Advanced Cluster Management for Kubernetes 5.1. Cluster management with multicluster engine on OpenShift Container Platform When you enable multicluster engine on OpenShift Container Platform, you gain the following capabilities: Hosted control planes , which is a feature that is based on the HyperShift project. With a centralized hosted control plane, you can operate OpenShift Container Platform clusters in a hyperscale manner. Hive, which provisions self-managed OpenShift Container Platform clusters to the hub and completes the initial configurations for those clusters. klusterlet agent, which registers managed clusters to the hub. Infrastructure Operator, which manages the deployment of the Assisted Service to orchestrate on-premise bare metal and vSphere installations of OpenShift Container Platform, such as single-node OpenShift on bare metal. The Infrastructure Operator includes GitOps Zero Touch Provisioning (ZTP) , which fully automates cluster creation on bare metal and vSphere provisioning with GitOps workflows to manage deployments and configuration changes. Open cluster management, which provides resources to manage Kubernetes clusters. The multicluster engine is included with your OpenShift Container Platform support subscription and is delivered separately from the core payload. To start to use multicluster engine, you deploy the OpenShift Container Platform cluster and then install the operator. For more information, see Installing and upgrading multicluster engine operator . 5.2. Cluster management with Red Hat Advanced Cluster Management If you need cluster management capabilities beyond what OpenShift Container Platform with multicluster engine can provide, consider Red Hat Advanced Cluster Management. The multicluster engine is an integral part of Red Hat Advanced Cluster Management and is enabled by default. 5.3. Additional resources For the complete documentation for multicluster engine, see Cluster lifecycle with multicluster engine documentation , which is part of the product documentation for Red Hat Advanced Cluster Management. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/architecture/about-the-multicluster-engine-for-kubernetes-operator |