title | content | commands | url
---|---|---|---|
32.2. Anaconda Rescue Mode | 32.2. Anaconda Rescue Mode The Anaconda installation program's rescue mode is a minimal Linux environment that can be booted from the Red Hat Enterprise Linux 7 DVD or other boot media. It contains command-line utilities for repairing a wide variety of issues. This rescue mode can be accessed from the Troubleshooting submenu of the boot menu. In this mode, you can mount file systems as read-only or even to not mount them at all, blacklist or add a driver provided on a driver disc, install or upgrade system packages, or manage partitions. Note Anaconda rescue mode is different from rescue mode (an equivalent to single-user mode ) and emergency mode , which are provided as parts of the systemd system and service manager. For more information about these modes, see Red Hat Enterprise Linux 7 System Administrator's Guide . To boot into Anaconda rescue mode, you must be able to boot the system using one Red Hat Enterprise Linux boot media, such as a minimal boot disc or USB drive, or a full installation DVD. For detailed information about booting the system using media provided by Red Hat, see the appropriate chapters: Chapter 7, Booting the Installation on 64-bit AMD, Intel, and ARM systems for 64-bit AMD, Intel, and ARM systems Chapter 12, Booting the Installation on IBM Power Systems for IBM Power Systems servers Chapter 16, Booting the Installation on IBM Z for IBM Z Important Advanced storage, such as iSCSI or zFCP devices, must be configured either using dracut boot options (such as rd.zfcp= or root=iscsi: options ), or in the CMS configuration file on IBM Z. It is not possible to configure these storage devices interactively after booting into rescue mode. For information about dracut boot options, see the dracut.cmdline(7) man page. For information about the CMS configuration file, see Chapter 21, Parameter and Configuration Files on IBM Z . Procedure 32.1. Booting into Anaconda Rescue Mode Boot the system from either minimal boot media, or a full installation DVD or USB drive, and wait for the boot menu to appear. From the boot menu, either select the Rescue a Red Hat Enterprise Linux system option from the Troubleshooting submenu, or append the inst.rescue option to the boot command line. To enter the boot command line, press the Tab key on BIOS-based systems or the e key on the UEFI-based systems. If your system requires a third-party driver provided on a driver disc to boot, append the inst.dd= driver_name to the boot command line: For more information on using a driver disc at boot time, see Section 6.3.3, "Manual Driver Update" for AMD64 and Intel 64 systems or Section 11.2.3, "Manual Driver Update" for IBM Power Systems servers. If a driver that is part of the Red Hat Enterprise Linux 7 distribution prevents the system from booting, append the modprobe.blacklist= option to the boot command line: For more information about blacklisting drivers, see Section 6.3.4, "Blacklisting a Driver" . When ready, press Enter (BIOS-based systems) or Ctrl + X (UEFI-based systems) to boot the modified option. Then wait until the following message is displayed: If you select Continue , it attempts to mount your file system under the directory /mnt/sysimage/ . If it fails to mount a partition, you will be notified. If you select Read-Only , it attempts to mount your file system under the directory /mnt/sysimage/ , but in read-only mode. If you select Skip , your file system is not mounted. Choose Skip if you think your file system is corrupted. 
Once you have your system in rescue mode, a prompt appears on VC (virtual console) 1 and VC 2 (use the Ctrl + Alt + F1 key combination to access VC 1 and Ctrl + Alt + F2 to access VC 2): Even if your file system is mounted, the default root partition while in Anaconda rescue mode is a temporary root partition, not the root partition of the file system used during normal user mode ( multi-user.target or graphical.target ). If you selected to mount your file system and it mounted successfully, you can change the root partition of the Anaconda rescue mode environment to the root partition of your file system by executing the following command: This is useful if you need to run commands, such as rpm , that require your root partition to be mounted as / . To exit the chroot environment, type exit to return to the prompt. If you selected Skip , you can still try to mount a partition or LVM2 logical volume manually inside Anaconda rescue mode by creating a directory, such as /directory/ , and typing the following command: In the above command, /directory/ is a directory that you have created and /dev/mapper/VolGroup00-LogVol02 is the LVM2 logical volume you want to mount. If the partition is a different type than XFS, replace the xfs string with the correct type (such as ext4 ). If you do not know the names of all physical partitions, use the following command to list them: If you do not know the names of all LVM2 physical volumes, volume groups, or logical volumes, use the pvdisplay , vgdisplay or lvdisplay commands, respectively. From the prompt, you can run many useful commands, such as: ssh , scp , and ping if the network is started For details, see the Red Hat Enterprise Linux 7 System Administrator's Guide . dump and restore for users with tape drives For details, see the RHEL Backup and Restore Assistant . parted and fdisk for managing partitions For details, see the Red Hat Enterprise Linux 7 Storage Administration Guide . yum for installing or upgrading software For details, see the Red Hat Enterprise Linux 7 Administrator's Guide 32.2.1. Capturing an sosreport The sosreport command-line utility collects configuration and diagnostic information, such as the running kernel version, loaded modules, and system and service configuration files, from the system. The utility output is stored in a tar archive in the /var/tmp/ directory. The sosreport utility is useful for analyzing the system errors and can make troubleshooting easier. The following procedure describes how to capture an sosreport output in Anaconda rescue mode: Procedure 32.2. Using sosreport in Anaconda Rescue Mode Follow steps in Procedure 32.1, "Booting into Anaconda Rescue Mode" to boot into Anaconda rescue mode. Ensure that you mount the installed system / (root) partition in read-write mode. Change the root directory to the /mnt/sysimage/ directory: Execute sosreport to generate an archive with system configuration and diagnostic information: Important When running, sosreport will prompt you to enter your name and case number that you get when you contact Red Hat Support service and open a new support case. Use only letters and numbers because adding any of the following characters or spaces could render the report unusable: Optional . If you want to transfer the generated archive to a new location using the network, it is necessary to have a network interface configured. In case you use the dynamic IP addressing, there are no other steps required. 
However, when using the static addressing, enter the following command to assign an IP address (for example 10.13.153.64/23 ) to a network interface (for example dev eth0 ): See the Red Hat Enterprise Linux 7 Networking Guide for additional information about static addressing. Exit the chroot environment: Store the generated archive in a new location, from where it can be easily accessible: For transferring the archive through the network, use the scp utility: See the references below for further information: For general information about sosreport , see What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? . For information about using sosreport within Anaconda rescue mode, see How to generate sosreport from the rescue environment . For information about generating an sosreport to a different location than /tmp , see How do I make sosreport write to an alternative location? . For information about collecting an sosreport manually, see Sosreport fails. What data should I provide in its place? . 32.2.2. Reinstalling the Boot Loader In some cases, the GRUB2 boot loader can mistakenly be deleted, corrupted, or replaced by other operating systems. The following steps detail the process on how GRUB is reinstalled on the master boot record: Procedure 32.3. Reinstalling the GRUB2 Boot Loader Follow instructions in Procedure 32.1, "Booting into Anaconda Rescue Mode" to boot into Anaconda rescue mode. Ensure that you mount the installed system's / (root) partition in read-write mode. Change the root partition: Use the following command to reinstall the GRUB2 boot loader, where install_device is the boot device (typically, /dev/sda): Reboot the system. 32.2.3. Using RPM to Add, Remove, or Replace a Driver Missing or malfunctioning drivers can cause problems when booting the system. Anaconda rescue mode provides an environment in which you can add, remove, or replace a driver even when the system fails to boot. Wherever possible, we recommend that you use the RPM package manager to remove malfunctioning drivers or to add updated or missing drivers. Note When you install a driver from a driver disc, the driver disc updates all initramfs images on the system to use this driver. If a problem with a driver prevents a system from booting, you cannot rely on booting the system from another initramfs image. Procedure 32.4. Using RPM to Remove a Driver Boot the system into Anaconda rescue mode. Follow the instructions in Procedure 32.1, "Booting into Anaconda Rescue Mode" . Ensure that you mount the installed system in read-write mode. Change the root directory to /mnt/sysimage/ : Use the rpm -e command to remove the driver package. For example, to remove the xorg-x11-drv-wacom driver package, run: Exit the chroot environment: If you cannot remove a malfunctioning driver for some reason, you can instead blacklist the driver so that it does not load at boot time. See Section 6.3.4, "Blacklisting a Driver" and Chapter 23, Boot Options for more information about blacklisting drivers. Installing a driver is a similar process but the RPM package must be available on the system: Procedure 32.5. Installing a Driver from an RPM package Boot the system into Anaconda rescue mode. Follow the instructions in Procedure 32.1, "Booting into Anaconda Rescue Mode" . Do not choose to mount the installed system as read only. Make the RPM package that contains the driver available. 
For example, mount a CD or USB flash drive and copy the RPM package to a location of your choice under /mnt/sysimage/ , for example: /mnt/sysimage/root/drivers/ Change the root directory to /mnt/sysimage/ : Use the rpm -ivh command to install the driver package. For example, to install the xorg-x11-drv-wacom driver package from /root/drivers/ , run: Note The /root/drivers/ directory in this chroot environment is the /mnt/sysimage/root/drivers/ directory in the original rescue environment. Exit the chroot environment: When you have finished removing and installing drivers, reboot the system. | [
"inst.rescue inst.dd= driver_name",
"inst.rescue modprobe.blacklist= driver_name",
"The rescue environment will now attempt to find your Linux installation and mount it under the /mnt/sysimage/ directory. You can then make any changes required to your system. If you want to proceed with this step choose 'Continue'. You can also choose to mount your file systems read-only instead of read-write by choosing 'Read-only'. If for some reason this process fails you can choose 'Skip' and this step will be skipped and you will go directly to a command line.",
"sh-4.2#",
"sh-4.2# chroot /mnt/sysimage",
"sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory",
"sh-4.2# fdisk -l",
"sh-4.2# chroot /mnt/sysimage/",
"sh-4.2# sosreport",
"% & { } \\ < > > * ? / USD ~ ' \" : @ + ` | =",
"bash-4.2# ip addr add 10.13.153.64/23 dev eth0",
"sh-4.2# exit",
"sh-4.2# cp /mnt/sysimage/var/tmp/ sosreport new_location",
"sh-4.2# scp /mnt/sysimage/var/tmp/ sosreport username@hostname:sosreport",
"sh-4.2# chroot /mnt/sysimage/",
"sh-4.2# /sbin/grub2-install install_device",
"sh-4.2# chroot /mnt/sysimage/",
"sh-4.2# rpm -e xorg-x11-drv-wacom",
"sh-4.2# exit",
"sh-4.2# chroot /mnt/sysimage/",
"sh-4.2# rpm -\\u00adivh /root/drivers/xorg-x11-drv-wacom-0.23.0-6.el7.x86_64.rpm",
"sh-4.2# exit"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/sect-rescue-mode |
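The procedure above leaves the manual path open when Skip is selected. The following is a minimal sketch of that path, assuming an XFS root file system on the LVM logical volume /dev/mapper/VolGroup00-LogVol02 (the same placeholder volume used in the text); adjust the volume name and file system type to match your layout:

```
sh-4.2# vgchange -ay                # activate any LVM volume groups the rescue environment detected
sh-4.2# lvscan                      # confirm the name of the root logical volume
sh-4.2# mkdir /directory
sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory
sh-4.2# chroot /directory           # work inside the installed system, for example to run rpm
sh-4.2# exit                        # leave the chroot when finished
```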
2.3. Migration | 2.3. Migration Migration describes the process of moving a guest virtual machine from one host to another. This is possible because the virtual machines are running in a virtualized environment instead of directly on the hardware. There are two ways to migrate a virtual machine: live and offline. Migration Types Offline migration An offline migration suspends the guest virtual machine, and then moves an image of the virtual machine's memory to the destination host. The virtual machine is then resumed on the destination host and the memory used by the virtual machine on the source host is freed. Live migration Live migration is the process of migrating an active virtual machine from one physical host to another. Note that this is not possible between all Red Hat Enterprise Linux releases. Consult the Virtualization Deployment and Administration Guide for details. 2.3.1. Benefits of Migrating Virtual Machines Migration is useful for: Load balancing When a host machine is overloaded, one or more of its virtual machines could be migrated to other hosts using live migration. Similarly, machines that are not running and tend to overload can be migrated using offline migration. Upgrading or making changes to the host When the need arises to upgrade, add, or remove hardware devices on a host, virtual machines can be safely relocated to other hosts. This means that guests do not experience any downtime due to changes that are made to hosts. Energy saving Virtual machines can be redistributed to other hosts and the unloaded host systems can be powered off to save energy and cut costs in low usage periods. Geographic migration Virtual machines can be moved to other physical locations for lower latency or for other reasons. When the migration process moves a virtual machine's memory, the disk volume associated with the virtual machine is also migrated. This process is performed using live block migration. Note For more information on migration, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 2.3.2. Virtualized to Virtualized Migration (V2V) As a special type of migration, Red Hat Enterprise Linux 7 provides tools for converting virtual machines from other types of hypervisors to KVM. The virt-v2v tool converts and imports virtual machines from Xen, other versions of KVM, and VMware ESX. Note For more information on V2V, see the V2V Knowledgebase articles . In addition, Red Hat Enterprise Linux 7.3 and later support physical-to-virtual (P2V) conversion using the virt-p2v tool. For details, see the P2V Knowledgebase article . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_getting_started_guide/sec-migration |
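As an illustration of the live migration described above, the following sketch uses virsh over an SSH tunnel; the guest name guest1 and destination host dest.example.com are hypothetical placeholders, not values from the guide:

```
# migrate a running guest to another KVM host, showing progress
virsh migrate --live --verbose guest1 qemu+ssh://dest.example.com/system

# confirm the guest is now listed as running on the destination host
virsh --connect qemu+ssh://dest.example.com/system list
```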
Chapter 53. Deprecated Functionality in Red Hat Enterprise Linux 7 | Chapter 53. Deprecated Functionality in Red Hat Enterprise Linux 7 Python 2 has been deprecated Python 2 will be replaced with Python 3 in the Red Hat Enterprise Linux (RHEL) major release. See the Conservative Python 3 Porting Guide for information on how to migrate large code bases to Python 3 . Note that Python 3 is available to RHEL customers, and supported on RHEL, as a part of Red Hat Software Collections . LVM libraries and LVM Python bindings have been deprecated The lvm2app library and LVM Python bindings, which are provided by the lvm2-python-libs package, have been deprecated. Red Hat recommends the following solutions instead: The LVM D-Bus API in combination with the lvm2-dbusd service. This requires using Python version 3. The LVM command-line utilities with JSON formatting; this formatting has been available since the lvm2 package version 2.02.158. Mirrored mirror log has been deprecated in LVM The mirrored mirror log feature of mirrored LVM volumes has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support creating or activating LVM volumes with a mirrored mirror log. The recommended replacements are: RAID1 LVM volumes. The main advantage of RAID1 volumes is their ability to work even in degraded mode and to recover after a transient failure. For information on converting mirrored volumes to RAID1, see the Converting a Mirrored LVM Device to a RAID1 Device section in the LVM Administration guide. Disk mirror log. To convert a mirrored mirror log to disk mirror log, use the following command: lvconvert --mirrorlog disk my_vg/my_lv . Deprecated packages related to Identity Management and security The following packages have been deprecated and will not be included in a future major release of Red Hat Enterprise Linux: Deprecated packages Proposed replacement package or product authconfig authselect pam_pkcs11 sssd [a] pam_krb5 sssd openldap-servers Depending on the use case, migrate to Identity Management included in Red Hat Enterprise Linux or to Red Hat Directory Server. [b] mod_auth_kerb mod_auth_gssapi python-kerberos python-krbV python-gssapi python-requests-kerberos python-requests-gssapi hesiod No replacement available. mod_nss mod_ssl mod_revocator No replacement available. [a] System Security Services Daemon (SSSD) contains enhanced smart card functionality. [b] Red Hat Directory Server requires a valid Directory Server subscription. For details, see also What is the support status of the LDAP-server shipped with Red Hat Enterprise Linux? in Red Hat Knowledgebase. Note In Red Hat Enterprise Linux 7.5, the following packages were added to the table above: mod_auth_kerb python-kerberos , python-krbV python-requests-kerberos hesiod mod_nss mod_revocator Support for earlier IdM servers and for IdM replicas at domain level 0 will be limited Red Hat does not plan to support using Identity Management (IdM) servers running Red Hat Enterprise Linux (RHEL) 7.3 and earlier with IdM clients of the major release of RHEL. If you plan to introduce client systems running on the major version of RHEL into a deployment that is currently managed by IdM servers running on RHEL 7.3 or earlier, be aware that you will need to upgrade the servers, moving them to RHEL 7.4 or later. In the major release of RHEL, only domain level 1 replicas will be supported. 
Before introducing IdM replicas running on the major version of RHEL into an existing deployment, be aware that you will need to upgrade all IdM servers to RHEL 7.4 or later, and change the domain level to 1. Consider planning the upgrade in advance if your deployment will be affected. Bug-fix only support for the nss-pam-ldapd and NIS packages in the major release of Red Hat Enterprise Linux The nss-pam-ldapd packages and packages related to the NIS server will be released in the future major release of Red Hat Enterprise Linux but will receive a limited scope of support. Red Hat will accept bug reports but no new requests for enhancements. Customers are advised to migrate to the following replacement solutions: Affected packages Proposed replacement package or product nss-pam-ldapd sssd ypserv ypbind portmap yp-tools Identity Management in Red Hat Enterprise Linux Use the Go Toolset instead of golang The golang package has been updated to version 1.9 with Red Hat Enterprise Linux 7.5. The golang package, available in the Optional channel, will be removed from a future minor release of Red Hat Enterprise Linux 7. Developers are encouraged to use the Go Toolset instead. mesa-private-llvm will be replaced with llvm-private The mesa-private-llvm package, which contains the LLVM-based runtime support for Mesa , will be replaced in a future minor release of Red Hat Enterprise Linux 7 with the llvm-private package. libdbi and libdbi-drivers have been deprecated The libdbi and libdbi-drivers packages will not be included in the Red Hat Enterprise Linux (RHEL) major release. Ansible deprecated in the Extras channel Ansible and its dependencies will no longer be updated through the Extras channel. Instead, the Red Hat Ansible Engine product has been made available to Red Hat Enterprise Linux subscriptions and will provide access to the official Ansible Engine channel. Customers who have previously installed Ansible and its dependencies from the Extras channel are advised to enable and update from the Ansible Engine channel, or uninstall the packages as future errata will not be provided from the Extras channel. Ansible was previously provided in Extras (for AMD64 and Intel 64 architectures, and IBM POWER, little endian) as a runtime dependency of, and limited in support to, the Red Hat Enterprise Linux (RHEL) System Roles. Ansible Engine is available today for AMD64 and Intel 64 architectures, with IBM POWER, little endian availability coming soon. Note that Ansible in the Extras channel was not a part of the Red Hat Enterprise Linux FIPS validation process. The following packages have been deprecated from the Extras channel: ansible(-doc) libtomcrypt libtommath(-devel) python2-crypto python2-jmespath python-httplib2 python-paramiko(-doc) python-passlib sshpass For more information and guidance, see the Knowledgebase article at https://access.redhat.com/articles/3359651 . Note that Red Hat Enterprise Linux System Roles, available as a Technology Preview, continue to be distributed though the Extras channel. Although Red Hat Enterprise Linux System Roles no longer depend on the ansible package, installing ansible from the Ansible Engine repository is still needed to run playbooks which use Red Hat Enterprise Linux System Roles. signtool has been deprecated The signtool tool from the nss packages, which uses insecure signature algorithms, has been deprecated and will not be included in a future minor release of Red Hat Enterprise Linux. 
TLS compression support has been removed from nss To prevent security risks, such as the CRIME attack, support for TLS compression in the NSS library has been removed for all TLS versions. This change preserves the API compatibility. Public web CAs are no longer trusted for code signing by default The Mozilla CA certificate trust list distributed with Red Hat Enterprise Linux 7.5 no longer trusts any public web CAs for code signing. As a consequence, any software that uses the related flags, such as NSS or OpenSSL , no longer trusts these CAs for code signing by default. The software continues to fully support code signing trust. Additionally, it is still possible to configure CA certificates as trusted for code signing using system configuration. Sendmail has been deprecated Sendmail has been deprecated in Red Hat Enterprise Linux 7. Customers are advised to use Postfix , which is configured as the default Mail Transfer Agent (MTA). dmraid has been deprecated Since Red Hat Enterprise Linux 7.5, the dmraid packages have been deprecated. It will stay available in Red Hat Enterprise Linux 7 releases but a future major release will no longer support legacy hybrid combined hardware and software RAID host bus adapter (HBA). Automatic loading of DCCP modules through socket layer is now disabled by default For security reasons, automatic loading of the Datagram Congestion Control Protocol (DCCP) kernel modules through socket layer is now disabled by default. This ensures that userspace applications can not maliciously load any modules. All DCCP related modules can still be loaded manually through the modprobe program. The /etc/modprobe.d/dccp-blacklist.conf configuration file for blacklisting the DCCP modules is included in the kernel package. Entries included there can be cleared by editing or removing this file to restore the behavior. Note that any re-installation of the same kernel package or of a different version does not override manual changes. If the file is manually edited or removed, these changes persist across package installations. rsyslog-libdbi has been deprecated The rsyslog-libdbi sub-package, which contains one of the less used rsyslog module, has been deprecated and will not be included in a future major release of Red Hat Enterprise Linux. Removing unused or rarely used modules helps users to conveniently find a database output to use. The inputname option of the rsyslog imudp module has been deprecated The inputname option of the imudp module for the rsyslog service has been deprecated. Use the name option instead. SMBv1 is no longer installed with Microsoft Windows 10 and 2016 (updates 1709 and later) Microsoft announced that the Server Message Block version 1 (SMBv1) protocol will no longer be installed with the latest versions of Microsoft Windows and Microsoft Windows Server. Microsoft also recommends users to disable SMBv1 on earlier versions of these products. This update impacts Red Hat customers who operate their systems in a mixed Linux and Windows environment. Red Hat Enterprise Linux 7.1 and earlier support only the SMBv1 version of the protocol. Support for SMBv2 was introduced in Red Hat Enterprise Linux 7.2. For details on how this change affects Red Hat customers, see SMBv1 no longer installed with latest Microsoft Windows 10 and 2016 update (version 1709) in Red Hat Knowledgebase. FedFS has been deprecated Federated File System (FedFS) has been deprecated because the upstream FedFS project is no longer being actively maintained. 
Red Hat recommends migrating FedFS installations to use autofs , which provides more flexible functionality. Btrfs has been deprecated The Btrfs file system has been in Technology Preview state since the initial release of Red Hat Enterprise Linux 6. Red Hat will not be moving Btrfs to a fully supported feature and it will be removed in a future major release of Red Hat Enterprise Linux. The Btrfs file system did receive numerous updates from the upstream in Red Hat Enterprise Linux 7.4 and will remain available in the Red Hat Enterprise Linux 7 series. However, this is the last planned update to this feature. tcp_wrappers deprecated The tcp_wrappers package has been deprecated. tcp_wrappers provides a library and a small daemon program that can monitor and filter incoming requests for audit , cyrus-imap , dovecot , nfs-utils , openssh , openldap , proftpd , sendmail , stunnel , syslog-ng , vsftpd , and various other network services. nautilus-open-terminal replaced with gnome-terminal-nautilus Since Red Hat Enterprise Linux 7.3, the nautilus-open-terminal package has been deprecated and replaced with the gnome-terminal-nautilus package. This package provides a Nautilus extension that adds the Open in Terminal option to the right-click context menu in Nautilus. nautilus-open-terminal is replaced by gnome-terminal-nautilus during the system upgrade. sslwrap() removed from Python The sslwrap() function has been removed from Python 2.7 . After the 466 Python Enhancement Proposal was implemented, using this function resulted in a segmentation fault. The removal is consistent with upstream. Red Hat recommends using the ssl.SSLContext class and the ssl.SSLContext.wrap_socket() function instead. Most applications can simply use the ssl.create_default_context() function, which creates a context with secure default settings. The default context uses the system's default trust store, too. Symbols from libraries linked as dependencies no longer resolved by ld Previously, the ld linker resolved any symbols present in any linked library, even if some libraries were linked only implicitly as dependencies of other libraries. This allowed developers to use symbols from the implicitly linked libraries in application code and omit explicitly specifying these libraries for linking. For security reasons, ld has been changed to not resolve references to symbols in libraries linked implicitly as dependencies. As a result, linking with ld fails when application code attempts to use symbols from libraries not declared for linking and linked only implicitly as dependencies. To use symbols from libraries linked as dependencies, developers must explicitly link against these libraries as well. To restore the behavior of ld , use the -copy-dt-needed-entries command-line option. (BZ# 1292230 ) Windows guest virtual machine support limited As of Red Hat Enterprise Linux 7, Windows guest virtual machines are supported only under specific subscription programs, such as Advanced Mission Critical (AMC). libnetlink is deprecated The libnetlink library contained in the iproute-devel package has been deprecated. The user should use the libnl and libmnl libraries instead. S3 and S4 power management states for KVM have been deprecated Native KVM support for the S3 (suspend to RAM) and S4 (suspend to disk) power management states has been discontinued. This feature was previously available as a Technology Preview. 
The Certificate Server plug-in udnPwdDirAuth is discontinued The udnPwdDirAuth authentication plug-in for the Red Hat Certificate Server was removed in Red Hat Enterprise Linux 7.3. Profiles using the plug-in are no longer supported. Certificates created with a profile using the udnPwdDirAuth plug-in are still valid if they have been approved. Red Hat Access plug-in for IdM is discontinued The Red Hat Access plug-in for Identity Management (IdM) was removed in Red Hat Enterprise Linux 7.3. During the update, the redhat-access-plugin-ipa package is automatically uninstalled. Features previously provided by the plug-in, such as Knowledgebase access and support case engagement, are still available through the Red Hat Customer Portal. Red Hat recommends to explore alternatives, such as the redhat-support-tool tool. The Ipsilon identity provider service for federated single sign-on The ipsilon packages were introduced as Technology Preview in Red Hat Enterprise Linux 7.2. Ipsilon links authentication providers and applications or utilities to allow for single sign-on (SSO). Red Hat does not plan to upgrade Ipsilon from Technology Preview to a fully supported feature. The ipsilon packages will be removed from Red Hat Enterprise Linux in a future minor release. Red Hat has released Red Hat Single Sign-On as a web SSO solution based on the Keycloak community project. Red Hat Single Sign-On provides greater capabilities than Ipsilon and is designated as the standard web SSO solution across the Red Hat product portfolio. Several rsyslog options deprecated The rsyslog utility version in Red Hat Enterprise Linux 7.4 has deprecated a large number of options. These options no longer have any effect and cause a warning to be displayed. The functionality previously provided by the options -c , -u , -q , -x , -A , -Q , -4 , and -6 can be achieved using the rsyslog configuration. There is no replacement for the functionality previously provided by the options -l and -s Deprecated symbols from the memkind library The following symbols from the memkind library have been deprecated: memkind_finalize() memkind_get_num_kind() memkind_get_kind_by_partition() memkind_get_kind_by_name() memkind_partition_mmap() memkind_get_size() MEMKIND_ERROR_MEMALIGN MEMKIND_ERROR_MALLCTL MEMKIND_ERROR_GETCPU MEMKIND_ERROR_PMTT MEMKIND_ERROR_TIEDISTANCE MEMKIND_ERROR_ALIGNMENT MEMKIND_ERROR_MALLOCX MEMKIND_ERROR_REPNAME MEMKIND_ERROR_PTHREAD MEMKIND_ERROR_BADPOLICY MEMKIND_ERROR_REPPOLICY Options of Sockets API Extensions for SCTP (RFC 6458) deprecated The options SCTP_SNDRCV , SCTP_EXTRCV and SCTP_DEFAULT_SEND_PARAM of Sockets API Extensions for the Stream Control Transmission Protocol have been deprecated per the RFC 6458 specification. New options SCTP_SNDINFO , SCTP_NXTINFO , SCTP_NXTINFO and SCTP_DEFAULT_SNDINFO have been implemented as a replacement for the deprecated options. Managing NetApp ONTAP using SSLv2 and SSLv3 is no longer supported by libstorageMgmt The SSLv2 and SSLv3 connections to the NetApp ONTAP storage array are no longer supported by the libstorageMgmt library. Users can contact NetApp support to enable the Transport Layer Security (TLS) protocol. dconf-dbus-1 has been deprecated and dconf-editor is now delivered separately With this update, the dconf-dbus-1 API has been removed. However, the dconf-dbus-1 library has been backported to preserve binary compatibility. Red Hat recommends using the GDBus library instead of dconf-dbus-1 . The dconf-error.h file has been renamed to dconf-enums.h . 
In addition, the dconf Editor is now delivered in the separate dconf-editor package. FreeRADIUS no longer accepts Auth-Type := System The FreeRADIUS server no longer accepts the Auth-Type := System option for the rlm_unix authentication module. This option has been replaced by the use of the unix module in the authorize section of the configuration file. Deprecated Device Drivers The following device drivers continue to be supported until the end of life of Red Hat Enterprise Linux 7 but will likely not be supported in future major releases of this product and are not recommended for new deployments. 3w-9xxx 3w-sas aic79xx aoe arcmsr ata drivers: acard-ahci sata_mv sata_nv sata_promise sata_qstor sata_sil sata_sil24 sata_sis sata_svw sata_sx4 sata_uli sata_via sata_vsc bfa cxgb3 cxgb3i hptiop isci iw_cxgb3 mptbase mptctl mptsas mptscsih mptspi mtip32xx mvsas mvumi OSD drivers: osd libosd osst pata drivers: pata_acpi pata_ali pata_amd pata_arasan_cf pata_artop pata_atiixp pata_atp867x pata_cmd64x pata_cs5536 pata_hpt366 pata_hpt37x pata_hpt3x2n pata_hpt3x3 pata_it8213 pata_it821x pata_jmicron pata_marvell pata_netcell pata_ninja32 pata_oldpiix pata_pdc2027x pata_pdc202xx_old pata_piccolo pata_rdc pata_sch pata_serverworks pata_sil680 pata_sis pata_via pdc_adma pm80xx(pm8001) pmcraid qla3xxx stex sx8 ufshcd Deprecated Adapters The following adapters from the aacraid driver have been deprecated: PERC 2/Si (Iguana/PERC2Si), PCI ID 0x1028:0x0001 PERC 3/Di (Opal/PERC3Di), PCI ID 0x1028:0x0002 PERC 3/Si (SlimFast/PERC3Si), PCI ID 0x1028:0x0003 PERC 3/Di (Iguana FlipChip/PERC3DiF), PCI ID 0x1028:0x0004 PERC 3/Di (Viper/PERC3DiV), PCI ID 0x1028:0x0002 PERC 3/Di (Lexus/PERC3DiL), PCI ID 0x1028:0x0002 PERC 3/Di (Jaguar/PERC3DiJ), PCI ID 0x1028:0x000a PERC 3/Di (Dagger/PERC3DiD), PCI ID 0x1028:0x000a PERC 3/Di (Boxster/PERC3DiB), PCI ID 0x1028:0x000a catapult, PCI ID 0x9005:0x0283 tomcat, PCI ID 0x9005:0x0284 Adaptec 2120S (Crusader), PCI ID 0x9005:0x0285 Adaptec 2200S (Vulcan), PCI ID 0x9005:0x0285 Adaptec 2200S (Vulcan-2m), PCI ID 0x9005:0x0285 Legend S220 (Legend Crusader), PCI ID 0x9005:0x0285 Legend S230 (Legend Vulcan), PCI ID 0x9005:0x0285 Adaptec 3230S (Harrier), PCI ID 0x9005:0x0285 Adaptec 3240S (Tornado), PCI ID 0x9005:0x0285 ASR-2020ZCR SCSI PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285 ASR-2025ZCR SCSI SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285 ASR-2230S + ASR-2230SLP PCI-X (Lancer), PCI ID 0x9005:0x0286 ASR-2130S (Lancer), PCI ID 0x9005:0x0286 AAR-2820SA (Intruder), PCI ID 0x9005:0x0286 AAR-2620SA (Intruder), PCI ID 0x9005:0x0286 AAR-2420SA (Intruder), PCI ID 0x9005:0x0286 ICP9024RO (Lancer), PCI ID 0x9005:0x0286 ICP9014RO (Lancer), PCI ID 0x9005:0x0286 ICP9047MA (Lancer), PCI ID 0x9005:0x0286 ICP9087MA (Lancer), PCI ID 0x9005:0x0286 ICP5445AU (Hurricane44), PCI ID 0x9005:0x0286 ICP9085LI (Marauder-X), PCI ID 0x9005:0x0285 ICP5085BR (Marauder-E), PCI ID 0x9005:0x0285 ICP9067MA (Intruder-6), PCI ID 0x9005:0x0286 Themisto Jupiter Platform, PCI ID 0x9005:0x0287 Themisto Jupiter Platform, PCI ID 0x9005:0x0200 Callisto Jupiter Platform, PCI ID 0x9005:0x0286 ASR-2020SA SATA PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285 ASR-2025SA SATA SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285 AAR-2410SA PCI SATA 4ch (Jaguar II), PCI ID 0x9005:0x0285 CERC SATA RAID 2 PCI SATA 6ch (DellCorsair), PCI ID 0x9005:0x0285 AAR-2810SA PCI SATA 8ch (Corsair-8), PCI ID 0x9005:0x0285 AAR-21610SA PCI SATA 16ch (Corsair-16), PCI ID 0x9005:0x0285 ESD SO-DIMM PCI-X SATA ZCR (Prowler), PCI ID 0x9005:0x0285 
AAR-2610SA PCI SATA 6ch, PCI ID 0x9005:0x0285 ASR-2240S (SabreExpress), PCI ID 0x9005:0x0285 ASR-4005, PCI ID 0x9005:0x0285 IBM 8i (AvonPark), PCI ID 0x9005:0x0285 IBM 8i (AvonPark Lite), PCI ID 0x9005:0x0285 IBM 8k/8k-l8 (Aurora), PCI ID 0x9005:0x0286 IBM 8k/8k-l4 (Aurora Lite), PCI ID 0x9005:0x0286 ASR-4000 (BlackBird), PCI ID 0x9005:0x0285 ASR-4800SAS (Marauder-X), PCI ID 0x9005:0x0285 ASR-4805SAS (Marauder-E), PCI ID 0x9005:0x0285 ASR-3800 (Hurricane44), PCI ID 0x9005:0x0286 Perc 320/DC, PCI ID 0x9005:0x0285 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046 Dell PERC2/QC, PCI ID 0x1011:0x0046 HP NetRAID-4M, PCI ID 0x1011:0x0046 Dell Catchall, PCI ID 0x9005:0x0285 Legend Catchall, PCI ID 0x9005:0x0285 Adaptec Catch All, PCI ID 0x9005:0x0285 Adaptec Rocket Catch All, PCI ID 0x9005:0x0286 Adaptec NEMER/ARK Catch All, PCI ID 0x9005:0x0288 The following adapters from the mpt2sas driver have been deprecated: SAS2004, PCI ID 0x1000:0x0070 SAS2008, PCI ID 0x1000:0x0072 SAS2108_1, PCI ID 0x1000:0x0074 SAS2108_2, PCI ID 0x1000:0x0076 SAS2108_3, PCI ID 0x1000:0x0077 SAS2116_1, PCI ID 0x1000:0x0064 SAS2116_2, PCI ID 0x1000:0x0065 SSS6200, PCI ID 0x1000:0x007E The following adapters from the megaraid_sas driver have been deprecated: Dell PERC5, PCI ID 0x1028:0x15 SAS1078R, PCI ID 0x1000:0x60 SAS1078DE, PCI ID 0x1000:0x7C SAS1064R, PCI ID 0x1000:0x411 VERDE_ZCR, PCI ID 0x1000:0x413 SAS1078GEN2, PCI ID 0x1000:0x78 SAS0079GEN2, PCI ID 0x1000:0x79 SAS0073SKINNY, PCI ID 0x1000:0x73 SAS0071SKINNY, PCI ID 0x1000:0x71 The following adapters from the qla2xxx driver have been deprecated: ISP24xx, PCI ID 0x1077:0x2422 ISP24xx, PCI ID 0x1077:0x2432 ISP2422, PCI ID 0x1077:0x5422 QLE220, PCI ID 0x1077:0x5432 QLE81xx, PCI ID 0x1077:0x8001 QLE10000, PCI ID 0x1077:0xF000 QLE84xx, PCI ID 0x1077:0x8044 QLE8000, PCI ID 0x1077:0x8432 QLE82xx, PCI ID 0x1077:0x8021 The following adapters from the qla4xxx driver have been deprecated: QLOGIC_ISP8022, PCI ID 0x1077:0x8022 QLOGIC_ISP8324, PCI ID 0x1077:0x8032 QLOGIC_ISP8042, PCI ID 0x1077:0x8042 The following Ethernet adapter controlled by the be2net driver has been deprecated: TIGERSHARK NIC, PCI ID 0x0700 The following adapters from the be2iscsi driver have been deprecated: Emulex OneConnect 10Gb iSCSI Initiator (generic), PCI ID 0x212 OCe10101, OCm10101, OCe10102, OCm10102 BE2 adapter family, PCI ID 0x702 OCe10100 BE2 adapter family, PCI ID 0x703 The following adapters from the lpfc driver have been deprecated: BladeEngine 2 (BE2) Devices TIGERSHARK FCOE, PCI ID 0x0704 Fibre Channel (FC) Devices FIREFLY, PCI ID 0x1ae5 PROTEUS_VF, PCI ID 0xe100 BALIUS, PCI ID 0xe131 PROTEUS_PF, PCI ID 0xe180 RFLY, PCI ID 0xf095 PFLY, PCI ID 0xf098 LP101, PCI ID 0xf0a1 TFLY, PCI ID 0xf0a5 BSMB, PCI ID 0xf0d1 BMID, PCI ID 0xf0d5 ZSMB, PCI ID 0xf0e1 ZMID, PCI ID 0xf0e5 NEPTUNE, PCI ID 0xf0f5 NEPTUNE_SCSP, PCI ID 0xf0f6 NEPTUNE_DCSP, PCI ID 0xf0f7 FALCON, PCI ID 0xf180 SUPERFLY, PCI ID 0xf700 DRAGONFLY, PCI ID 0xf800 CENTAUR, PCI ID 0xf900 PEGASUS, PCI ID 0xf980 THOR, PCI ID 0xfa00 VIPER, PCI ID 0xfb00 LP10000S, PCI ID 0xfc00 LP11000S, PCI ID 0xfc10 LPE11000S, PCI ID 0xfc20 PROTEUS_S, PCI ID 0xfc50 HELIOS, PCI ID 0xfd00 HELIOS_SCSP, PCI ID 0xfd11 HELIOS_DCSP, PCI ID 0xfd12 ZEPHYR, PCI ID 0xfe00 HORNET, PCI ID 0xfe05 ZEPHYR_SCSP, PCI ID 0xfe11 ZEPHYR_DCSP, PCI ID 0xfe12 To check the PCI IDs of the hardware on your system, run the lspci -nn command. Note that other adapters from the mentioned drivers that are not listed here remain unchanged. 
The libcxgb3 library and the cxgb3 firmware package have been deprecated The libcxgb3 library provided by the libibverbs package and the cxgb3 firmware package have been deprecated. They continue to be supported in Red Hat Enterprise Linux 7 but will likely not be supported in the major releases of this product. This change corresponds with the deprecation of the cxgb3 , cxgb3i , and iw_cxgb3 drivers listed above. SFN4XXX adapters have been deprecated Starting with Red Hat Enterprise Linux 7.4, SFN4XXX Solarflare network adapters have been deprecated. Previously, Solarflare had a single driver sfc for all adapters. Recently, support of SFN4XXX was split from sfc and moved into a new SFN4XXX-only driver, called sfc-falcon . Both drivers continue to be supported at this time, but sfc-falcon and SFN4XXX support is scheduled for removal in a future major release. Software-initiated-only FCoE storage technologies have been deprecated The software-initiated-only type of the Fibre Channel over Ethernet (FCoE) storage technology has been deprecated due to limited customer adoption. The software-initiated-only storage technology will remain supported for the life of Red Hat Enterprise Linux 7. The deprecation notice indicates the intention to remove software-initiated-based FCoE support in a future major release of Red Hat Enterprise Linux. It is important to note that the hardware support and the associated user-space tools (such as drivers, libfc , or libfcoe ) are unaffected by this deprecation notice. Containers using the libvirt-lxc tooling have been deprecated The following libvirt-lxc packages are deprecated since Red Hat Enterprise Linux 7.1: libvirt-daemon-driver-lxc libvirt-daemon-lxc libvirt-login-shell Future development on the Linux containers framework is now based on the docker command-line interface. libvirt-lxc tooling may be removed in a future release of Red Hat Enterprise Linux (including Red Hat Enterprise Linux 7) and should not be relied upon for developing custom container management applications. For more information, see the Red Hat KnowledgeBase article . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/chap-red_hat_enterprise_linux-7.5_release_notes-deprecated_functionality_in_rhel7 |
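A brief sketch of the two LVM conversions recommended above for replacing a mirrored mirror log, using the same placeholder volume my_vg/my_lv as the text:

```
# convert the mirrored mirror log to a disk mirror log, as suggested in the text
lvconvert --mirrorlog disk my_vg/my_lv

# or convert the mirrored volume to a RAID1 volume, the preferred long-term replacement
lvconvert --type raid1 my_vg/my_lv

# verify the resulting segment type
lvs -o lv_name,segtype my_vg/my_lv
```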
Chapter 13. Configuring NVDIMM Compute nodes to provide persistent memory for instances | Chapter 13. Configuring NVDIMM Compute nodes to provide persistent memory for instances A non-volatile dual in-line memory module (NVDIMM) is a technology that provides DRAM with persistent memory (PMEM). Standard computer memory loses its data after a loss of electrical power. The NVDIMM maintains its data even after a loss of electrical power. Instances that use PMEM can provide applications with the ability to load large contiguous segments of memory that persist the application data across power cycles. This is useful for high performance computing (HPC), which requests huge amounts of memory. As a cloud administrator, you can make the PMEM available to instances as virtual PMEM (vPMEM) by creating and configuring PMEM namespaces on Compute nodes that have NVDIMM hardware. Cloud users can then create instances that request vPMEM when they need the instance content to be retained after it is shut down. To enable your cloud users to create instances that use PMEM, you must complete the following procedures: Designate Compute nodes for PMEM. Configure the Compute nodes for PMEM that have the NVDIMM hardware. Deploy the overcloud. Create PMEM flavors for launching instances that have vPMEM. Tip If the NVDIMM hardware is limited, you can also configure a host aggregate to optimize scheduling on the PMEM Compute nodes. To schedule only instances that request vPMEM on the PMEM Compute nodes, create a host aggregate of the Compute nodes that have the NVDIMM hardware, and configure the Compute scheduler to place only PMEM instances on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates . Prerequisites Your Compute node has persistent memory hardware, such as Intel(R) OptaneTM DC Persistent Memory. You have configured backend NVDIMM regions on the PMEM hardware device to create PMEM namespaces. You can use the ipmctl tool provided by Intel to configure the PMEM hardware. Limitations when using PMEM devices You cannot cold migrate, live migrate, resize or suspend and resume instances that use vPMEM. Only instances running RHEL8 can use vPMEM. When rebuilding a vPMEM instance, the persistent memory namespaces are removed to restore the initial state of the instance. When resizing an instance using a new flavor, the content of the original virtual persistent memory will not be copied to the new virtual persistent memory. Virtual persistent memory hotplugging is not supported. When creating a snapshot of a vPMEM instance, the virtual persistent images are not included. 13.1. Designating Compute nodes for PMEM To designate Compute nodes for PMEM workloads, you must create a new role file to configure the PMEM role, and configure a new overcloud flavor and resource class for PMEM to use to tag the NVDIMM Compute nodes. Procedure Log in to the undercloud as the stack user. 
Source the stackrc file: Generate a new roles data file named roles_data_pmem.yaml that includes the Controller , Compute , and ComputePMEM roles: Open roles_data_pmem.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputePMEM Role name name: Compute name: ComputePMEM description Basic Compute Node role PMEM Compute Node role HostnameFormatDefault %stackname%-novacompute-%index% %stackname%-novacomputepmem-%index% deprecated_nic_config_name compute.yaml compute-pmem.yaml Register the NVDIMM Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Director Installation and Usage guide. Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Director Installation and Usage guide. Create the compute-pmem overcloud flavor for PMEM Compute nodes: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. Note These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size. Tag each bare metal node that you want to designate for PMEM workloads with a custom PMEM resource class: Replace <node> with the ID of the bare metal node. Associate the compute-pmem flavor with the custom PMEM resource class: To determine the name of a custom resource class that corresponds to a resource class of a Bare Metal service node, convert the resource class to uppercase, replace all punctuation with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare metal resource class. Set the following flavor properties to prevent the Compute scheduler from using the bare metal flavor properties to schedule instances: Add the following parameters to the node-info.yaml file to specify the number of PMEM Compute nodes, and the flavor to use for the PMEM-designated Compute nodes: To verify that the role was created, enter the following command: 13.2. Configuring a PMEM Compute node To enable your cloud users to create instances that use vPMEM, you must configure the Compute nodes that have the NVDIMM hardware. Procedure Create a new Compute environment file for configuring NVDIMM Compute nodes, for example, env_pmem.yaml . To partition the NVDIMM regions into PMEM namespaces that the instances can use, add the NovaPMEMNamespaces role-specific parameter to the PMEM role in your Compute environment file, and set the value using the following format: Use the following suffixes to represent the size: "k" or "K" for KiB "m" or "M" for MiB "G" or "G" for GiB "t" or "T" for TiB For example, the following configuration creates four namespaces, three of size 6 GiB, and one of size 100 GiB: To map the PMEM namespaces to labels that can be used in a flavor, add the NovaPMEMMappings role-specific parameter to the PMEM role in your Compute environment file, and set the value using the following format: For example, the following configuration maps the three 6 GiB namespaces to the label "6GB", and the 100 GiB namespace to the label "LARGE": Save the updates to your Compute environment file. 
Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Create and configure the flavors that your cloud users can use to launch instances that have vPMEM. The following example creates a flavor that requests a small PMEM device, 6GB, as mapped in Step 3: Verification Create an instance using one of the PMEM flavors: Log in to the instance as a cloud user. For more information, see Connecting to an instance . List all the disk devices attached to the instance: The instance has vPMEM if one of the devices listed is NVDIMM. | [
"[stack@director ~]USD source ~/stackrc",
"(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_pmem.yaml Compute:ComputePMEM Compute Controller",
"(undercloud)USD openstack overcloud node introspect --all-manageable --provide",
"(undercloud)USD openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> compute-pmem",
"(undercloud)USD openstack baremetal node set --resource-class baremetal.PMEM <node>",
"(undercloud)USD openstack flavor set --property resources:CUSTOM_BAREMETAL_PMEM=1 compute-pmem",
"(undercloud)USD openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 compute-pmem",
"parameter_defaults: OvercloudComputePMEMFlavor: compute-pmem ComputePMEMCount: 3 #set to the no of NVDIMM devices you have",
"(undercloud)USD openstack overcloud profiles list",
"<size>:<namespace_name>[,<size>:<namespace_name>]",
"parameter_defaults: ComputePMEMParameters: NovaPMEMNamespaces: \"6G:ns0,6G:ns1,6G:ns2,100G:ns3\"",
"<label>:<namespace_name>[|<namespace_name>][,<label>:<namespace_name>[|<namespace_name>]].",
"parameter_defaults: ComputePMEMParameters: NovaPMEMNamespaces: \"6G:ns0,6G:ns1,6G:ns2,100G:ns3\" NovaPMEMMappings: \"6GB:ns0|ns1|ns2,LARGE:ns3\"",
"(undercloud)USD openstack overcloud deploy --templates -r /home/stack/templates/roles_data_pmem.yaml -e /home/stack/templates/node-info.yaml -e [your environment files] -e /home/stack/templates/env_pmem.yaml",
"(overcloud)USD openstack flavor create --vcpus 1 --ram 512 --disk 2 --property hw:pmem='6GB' small_pmem_flavor",
"(overcloud)USD openstack flavor list (overcloud)USD openstack server create --flavor small_pmem_flavor --image rhel8 pmem_instance",
"sudo fdisk -l /dev/pmem0"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-nvdimm-compute-nodes-to-provide-persistent-memory-for-instances_nvdimm |
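One way to confirm on a deployed Compute node that the namespaces defined through NovaPMEMNamespaces were created is the ndctl utility; this is an optional check, assuming ndctl is installed on the node (it is not part of the documented procedure):

```
# list the PMEM namespaces carved out of the NVDIMM regions, with sizes and modes
sudo ndctl list --namespaces --human

# show the backing NVDIMM regions and their remaining capacity
sudo ndctl list --regions --human
```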
Preface | Preface Red Hat OpenShift Data Foundation 4.8 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Azure clusters. Note Only internal OpenShift Data Foundation clusters are supported on Microsoft Azure. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the appropriate deployment process based on your requirement: Deploy OpenShift Data Foundation on Microsoft Azure Deploy standalone Multicloud Object Gateway component | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_microsoft_azure/preface-azure |
4.10. attr | 4.10. attr 4.10.1. RHBA-2011:1272 - attr bug fix update Updated attr packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. The attr packages provide extended attributes, which can be used to store system objects like capabilities of executables and access control lists, as well as user objects. Bug Fixes BZ# 651119 Prior to this update, the setfattr utility could not restore the original values of the attributes when the "getfattr -e text" or "getfattr --encoding=text" command was used to dump attributes with embedded null characters. This update fixes the encoding of these values in getfattr to prevent information loss. BZ# 665049 Prior to this update, the getfattr utility followed symbolic links to directories even if the "-h" or "--no-dereference" option was specified. Additionally, the description in the getfattr(1) man page that related to this functionality was misleading. This update fixes getfattr with the "-h" option so that it no longer follows the symbolic links and the related content of the getfattr(1) man page is now correct. BZ# 665050 Prior to this update, the getfattr utility did not return a non-zero exit code when an attribute specified in the "getfattr" command did not exist. This update fixes getfattr so that it now returns a non-zero exit code when an attribute does not exist. BZ# 674870 Prior to this update, supported methods for encoding values of the extended attributes were not properly described in the setfattr(1) man page. This update adds the appropriate descriptions of the encoding methods to the setfattr(1) man page. BZ# 702639 Prior to this update, the project web page address as stated in the package specification did not reflect the change of the upstream project web page address. This update corrects the project web page address in the package specification. BZ# 727307 Prior to this update, the attr library was built without support for read-only relocations (RELRO) flags. With this update, the library is now built with partial RELRO support. All users of attr are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/attr |
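A short illustration of the getfattr and setfattr behaviour covered by the fixes above; file.txt and link-to-dir are hypothetical names:

```
# set and read back a user extended attribute
setfattr -n user.comment -v "example value" file.txt
getfattr -n user.comment file.txt

# with -h, getfattr operates on the symbolic link itself rather than following it (BZ#665049)
getfattr -h -d link-to-dir

# querying an attribute that does not exist now returns a non-zero exit code (BZ#665050)
getfattr -n user.missing file.txt; echo "exit code: $?"
```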
14.5.19. Displaying a URI for Connection to a Graphical Display | 14.5.19. Displaying a URI for Connection to a Graphical Display Running the virsh domdisplay command will output a URI which can then be used to connect to the graphical display of the domain via VNC, SPICE, or RDP. If the --include-password option is used, the SPICE channel password will be included in the URI. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-Domain_Commands-Displaying_a_URI_for_connection_to_a_graphical_display |
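For example, with a hypothetical guest named guest1 (the section itself gives no sample invocation):

```
# print the URI for the guest's graphical console (VNC, SPICE, or RDP)
virsh domdisplay guest1

# include the SPICE channel password in the URI
virsh domdisplay --include-password guest1
```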
Chapter 5. Deploying the all-in-one Red Hat OpenStack Platform environment | Chapter 5. Deploying the all-in-one Red Hat OpenStack Platform environment Before you deploy your all-in-one Red Hat OpenStack Platform environment, ensure that your system is up to date: To deploy your all-in-one environment, complete the following steps: Log in to registry.redhat.io with your Red Hat credentials: Export the environment variables that the deployment command uses. In this example, deploy the all-in-one environment with the eth1 interface that has the IP address 192.168.25.2 on the management network: Run the deploy command. Ensure that you include all .yaml files relevant to your environment: After a successful deployment, you can use the clouds.yaml configuration file in the /home/$USER/.config/openstack directory to query and verify the OpenStack services: To access the dashboard, use the default username admin and the undercloud_admin_password from the ~/standalone-passwords.conf file: | [
"[stack@all-in-one]USD sudo dnf update [stack@all-in-one]USD sudo reboot",
"[stack@all-in-one]USD sudo podman login registry.redhat.io",
"[stack@all-in-one]USD export IP=192.168.25.2 [stack@all-in-one]USD export NETMASK=24 [stack@all-in-one]USD export INTERFACE=eth1",
"[stack@all-in-one]USD sudo openstack tripleo deploy --templates --local-ip=USDIP/USDNETMASK -e /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml -r /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml -e USDHOME/containers-prepare-parameters.yaml -e USDHOME/standalone_parameters.yaml --output-dir USDHOME --standalone",
"[stack@all-in-one]USD export OS_CLOUD=standalone [stack@all-in-one]USD openstack endpoint list",
"[stack@all-in-one]USD cat standalone-passwords.conf | grep undercloud_admin_password:"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/quick_start_guide/deploying-the-all-in-one-openstack-installation |
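After the deployment command above completes, a few read-only checks can confirm that the core services respond; this is a sketch of optional verification steps, not part of the documented procedure:

```
[stack@all-in-one]$ export OS_CLOUD=standalone
[stack@all-in-one]$ openstack compute service list   # nova services should report state "up"
[stack@all-in-one]$ openstack network agent list     # neutron agents should report Alive
[stack@all-in-one]$ openstack image list             # confirms the image service answers API calls
```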
Data Grid REST API | Data Grid REST API Red Hat Data Grid 8.5 Configure and interact with the Data Grid REST API Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_rest_api/index |
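As a flavour of the interface this guide documents, a minimal sketch against a local server using the v2 REST endpoints; the address localhost:11222, the credentials, and the cache name mycache are assumptions, not values from the guide:

```
# list the caches defined on the server
curl -u admin:password http://localhost:11222/rest/v2/caches

# write an entry to the cache "mycache", then read it back
curl -u admin:password -X POST -d "value1" http://localhost:11222/rest/v2/caches/mycache/key1
curl -u admin:password http://localhost:11222/rest/v2/caches/mycache/key1
```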
Chapter 2. Managing user accounts using the command line | Chapter 2. Managing user accounts using the command line There are several stages in the user life cycle in IdM (Identity Management), including the following: Create user accounts Activate stage user accounts Preserve user accounts Delete active, stage, or preserved user accounts Restore preserved user accounts 2.1. User life cycle Identity Management (IdM) supports three user account states: Stage users are not allowed to authenticate. This is an initial state. Some of the user account properties required for active users cannot be set, for example, group membership. Active users are allowed to authenticate. All required user account properties must be set in this state. Preserved users are former active users that are considered inactive and cannot authenticate to IdM. Preserved users retain most of the account properties they had as active users, but they are not part of any user groups. You can delete user entries permanently from the IdM database. Important Deleted user accounts cannot be restored. When you delete a user account, all the information associated with the account is permanently lost. A new administrator can only be created by a user with administrator rights, such as the default admin user. If you accidentally delete all administrator accounts, the Directory Manager must create a new administrator manually in the Directory Server. Warning Do not delete the admin user. As admin is a pre-defined user required by IdM, this operation causes problems with certain commands. If you want to define and use an alternative admin user, disable the pre-defined admin user with ipa user-disable admin after you granted admin permissions to at least one different user. Warning Do not add local users to IdM. The Name Service Switch (NSS) always resolves IdM users and groups before resolving local users and groups. This means that, for example, IdM group membership does not work for local users. 2.2. Adding users using the command line You can add users as: Active - user accounts which can be actively used by their users. Stage - users cannot use these accounts. Create stage users if you want to prepare new user accounts. When users are ready to use their accounts, then you can activate them. The following procedure describes adding active users to the IdM server with the ipa user-add command. Similarly, you can create stage user accounts with the ipa stageuser-add command. Warning IdM automatically assigns a unique user ID (UID) to new user accounts. You can assign a UID manually by using the --uid=INT option with the ipa user-add command, but the server does not validate whether the UID number is unique. Consequently, multiple user entries might have the same UID number. A similar problem can occur with user private group IDs (GIDs) if you assign a GID to a user account manually by using the --gidnumber=INT option. To check if you have multiple user entries with the same ID, enter ipa user-find --uid=<uid> or ipa user-find --gidnumber=<gidnumber> . Red Hat recommends you do not have multiple entries with the same UIDs or GIDs. If you have objects with duplicate IDs, security identifiers (SIDs) are not generated correctly. SIDs are crucial for trusts between IdM and Active Directory and for Kerberos authentication to work correctly. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . 
Procedure Open terminal and connect to the IdM server. Add the user login, the user's first name and last name, and optionally their email address. IdM supports user names that can be described by the following regular expression: Note User names ending with the trailing dollar sign ($) are supported to enable Samba 3.x machine support. If you add a user name containing uppercase characters, IdM automatically converts the name to lowercase when saving it. Therefore, IdM always requires user names to be entered in lowercase when logging in. Additionally, it is not possible to add user names which differ only in letter casing, such as user and User . The default maximum length for user names is 32 characters. To change it, use the ipa config-mod --maxusername command. For example, to increase the maximum user name length to 64 characters: The ipa user-add command includes many parameters. To list them all, use the ipa help command: For details about the ipa help command, see What is the IPA help . You can verify that the new user account was successfully created by listing all IdM user accounts: This command lists all user accounts with details. Additional resources Strengthening Kerberos security with PAC information Are user/group collisions supported in Red Hat Enterprise Linux? (Red Hat Knowledgebase) Users without SIDs cannot log in to IdM after an upgrade 2.3. Activating users using the command line To activate a user account by moving it from stage to active, use the ipa stageuser-activate command. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Open terminal and connect to the IdM server. Activate the user account with the following command: You can verify that the user account was successfully activated by listing all IdM user accounts: This command lists all user accounts with details. 2.4. Preserving users using the command line You can preserve a user account if you want to remove it, but keep the option to restore it later. To preserve a user account, use the --preserve option with the ipa user-del or ipa stageuser-del commands. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Open terminal and connect to the IdM server. Preserve the user account with the following command: Note Despite the output saying the user account was deleted, it has been preserved. 2.5. Deleting users using the command line IdM (Identity Management) enables you to delete users permanently. You can delete: Active users with the following command: ipa user-del Stage users with the following command: ipa stageuser-del Preserved users with the following command: ipa user-del When deleting multiple users, use the --continue option to force the command to continue regardless of errors. A summary of the successful and failed operations is printed to the stdout standard output stream when the command completes. If you do not use --continue , the command proceeds with deleting users until it encounters an error, after which it stops and exits. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Open terminal and connect to the IdM server.
Delete the user account with the following command: The user account has been permanently deleted from IdM. 2.6. Restoring users using the command line You can restore a preserved user to: Active users: ipa user-undel Stage users: ipa user-stage Restoring a user account does not restore all of the account's attributes. For example, the user's password is not restored and must be set again. Prerequisites Administrator privileges for managing IdM or User Administrator role. Obtained a Kerberos ticket. For details, see Using kinit to log in to IdM manually . Procedure Open terminal and connect to the IdM server. Restore the user account with the following command: Alternatively, you can restore user accounts as staged: Verification You can verify that the user account was successfully restored by listing all IdM user accounts: This command lists all user accounts with details. | [
"ipa user-add user_login --first=first_name --last=last_name --email=email_address",
"[a-zA-Z0-9_.][a-zA-Z0-9_.-]{0,252}[a-zA-Z0-9_.USD-]?",
"ipa config-mod --maxusername=64 Maximum username length: 64",
"ipa help user-add",
"ipa user-find",
"ipa stageuser-activate user_login ------------------------- Stage user user_login activated -------------------------",
"ipa user-find",
"ipa user-del --preserve user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa user-del --continue user1 user2 user3",
"ipa user-del user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa user-undel user_login ------------------------------ Undeleted user account \"user_login\" ------------------------------",
"ipa user-stage user_login ------------------------------ Staged user account \"user_login\" ------------------------------",
"ipa user-find"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/managing-user-accounts-using-the-command-line_managing-users-groups-hosts |
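The following shell sketch strings the commands from this chapter into one life cycle; the user name, UID, and email address are illustrative only:

ipa user-add jsmith --first=John --last=Smith --email=jsmith@example.com
ipa user-find --uid=10042          # check for an existing entry before assigning a UID manually
ipa user-del --preserve jsmith     # preserve the account instead of deleting it permanently
ipa user-undel jsmith              # restore the preserved account as an active user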
Chapter 7. Bucket policies in the Multicloud Object Gateway | Chapter 7. Bucket policies in the Multicloud Object Gateway OpenShift Data Foundation supports AWS S3 bucket policies. Bucket policies allow you to grant users access permissions for buckets and the objects in them. 7.1. Introduction to bucket policies Bucket policies are an access policy option available for you to grant permission to your AWS S3 buckets and objects. Bucket policies use JSON-based access policy language. For more information about access policy language, see AWS Access Policy Language Overview . 7.2. Using bucket policies in Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Access to the Multicloud Object Gateway (MCG), see Chapter 2, Accessing the Multicloud Object Gateway with your applications A valid Multicloud Object Gateway user account. See Creating a user in the Multicloud Object Gateway for instructions to create a user account. Procedure To use bucket policies in the MCG: Create the bucket policy in JSON format. For example: Replace [email protected] with a valid Multicloud Object Gateway user account. Using the AWS S3 client, use the put-bucket-policy command to apply the bucket policy to your S3 bucket: Replace ENDPOINT with the S3 endpoint. Replace MyBucket with the bucket to set the policy on. Replace BucketPolicy with the bucket policy JSON file. Add --no-verify-ssl if you are using the default self-signed certificates. For example: For more information on the put-bucket-policy command, see the AWS CLI Command Reference for put-bucket-policy . Note The principal element specifies the user that is allowed or denied access to a resource, such as a bucket. Currently, only NooBaa accounts can be used as principals. In the case of object bucket claims, NooBaa automatically creates an account obc-account.<generated bucket name>@noobaa.io . Note Bucket policy conditions are not supported. Additional resources There are many available elements for bucket policies with regard to access permissions. For details on these elements and examples of how they can be used to control the access permissions, see AWS Access Policy Language Overview . For more examples of bucket policies, see AWS Bucket Policy Examples . OpenShift Data Foundation version 4.17 introduces the bucket policy elements NotPrincipal , NotAction , and NotResource . For more information on these elements, see IAM JSON policy elements reference . 7.3. Creating a user in the Multicloud Object Gateway Prerequisites A running OpenShift Data Foundation Platform. Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux (x86_64), Windows, and Mac OS. Procedure Execute the following command to create an MCG user account: <noobaa-account-name> Specify the name of the new MCG user account. --allow_bucket_create Allows the user to create new buckets. --allowed_buckets Sets the user's allowed bucket list (use commas or multiple flags). --default_resource Sets the default resource. The new buckets are created on this default resource (including the future ones). --full_permission Allows this account to access all existing and future buckets. Important You need to provide permission to access at least one bucket or full permission to access all the buckets. | [
"{ \"Version\": \"NewVersion\", \"Statement\": [ { \"Sid\": \"Example\", \"Effect\": \"Allow\", \"Principal\": [ \"[email protected]\" ], \"Action\": [ \"s3:GetObject\" ], \"Resource\": [ \"arn:aws:s3:::john_bucket\" ] } ] }",
"aws --endpoint ENDPOINT --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file:// BucketPolicy",
"aws --endpoint https://s3-openshift-storage.apps.gogo44.noobaa.org --no-verify-ssl s3api put-bucket-policy -bucket MyBucket --policy file://BucketPolicy",
"noobaa account create <noobaa-account-name> [--allow_bucket_create=true] [--allowed_buckets=[]] [--default_resource=''] [--full_permission=false]"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_hybrid_and_multicloud_resources/bucket-policies-in-the-multicloud-object-gateway |
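As a sketch of how these pieces fit together, the commands below create an account to use as the policy principal, apply the policy, and read it back for verification; the account name, endpoint, bucket name, and default resource are assumptions for your environment, and the account name must match the principal in your policy JSON:

noobaa account create myuser@noobaa.io --full_permission=true --default_resource=noobaa-default-backing-store
aws --endpoint <ENDPOINT> --no-verify-ssl s3api put-bucket-policy --bucket MyBucket --policy file://BucketPolicy
aws --endpoint <ENDPOINT> --no-verify-ssl s3api get-bucket-policy --bucket MyBucket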
Chapter 2. Architectures | Chapter 2. Architectures Red Hat Enterprise Linux 8.0 is distributed with the kernel version 4.18.0-80, which provides support for the following architectures: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z Make sure you purchase the appropriate subscription for each architecture. For more information, see Get Started with Red Hat Enterprise Linux - additional architectures . For a list of available subscriptions, see Subscription Utilization on the Customer Portal. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.0_release_notes/architectures |
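To check which of these architectures a host reports, the kernel's machine name is usually enough:

uname -m
# x86_64  -> AMD and Intel 64-bit
# aarch64 -> 64-bit ARM
# ppc64le -> IBM Power Systems, Little Endian
# s390x   -> IBM Z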
4.10. Suspending Activity on a File System | 4.10. Suspending Activity on a File System You can suspend write activity to a file system by using the gfs_tool freeze command. Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. The gfs_tool unfreeze command ends the suspension. Usage Start Suspension End Suspension MountPoint Specifies the file system. Examples This example suspends writes to file system /gfs . This example ends suspension of writes to file system /gfs . | [
"gfs_tool freeze MountPoint",
"gfs_tool unfreeze MountPoint",
"gfs_tool freeze /gfs",
"gfs_tool unfreeze /gfs"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/global_file_system/s1-manage-suspendfs |
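A typical snapshot window is sketched below; the snapshot itself is taken by your storage array or volume manager, not by GFS:

gfs_tool freeze /gfs
# trigger the hardware- or volume-level snapshot of the underlying device here,
# for example through your storage array's management CLI
gfs_tool unfreeze /gfs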
Chapter 8. Using metrics with dashboards and alerts | Chapter 8. Using metrics with dashboards and alerts The Network Observability Operator uses the flowlogs-pipeline to generate metrics from flow logs. You can utilize these metrics by setting custom alerts and viewing dashboards. 8.1. Viewing Network Observability metrics dashboards On the Overview tab in the OpenShift Container Platform console, you can view the overall aggregated metrics of the network traffic flow on the cluster. You can choose to display the information by node, namespace, owner, pod, and service. You can also use filters and display options to further refine the metrics. Procedure In the web console Observe Dashboards , select the Netobserv dashboard. View network traffic metrics in the following categories, with each having the subset per node, namespace, source, and destination: Byte rates Packet drops DNS RTT Select the Netobserv/Health dashboard. View metrics about the health of the Operator in the following categories, with each having the subset per node, namespace, source, and destination. Flows Flows Overhead Flow rates Agents Processor Operator Infrastructure and Application metrics are shown in a split-view for namespace and workloads. 8.2. Predefined metrics Metrics generated by the flowlogs-pipeline are configurable in the spec.processor.metrics.includeList of the FlowCollector custom resource to add or remove metrics. 8.3. Network Observability metrics You can also create alerts by using the includeList metrics in Prometheus rules, as shown in the example "Creating alerts". When looking for these metrics in Prometheus, such as in the Console through Observe Metrics , or when defining alerts, all the metrics names are prefixed with netobserv_ . For example, netobserv_namespace_flows_total . Available metrics names are as follows: includeList metrics names Names followed by an asterisk * are enabled by default. namespace_egress_bytes_total namespace_egress_packets_total namespace_ingress_bytes_total namespace_ingress_packets_total namespace_flows_total * node_egress_bytes_total node_egress_packets_total node_ingress_bytes_total * node_ingress_packets_total node_flows_total workload_egress_bytes_total workload_egress_packets_total workload_ingress_bytes_total * workload_ingress_packets_total workload_flows_total PacketDrop metrics names When the PacketDrop feature is enabled in spec.agent.ebpf.features (with privileged mode), the following additional metrics are available: namespace_drop_bytes_total namespace_drop_packets_total * node_drop_bytes_total node_drop_packets_total workload_drop_bytes_total workload_drop_packets_total DNS metrics names When the DNSTracking feature is enabled in spec.agent.ebpf.features , the following additional metrics are available: namespace_dns_latency_seconds * node_dns_latency_seconds workload_dns_latency_seconds FlowRTT metrics names When the FlowRTT feature is enabled in spec.agent.ebpf.features , the following additional metrics are available: namespace_rtt_seconds * node_rtt_seconds workload_rtt_seconds Network events metrics names When NetworkEvents feature is enabled, this metric is available by default: namespace_network_policy_events_total 8.4. Creating alerts You can create custom alerting rules for the Netobserv dashboard metrics to trigger alerts when some defined conditions are met. Prerequisites You have access to the cluster as a user with the cluster-admin role or with view permissions for all projects. You have the Network Observability Operator installed. 
Procedure Create a YAML file by clicking the import icon, + . Add an alerting rule configuration to the YAML file. In the YAML sample that follows, an alert is created for when the cluster ingress traffic reaches a given threshold of 10 MBps per destination workload. apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-alerts namespace: openshift-monitoring spec: groups: - name: NetObservAlerts rules: - alert: NetObservIncomingBandwidth annotations: message: |- {{ USDlabels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ USDlabels.DstK8S_OwnerType }} {{ USDlabels.DstK8S_OwnerName }} ({{ USDlabels.DstK8S_Namespace }}). summary: "High incoming traffic." expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace="openshift-ingress"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 1 for: 30s labels: severity: warning 1 The netobserv_workload_ingress_bytes_total metric is enabled by default in spec.processor.metrics.includeList . Click Create to apply the configuration file to the cluster. 8.5. Custom metrics You can create custom metrics out of the flowlogs data using the FlowMetric API. In every flowlogs data that is collected, there are a number of fields labeled per log, such as source name and destination name. These fields can be leveraged as Prometheus labels to enable the customization of cluster information on your dashboard. 8.6. Configuring custom metrics by using FlowMetric API You can configure the FlowMetric API to create custom metrics by using flowlogs data fields as Prometheus labels. You can add multiple FlowMetric resources to a project to see multiple dashboard views. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select FlowMetric . In the Project: dropdown list, select the project of the Network Observability Operator instance. Click Create FlowMetric . Configure the FlowMetric resource, similar to the following sample configurations: Example 8.1. Generate a metric that tracks ingress bytes received from cluster external sources apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 spec: metricName: cluster_external_ingress_bytes_total 2 type: Counter 3 valueField: Bytes direction: Ingress 4 labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] 5 filters: 6 - field: SrcSubnetLabel matchType: Absence 1 The FlowMetric resources need to be created in the namespace defined in the FlowCollector spec.namespace , which is netobserv by default. 2 The name of the Prometheus metric, which in the web console appears with the prefix netobserv-<metricName> . 3 The type specifies the type of metric. The Counter type is useful for counting bytes or packets. 4 The direction of traffic to capture. If not specified, both ingress and egress are captured, which can lead to duplicated counts. 5 Labels define what the metrics look like and the relationship between the different entities and also define the metrics cardinality. For example, SrcK8S_Name is a high cardinality metric. 6 Refines results based on the listed criteria. In this example, selecting only the cluster external traffic is done by matching only flows where SrcSubnetLabel is absent. This assumes the subnet labels feature is enabled (via spec.processor.subnetLabels ), which is done by default. 
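Whether predefined or created through the FlowMetric API, the resulting metrics can also be queried from the command line through the cluster monitoring stack. The following sketch assumes the default thanos-querier route in the openshift-monitoring namespace and a logged-in user whose token grants query access:

TOKEN=$(oc whoami -t)
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query" \
  --data-urlencode 'query=sum(rate(netobserv_namespace_flows_total[2m]))'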
Verification Once the pods refresh, navigate to Observe Metrics . In the Expression field, type the metric name to view the corresponding result. You can also enter an expression, such as topk(5, sum(rate(netobserv_cluster_external_ingress_bytes_total{DstK8S_Namespace="my-namespace"}[2m])) by (DstK8S_HostName, DstK8S_OwnerName, DstK8S_OwnerType)) Example 8.2. Show RTT latency for cluster external ingress traffic apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-rtt namespace: netobserv 1 spec: metricName: cluster_external_ingress_rtt_seconds type: Histogram 2 valueField: TimeFlowRttNs direction: Ingress labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] filters: - field: SrcSubnetLabel matchType: Absence - field: TimeFlowRttNs matchType: Presence divider: "1000000000" 3 buckets: [".001", ".005", ".01", ".02", ".03", ".04", ".05", ".075", ".1", ".25", "1"] 4 1 The FlowMetric resources need to be created in the namespace defined in the FlowCollector spec.namespace , which is netobserv by default. 2 The type specifies the type of metric. The Histogram type is useful for a latency value ( TimeFlowRttNs ). 3 Since the Round-trip time (RTT) is provided as nanos in flows, use a divider of 1 billion to convert into seconds, which is standard in Prometheus guidelines. 4 The custom buckets specify precision on RTT, with optimal precision ranging between 5ms and 250ms. Verification Once the pods refresh, navigate to Observe Metrics . In the Expression field, you can type the metric name to view the corresponding result. 8.7. Creating metrics from nested or array fields in the Traffic flows table Important OVN Observability / Viewing NetworkEvents is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important OVN Observability and the ability to view and track network events is available only in OpenShift Container Platform 4.17 and 4.18. You can create a FlowMetric resource to generate metrics for nested or array fields in the Traffic flows table, such as Network events or Interfaces . The following example shows how to generate metrics from the Network events field for network policy events. Prerequsities Enable NetworkEvents feature . See the Additional resources for how to do this. A network policy specified. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select FlowMetric . In the Project dropdown list, select the project of the Network Observability Operator instance. Click Create FlowMetric . 
Create FlowMetric resources to add the following configurations: Configuration counting network policy events per policy name and namespace apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: network-policy-events namespace: netobserv spec: metricName: network_policy_events_total type: Counter labels: [NetworkEvents>Type, NetworkEvents>Namespace, NetworkEvents>Name, NetworkEvents>Action, NetworkEvents>Direction] 1 filters: - field: NetworkEvents>Feature value: acl flatten: [NetworkEvents] 2 remap: 3 "NetworkEvents>Type": type "NetworkEvents>Namespace": namespace "NetworkEvents>Name": name "NetworkEvents>Direction": direction 1 These labels represent the nested fields for Network Events from the Traffic flows table. Each network event has a specific type, namespace, name, action, and direction. You can alternatively specify the Interfaces if NetworkEvents is unavailable in your OpenShift Container Platform version. 2 Optional: You can choose to represent a field that contains a list of items as distinct items. 3 Optional: You can rename the fields in Prometheus. Verification In the web console, navigate to Observe Dashboards and scroll down to see the Network Policy tab. You should begin seeing metrics filter in based on the metric you created along with the network policy specifications. Important High cardinality can affect the memory usage of Prometheus. You can check whether specific labels have high cardinality in the Network Flows format reference . 8.8. Configuring custom charts using FlowMetric API You can generate charts for dashboards in the OpenShift Container Platform web console, which you can view as an administrator in the Dashboard menu by defining the charts section of the FlowMetric resource. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select FlowMetric . In the Project: dropdown list, select the project of the Network Observability Operator instance. Click Create FlowMetric . Configure the FlowMetric resource, similar to the following sample configurations: Example 8.3. Chart for tracking ingress bytes received from cluster external sources apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 # ... charts: - dashboardName: Main 2 title: External ingress traffic unit: Bps type: SingleStat queries: - promQL: "sum(rate(USDMETRIC[2m]))" legend: "" - dashboardName: Main 3 sectionName: External title: Top external ingress traffic per workload unit: Bps type: StackArea queries: - promQL: "sum(rate(USDMETRIC{DstK8S_Namespace!=\"\"}[2m])) by (DstK8S_Namespace, DstK8S_OwnerName)" legend: "{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}" # ... 1 The FlowMetric resources need to be created in the namespace defined in the FlowCollector spec.namespace , which is netobserv by default. Verification Once the pods refresh, navigate to Observe Dashboards . Search for the NetObserv / Main dashboard. View two panels under the NetObserv / Main dashboard, or optionally a dashboard name that you create: A textual single statistic showing the global external ingress rate summed across all dimensions A timeseries graph showing the same metric per destination workload For more information about the query language, refer to the Prometheus documentation . Example 8.4. 
Chart for RTT latency for cluster external ingress traffic apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 # ... charts: - dashboardName: Main 2 title: External ingress TCP latency unit: seconds type: SingleStat queries: - promQL: "histogram_quantile(0.99, sum(rate(USDMETRIC_bucket[2m])) by (le)) > 0" legend: "p99" - dashboardName: Main 3 sectionName: External title: "Top external ingress sRTT per workload, p50 (ms)" unit: seconds type: Line queries: - promQL: "histogram_quantile(0.5, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\"\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0" legend: "{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}" - dashboardName: Main 4 sectionName: External title: "Top external ingress sRTT per workload, p99 (ms)" unit: seconds type: Line queries: - promQL: "histogram_quantile(0.99, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\"\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0" legend: "{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}" # ... 1 The FlowMetric resources need to be created in the namespace defined in the FlowCollector spec.namespace , which is netobserv by default. 2 3 4 Using a different dashboardName creates a new dashboard that is prefixed with Netobserv . For example, Netobserv / <dashboard_name> . This example uses the histogram_quantile function to show p50 and p99 . You can show averages of histograms by dividing the metric, USDMETRIC_sum , by the metric, USDMETRIC_count , which are automatically generated when you create a histogram. With the preceding example, the Prometheus query to do this is as follows: promQL: "(sum(rate(USDMETRIC_sum{DstK8S_Namespace!=\"\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName) / sum(rate(USDMETRIC_count{DstK8S_Namespace!=\"\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName))*1000" Verification Once the pods refresh, navigate to Observe Dashboards . Search for the NetObserv / Main dashboard. View the new panel under the NetObserv / Main dashboard, or optionally a dashboard name that you create. For more information about the query language, refer to the Prometheus documentation . 8.9. Detecting SYN flooding using the FlowMetric API and TCP flags You can create an AlertingRule resouce to alert for SYN flooding. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select FlowMetric . In the Project dropdown list, select the project of the Network Observability Operator instance. Click Create FlowMetric . 
Create FlowMetric resources to add the following configurations: Configuration counting flows per destination host and resource, with TCP flags apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-destination spec: metricName: flows_with_flags_per_destination_total type: Counter labels: [SrcSubnetLabel,DstSubnetLabel,DstK8S_Name,DstK8S_Type,DstK8S_HostName,DstK8S_Namespace,Flags] Configuration counting flows per source host and resource, with TCP flags apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-source spec: metricName: flows_with_flags_per_source_total type: Counter labels: [DstSubnetLabel,SrcSubnetLabel,SrcK8S_Name,SrcK8S_Type,SrcK8S_HostName,SrcK8S_Namespace,Flags] Deploy the following AlertingRule resource to alert for SYN flooding: AlertingRule for SYN flooding apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-syn-alerts namespace: openshift-monitoring # ... spec: groups: - name: NetObservSYNAlerts rules: - alert: NetObserv-SYNFlood-in annotations: message: |- {{ USDlabels.job }}: incoming SYN-flood attack suspected to Host={{ USDlabels.DstK8S_HostName}}, Namespace={{ USDlabels.DstK8S_Namespace }}, Resource={{ USDlabels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: "Incoming SYN-flood" expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags="2"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 1 for: 15s labels: severity: warning app: netobserv - alert: NetObserv-SYNFlood-out annotations: message: |- {{ USDlabels.job }}: outgoing SYN-flood attack suspected from Host={{ USDlabels.SrcK8S_HostName}}, Namespace={{ USDlabels.SrcK8S_Namespace }}, Resource={{ USDlabels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: "Outgoing SYN-flood" expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags="2"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 2 for: 15s labels: severity: warning app: netobserv # ... 1 2 In this example, the threshold for the alert is 300 ; however, you can adapt this value empirically. A threshold that is too low might produce false-positives, and if it's too high it might miss actual attacks. Verification In the web console, click Manage Columns in the Network Traffic table view and click TCP flags . In the Network Traffic table view, filter on TCP protocol SYN TCPFlag . A large number of flows with the same byteSize indicates a SYN flood. Go to Observe Alerting and select the Alerting Rules tab. Filter on netobserv-synflood-in alert . The alert should fire when SYN flooding occurs. Additional resources Filtering eBPF flow data using a global rule Creating alerting rules for user-defined projects . Troubleshooting high cardinality metrics- Determining why Prometheus is consuming a lot of disk space | [
"apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-alerts namespace: openshift-monitoring spec: groups: - name: NetObservAlerts rules: - alert: NetObservIncomingBandwidth annotations: message: |- {{ USDlabels.job }}: incoming traffic exceeding 10 MBps for 30s on {{ USDlabels.DstK8S_OwnerType }} {{ USDlabels.DstK8S_OwnerName }} ({{ USDlabels.DstK8S_Namespace }}). summary: \"High incoming traffic.\" expr: sum(rate(netobserv_workload_ingress_bytes_total {SrcK8S_Namespace=\"openshift-ingress\"}[1m])) by (job, DstK8S_Namespace, DstK8S_OwnerName, DstK8S_OwnerType) > 10000000 1 for: 30s labels: severity: warning",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 spec: metricName: cluster_external_ingress_bytes_total 2 type: Counter 3 valueField: Bytes direction: Ingress 4 labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] 5 filters: 6 - field: SrcSubnetLabel matchType: Absence",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-rtt namespace: netobserv 1 spec: metricName: cluster_external_ingress_rtt_seconds type: Histogram 2 valueField: TimeFlowRttNs direction: Ingress labels: [DstK8S_HostName,DstK8S_Namespace,DstK8S_OwnerName,DstK8S_OwnerType] filters: - field: SrcSubnetLabel matchType: Absence - field: TimeFlowRttNs matchType: Presence divider: \"1000000000\" 3 buckets: [\".001\", \".005\", \".01\", \".02\", \".03\", \".04\", \".05\", \".075\", \".1\", \".25\", \"1\"] 4",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: network-policy-events namespace: netobserv spec: metricName: network_policy_events_total type: Counter labels: [NetworkEvents>Type, NetworkEvents>Namespace, NetworkEvents>Name, NetworkEvents>Action, NetworkEvents>Direction] 1 filters: - field: NetworkEvents>Feature value: acl flatten: [NetworkEvents] 2 remap: 3 \"NetworkEvents>Type\": type \"NetworkEvents>Namespace\": namespace \"NetworkEvents>Name\": name \"NetworkEvents>Direction\": direction",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress traffic unit: Bps type: SingleStat queries: - promQL: \"sum(rate(USDMETRIC[2m]))\" legend: \"\" - dashboardName: Main 3 sectionName: External title: Top external ingress traffic per workload unit: Bps type: StackArea queries: - promQL: \"sum(rate(USDMETRIC{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace, DstK8S_OwnerName)\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flowmetric-cluster-external-ingress-traffic namespace: netobserv 1 charts: - dashboardName: Main 2 title: External ingress TCP latency unit: seconds type: SingleStat queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket[2m])) by (le)) > 0\" legend: \"p99\" - dashboardName: Main 3 sectionName: External title: \"Top external ingress sRTT per workload, p50 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.5, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\" - dashboardName: Main 4 sectionName: External title: \"Top external ingress sRTT per workload, p99 (ms)\" unit: seconds type: Line queries: - promQL: \"histogram_quantile(0.99, sum(rate(USDMETRIC_bucket{DstK8S_Namespace!=\\\"\\\"}[2m])) by (le,DstK8S_Namespace,DstK8S_OwnerName))*1000 > 0\" legend: \"{{DstK8S_Namespace}} / {{DstK8S_OwnerName}}\"",
"promQL: \"(sum(rate(USDMETRIC_sum{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName) / sum(rate(USDMETRIC_count{DstK8S_Namespace!=\\\"\\\"}[2m])) by (DstK8S_Namespace,DstK8S_OwnerName))*1000\"",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-destination spec: metricName: flows_with_flags_per_destination_total type: Counter labels: [SrcSubnetLabel,DstSubnetLabel,DstK8S_Name,DstK8S_Type,DstK8S_HostName,DstK8S_Namespace,Flags]",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowMetric metadata: name: flows-with-flags-per-source spec: metricName: flows_with_flags_per_source_total type: Counter labels: [DstSubnetLabel,SrcSubnetLabel,SrcK8S_Name,SrcK8S_Type,SrcK8S_HostName,SrcK8S_Namespace,Flags]",
"apiVersion: monitoring.openshift.io/v1 kind: AlertingRule metadata: name: netobserv-syn-alerts namespace: openshift-monitoring spec: groups: - name: NetObservSYNAlerts rules: - alert: NetObserv-SYNFlood-in annotations: message: |- {{ USDlabels.job }}: incoming SYN-flood attack suspected to Host={{ USDlabels.DstK8S_HostName}}, Namespace={{ USDlabels.DstK8S_Namespace }}, Resource={{ USDlabels.DstK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Incoming SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_destination_total{Flags=\"2\"}[1m])) by (job, DstK8S_HostName, DstK8S_Namespace, DstK8S_Name) > 300 1 for: 15s labels: severity: warning app: netobserv - alert: NetObserv-SYNFlood-out annotations: message: |- {{ USDlabels.job }}: outgoing SYN-flood attack suspected from Host={{ USDlabels.SrcK8S_HostName}}, Namespace={{ USDlabels.SrcK8S_Namespace }}, Resource={{ USDlabels.SrcK8S_Name }}. This is characterized by a high volume of SYN-only flows with different source IPs and/or ports. summary: \"Outgoing SYN-flood\" expr: sum(rate(netobserv_flows_with_flags_per_source_total{Flags=\"2\"}[1m])) by (job, SrcK8S_HostName, SrcK8S_Namespace, SrcK8S_Name) > 300 2 for: 15s labels: severity: warning app: netobserv"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_observability/metrics-dashboards-alerts |
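If you prefer the CLI to the web console forms used throughout this chapter, the same AlertingRule and FlowMetric resources can be applied from files and checked afterwards; the file names below are illustrative:

oc apply -f netobserv-alerts.yaml
oc get alertingrules.monitoring.openshift.io -n openshift-monitoring
oc apply -f flowmetric-cluster-external-ingress-traffic.yaml
oc get flowmetrics -n netobserv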
Chapter 2. Configuring hosted control planes | Chapter 2. Configuring hosted control planes To get started with hosted control planes for OpenShift Container Platform, you first configure your hosted cluster on the provider that you want to use. Then, you complete a few management tasks. Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can view the procedures by selecting from one of the following providers: 2.1. Amazon Web Services (AWS) Configuring the hosting cluster on AWS (Technology Preview) : The tasks to configure a hosted cluster on AWS include creating the AWS S3 OIDC secret, creating a routable public zone, enabling external DNS, enabling AWS PrivateLink, enabling the hosted control planes feature, and installing the hosted control planes CLI. Managing hosted control plane clusters on AWS (Technology Preview) : Management tasks include creating, importing, accessing, or deleting a hosted cluster on AWS. 2.2. Bare metal Configuring the hosting cluster on bare metal (Technology Preview) : Configure DNS before you create a hosted cluster. Managing hosted control plane clusters on bare metal (Technology Preview) : Create a hosted cluster, create an InfraEnv resource, add agents, access the hosted cluster, scale the NodePool object, handle Ingress, enable node auto-scaling, or delete a hosted cluster. 2.3. OpenShift Virtualization Managing hosted control plane clusters on OpenShift Virtualization (Technology Preview) : Create OpenShift Container Platform clusters with worker nodes that are hosted by KubeVirt virtual machines. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/hosted_control_planes/hcp-configuring |
Chapter 8. IO Scheduler and block IO Tapset | Chapter 8. IO Scheduler and block IO Tapset This family of probe points is used to probe block IO layer and IO scheduler activities. It contains the following probe points: | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/iosched.stp |
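A minimal sketch of how such probes are used from the shell; it assumes the systemtap and matching kernel debuginfo packages are installed and the command is run as root, and the variable names follow this tapset reference:

stap -v -e 'probe ioscheduler.elv_add_request {
  printf("%s queued a request on dev %d:%d via the %s elevator\n",
         execname(), disk_major, disk_minor, elevator_name)
}'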
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 8 release of Eclipse Temurin includes, see OpenJDK 8u432 Released . New features and enhancements Eclipse Temurin 8.0.432 includes the following new features and enhancements. TLS_ECDH_* cipher suites are disabled by default The TLS Elliptic-curve Diffie-Hellman (TLS_ECDH) cipher suites do not preserve forward secrecy and they are rarely used. OpenJDK 8.0.432 disables the TLS_ECDH cipher suites by adding the ECDH option to the jdk.tls.disabledAlgorithms security property in the java.security configuration file. If you attempt to use the TLS_ECDH cipher suites, OpenJDK now throws an SSLHandshakeException error. If you want to continue using the TLS_ECDH cipher suites, you can remove ECDH from the jdk.tls.disabledAlgorithms security property either by modifying the java.security configuration file or by using the java.security.properties system property. Note Continued use of the TLS_ECDH cipher suites is at your own risk. ECDH cipher suites that use RC4 were disabled in an earlier release. This change does not affect the TLS_ECDHE cipher suites, which remain enabled by default. See JDK-8279164 (JDK Bug System) . Distrust of TLS server certificates issued after 11 November 2024 and anchored by Entrust root CAs In accordance with similar plans that Google and Mozilla recently announced, OpenJDK 8.0.432 distrusts TLS certificates that are issued after 11 November 2024 and anchored by Entrust root certificate authorities (CAs). This change in behavior includes any certificates that are branded as AffirmTrust, which are managed by Entrust. OpenJDK will continue to trust certificates that are issued on or before 11 November 2024 until these certificates expire. If a server's certificate chain is anchored by an affected certificate, any attempts to negotiate a TLS session now fail with an exception to indicate that the trust anchor is not trusted. For example: You can check whether this change affects a certificate in a JDK keystore by using the following keytool command: keytool -v -list -alias <your_server_alias> -keystore <your_keystore_filename> If this change affects any certificate in the chain, update this certificate or contact the organization that is responsible for managing the certificate. If you want to continue using TLS server certificates that are anchored by Entrust root certificates, you can remove ENTRUST_TLS from the jdk.security.caDistrustPolicies security property either by modifying the java.security configuration file or by using the java.security.properties system property. Note Continued use of the distrusted TLS server certificates is at your own risk. These restrictions apply to the following Entrust root certificates that OpenJDK includes: Certificate 1 Alias name: entrustevca [jdk] Distinguished name: CN=Entrust Root Certification Authority, OU=(c) 2006 Entrust, Inc., OU=www.entrust.net/CPS is incorporated by reference, O=Entrust, Inc., C=US SHA256: 73:C1:76:43:4F:1B:C6:D5:AD:F4:5B:0E:76:E7:27:28:7C:8D:E5:76:16:C1:E6:E6:14:1A:2B:2C:BC:7D:8E:4C Certificate 2 Alias name: entrustrootcaec1 [jdk] Distinguished name: CN=Entrust Root Certification Authority - EC1, OU=(c) 2012 Entrust, Inc. 
- for authorized use only, OU=See www.entrust.net/legal-terms, O=Entrust, Inc., C=US SHA256: 02:ED:0E:B2:8C:14:DA:45:16:5C:56:67:91:70:0D:64:51:D7:FB:56:F0:B2:AB:1D:3B:8E:B0:70:E5:6E:DF:F5 Certificate 3 Alias name: entrustrootcag2 [jdk] Distinguished name: CN=Entrust Root Certification Authority - G2, OU=(c) 2009 Entrust, Inc. - for authorized use only, OU=See www.entrust.net/legal-terms, O=Entrust, Inc., C=US SHA256: 43:DF:57:74:B0:3E:7F:EF:5F:E4:0D:93:1A:7B:ED:F1:BB:2E:6B:42:73:8C:4E:6D:38:41:10:3D:3A:A7:F3:39 Certificate 4 Alias name: entrustrootcag4 [jdk] Distinguished name: CN=Entrust Root Certification Authority - G4, OU=(c) 2015 Entrust, Inc. - for authorized use only, OU=See www.entrust.net/legal-termsO=Entrust, Inc., C=US SHA256: DB:35:17:D1:F6:73:2A:2D:5A:B9:7C:53:3E:C7:07:79:EE:32:70:A6:2F:B4:AC:42:38:37:24:60:E6:F0:1E:88 Certificate 5 Alias name: entrust2048ca [jdk] Distinguished name: CN=Entrust.net Certification Authority (2048), OU=(c) 1999 Entrust.net Limited, OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.), O=Entrust.net SHA256: 6D:C4:71:72:E0:1C:BC:B0:BF:62:58:0D:89:5F:E2:B8:AC:9A:D4:F8:73:80:1E:0C:10:B9:C8:37:D2:1E:B1:77 Certificate 6 Alias name: affirmtrustcommercialca [jdk] Distinguished name: CN=AffirmTrust Commercial, O=AffirmTrust, C=US SHA256: 03:76:AB:1D:54:C5:F9:80:3C:E4:B2:E2:01:A0:EE:7E:EF:7B:57:B6:36:E8:A9:3C:9B:8D:48:60:C9:6F:5F:A7 Certificate 7 Alias name: affirmtrustnetworkingca [jdk] Distinguished name: CN=AffirmTrust Networking, O=AffirmTrust, C=US SHA256: 0A:81:EC:5A:92:97:77:F1:45:90:4A:F3:8D:5D:50:9F:66:B5:E2:C5:8F:CD:B5:31:05:8B:0E:17:F3:F0B4:1B Certificate 8 Alias name: affirmtrustpremiumca [jdk] Distinguished name: CN=AffirmTrust Premium, O=AffirmTrust, C=US SHA256: 70:A7:3F:7F:37:6B:60:07:42:48:90:45:34:B1:14:82:D5:BF:0E:69:8E:CC:49:8D:F5:25:77:EB:F2:E9:3B:9A Certificate 9 Alias name: affirmtrustpremiumeccca [jdk] Distinguished name: CN=AffirmTrust Premium ECCO=AffirmTrust, C=US SHA256: BD:71:FD:F6:DA:97:E4:CF:62:D1:64:7A:DD:25:81:B0:7D:79:AD:F8:39:7E:B4:EC:BA:9C:5E:84:88:82:14:23 See JDK-8337664 (JDK Bug System) and JDK-8341059 (JDK Bug System) . SSL.com root certificates added In OpenJDK 8.0.432, the cacerts truststore includes two SSL.com TLS root certificates: Certificate 1 Name: SSL.com Alias name: ssltlsrootecc2022 Distinguished name: CN=SSL.com TLS ECC Root CA 2022, O=SSL Corporation, C=US Certificate 2 Name: SSL.com Alias name: ssltlsrootrsa2022 Distinguished name: CN=SSL.com TLS RSA Root CA 2022, O=SSL Corporation, C=US See JDK-8341057 (JDK Bug System) . Relaxation of Java Abstract Window Toolkit (AWT) Robot specification OpenJDK 8.0.432 is based on the latest maintenance release of the Java 8 specification. This release relaxes the specification of the following three methods in the java.awt.Robot class: mouseMove(int,int) getPixelColor(int,int) createScreenCapture(Rectangle) This relaxation of the specification allows these methods to fail when the desktop environment does not permit moving the mouse pointer or capturing screen content. See JDK-8307779 (JDK Bug System) . Changes to com.sun.jndi.ldap.object.trustSerialData system property The JDK implementation of the LDAP provider no longer supports the deserialization of Java objects by default. In OpenJDK 8.0.432, the com.sun.jndi.ldap.object.trustSerialData system property is set to false by default. 
This release also increases the scope of the com.sun.jndi.ldap.object.trustSerialData property to cover the reconstruction of RMI remote objects from the javaRemoteLocation LDAP attribute. These changes mean that transparent deserialization of Java objects now requires an explicit opt-in. From OpenJDK 8.0.432 onward, if you want to allow applications to reconstruct Java objects and RMI stubs from LDAP attributes, you must explicitly set the com.sun.jndi.ldap.object.trustSerialData property to true . See JDK-8290367 (JDK Bug System) and JDK bug system reference ID: JDK-8332643. HTTP client enhancements OpenJDK 8.0.432 limits the maximum header field size that the HTTP client accepts within the JDK for all supported versions of the HTTP protocol. The header field size is computed as the sum of the size of the uncompressed header name, the size of the uncompressed header value, and an overhead of 32 bytes for each field section line. If a peer sends a field section that exceeds this limit, a java.net.ProtocolException is raised. OpenJDK 8.0.432 introduces a jdk.http.maxHeaderSize system property that you can use to change the maximum header field size (in bytes). Alternatively, you can disable the maximum header field size by setting the jdk.http.maxHeaderSize property to zero or a negative value. The jdk.http.maxHeaderSize property is set to 393,216 bytes (that is, 384KB) by default. JDK bug system reference ID: JDK-8328286. Fix for infinite loop in ZipOutputStream.close() method In earlier releases, when an exception was thrown during closure, the DeflaterOutputStream.close() , GZIPOutputStream.finish() , and ZipOutputStream.closeEntry() methods did not close the associated default JDK compressor. OpenJDK 8.0.432 resolves this issue by ensuring that the default compressor is closed before propagating the Throwable object up the call stack. In the case of ZipOutputStream , the default compressor is closed only when the exception is not a ZipException . See JDK-8193682 (JDK Bug System) . Revised on 2024-10-30 09:27:28 UTC | [
"TLS server certificate issued after 2024-11-11 and anchored by a distrusted legacy Entrust root CA: CN=Entrust.net CertificationAuthority (2048), OU=(c) 1999 Entrust.net Limited,OU=www.entrust.net/CPS_2048 incorp. by ref. (limits liab.),O=Entrust.net"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/eclipse_temurin_8.0.432_release_notes/openjdk-temurin-features-8.0.432_openjdk |
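If you decide to keep using the TLS_ECDH suites despite the warning above, one way to do so without editing the shipped configuration is an override file passed through java.security.properties. This is a sketch only: the paths and file names are assumptions, and the property value you paste must be taken from your own java.security file with the ECDH entry removed (the value may span several continued lines):

grep -A2 "^jdk.tls.disabledAlgorithms" "$JAVA_HOME/jre/lib/security/java.security"
cat > /tmp/override.security <<'EOF'
# paste your jdk.tls.disabledAlgorithms value here, minus the ECDH entry
EOF
java -Djava.security.properties=/tmp/override.security -jar your-app.jar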
Chapter 2. New features | Chapter 2. New features Cryostat 2.3 introduces new features that enhance your use of the Cryostat product. Connection to GraalVM Native Images From Cryostat 2.3.1 onward, Cryostat can connect to GraalVM Native Images over Java Management Extensions (JMX). This feature requires that the target applications are built with GraalVM or Mandrel 23.0 or later. You must also ensure that the target applications are correctly configured to support connections to Cryostat over JMX. Cross-namespace target application discovery In Cryostat 2.3, you can deploy a Cryostat instance that services multiple Red Hat OpenShift namespaces. The Red Hat build of Cryostat Operator introduces the Cluster Cryostat API, which you can use to create Cryostat instances that work across multiple namespaces. Users who can access the multi-namespace Cryostat instance have access to all target applications in any namespace that is visible to that Cryostat instance. Therefore, when you deploy a multi-namespace Cryostat instance, you must consider which namespaces to select for monitoring, which namespace to install Cryostat into, and the users to which you want to grant access. Topology view In Cryostat 2.3, a new view is introduced to help you view and perform actions on your target Java Virtual Machines (JVM)s. The Topology view provides a visual representation of the target JVMs and all the resources that are associated with those JVMs. You can view your deployment scenarios on Cryostat and verify that your deployments are as you expect. You can customize the Topology view according to your requirements. For example, you can switch from a graph display to a list display and use filters to specify the information that you want to show. You can also use the Topology view to perform actions on one or more target applications at the same time. You can select a pod or a deployment and, by clicking the Actions menu, you can run various actions, for example, starting or stopping recordings, on all target applications in the selection. QuickStart guides and guided tour In Cryostat 2.3, Quick Start guides and an interactive guided tour are delivered to help you get started with Cryostat and learn more about its features and capabilities. The Quick Start guides provide step-by-step instructions on how to perform tasks, such as starting a recording, using the Cryostat dashboard, or getting started with automated rules. Cryostat 2.3 also delivers an interactive guided tour to help you navigate the Cryostat user interface (UI) and learn how to perform specific tasks. The tour guides you through the Cryostat UI, the navigation menus and the available functions. The tour opens automatically the first time you start Cryostat 2.3, however, you can skip or restart the tour at any time. You access the Quick Start guides and the guided tour from the upper right corner of the Cryostat UI by clicking the "?" icon. Cryostat agent Cryostat 2.3 introduces a new Java instrumentation agent to help you detect and monitor your target JVM applications. The Cryostat agent is an HTTP API that acts as a plug-in for applications that run on the JVM and retrieves a wide range of information from the applications for analysis by Cryostat. Previously, Cryostat required target applications to expose a Java Management Extensions (JMX) port. Cryostat then communicated with the application JVMs over this JMX port to start and stop Java Flight Recorder (JFR) recordings and to pull JFR data over the network. 
The new Cryostat agent retrieves JFR data from the JVM and sends it back to Cryostat over HTTP. The agent exposes only a small, read-only HTTP API, making it easier to audit and secure than a JMX port. Customizable features on the Cryostat dashboard Cryostat 2.3 introduces several new features to enhance the Cryostat dashboard to help users view important information about their target JVMs. The following new features are delivered: Dashboard cards to display information and metrics about your connected JVMs in a chart format. You can switch between different card configurations, which enable you to quickly access and analyze the most relevant data. Layout templates to help you customize the dashboard layout according to your requirements. You can create custom views that highlight the specific metrics or information that you want to focus on. You can also switch between different views and do not need to modify your dashboard each time you want to view different information. Ability to view static and dynamic information about your connected JVMs New dashboard card view for displaying and accessing Automated analysis reports. With these new features, you can better customize the dashboard to suit your requirements and have more flexibility in monitoring and analyzing your Java applications. Label to denote beta features In Cryostat 2.3, a new label is available to highlight beta features. You can use this label if you want to be aware of and preview any beta features. A beta feature might not yet be functionally complete and ready to use in production. To be able to preview beta features, you can set this flag from the Settings view. On the Settings view, click Advanced and, from the Feature level menu, select Beta . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.3/cryostat-new-features_cryostat |
Chapter 9. Scanning pod images with the Container Security Operator | Chapter 9. Scanning pod images with the Container Security Operator The Container Security Operator (CSO) is an add-on for the Clair security scanner available on OpenShift Container Platform and other Kubernetes platforms. With the CSO, users can scan container images associated with active pods for known vulnerabilities. Note The CSO does not work without Red Hat Quay and Clair. The Container Security Operator (CSO) includes the following features: Watches containers associated with pods on either specified or all namespaces. Queries the container registry where the containers came from for vulnerability information, provided that an image's registry supports image scanning, such as a Red Hat Quay registry with Clair scanning. Exposes vulnerabilities through the ImageManifestVuln object in the Kubernetes API. Note To see instructions on installing the CSO on Kubernetes, select the Install button on the Container Security page at OperatorHub.io. 9.1. Downloading and running the Container Security Operator in OpenShift Container Platform Use the following procedure to download the Container Security Operator (CSO). Note In the following procedure, the CSO is installed in the marketplace-operators namespace. This allows the CSO to be used in all namespaces of your OpenShift Container Platform cluster. Procedure On the OpenShift Container Platform console page, select Operators → OperatorHub and search for Container Security Operator . Select the Container Security Operator, then select Install to go to the Create Operator Subscription page. Check the settings (all namespaces and automatic approval strategy, by default), and select Subscribe . The Container Security Operator appears after a few moments on the Installed Operators screen. Optional: You can add custom certificates to the CSO. In this example, create a certificate named quay.crt in the current directory. Then, run the following command to add the certificate to the CSO: $ oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators Note You must restart the Operator pod for the new certificates to take effect. Navigate to Home → Overview . A link to Image Vulnerabilities appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a security breakdown, as shown in the following image: Important The Container Security Operator currently provides broken links for Red Hat Security advisories. For example, the following link might be provided: https://access.redhat.com/errata/RHSA-2023:1842%20https://access.redhat.com/security/cve/CVE-2023-23916 . The %20 in the URL represents a space character; however, it currently results in the two URLs being combined into one incomplete URL instead of two separate URLs, for example, https://access.redhat.com/errata/RHSA-2023:1842 and https://access.redhat.com/security/cve/CVE-2023-23916 . As a temporary workaround, you can copy each URL into your browser to navigate to the proper page. This is a known issue and will be fixed in a future version of Red Hat Quay. You can do one of two things at this point to follow up on any detected vulnerabilities: Select the link to the vulnerability. You are taken to the container registry, Red Hat Quay or other registry where the container came from, where you can see information about the vulnerability.
The following figure shows an example of detected vulnerabilities from a Quay.io registry: Select the namespaces link to go to the Image Manifest Vulnerabilities page, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in two namespaces: After executing this procedure, you are made aware of what images are vulnerable, what you must do to fix those vulnerabilities, and every namespace that the image was run in. Knowing this, you can perform the following actions: Alert users who are running the image that they need to correct the vulnerability. Stop the images from running by deleting the deployment or the object that started the pod that the image is in. Note If you delete the pod, it might take a few minutes for the vulnerability to reset on the dashboard. 9.2. Querying image vulnerabilities from the CLI Use the following procedure to query image vulnerabilities from the command-line interface (CLI). Procedure Enter the following command to query for detected vulnerabilities: $ oc get vuln --all-namespaces Example output NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s Optional. To display details for a particular vulnerability, identify a specific vulnerability and its namespace, and use the oc describe command. The following example shows an active container whose image includes an RPM package with a vulnerability: $ oc describe vuln --namespace <namespace> sha256.ac50e3752... Example output Name: sha256.ac50e3752... Namespace: quay-enterprise ... Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries... 9.3. Uninstalling the Container Security Operator To uninstall the Container Security Operator from your OpenShift Container Platform deployment, you must uninstall the Operator and delete the imagemanifestvulns.secscan.quay.redhat.com custom resource definition (CRD). Without removing the CRD, image vulnerabilities are still reported on the OpenShift Container Platform Overview page. Procedure On the OpenShift Container Platform web console, click Operators → Installed Operators . Click the menu kebab of the Container Security Operator. Click Uninstall Operator . Confirm your decision by clicking Uninstall in the popup window. Remove the imagemanifestvulns.secscan.quay.redhat.com custom resource definition by entering the following command: $ oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com Example output customresourcedefinition.apiextensions.k8s.io "imagemanifestvulns.secscan.quay.redhat.com" deleted | [
"oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators",
"oc get vuln --all-namespaces",
"NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s",
"oc describe vuln --namespace <namespace> sha256.ac50e3752",
"Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries",
"oc delete customresourcedefinition imagemanifestvulns.secscan.quay.redhat.com",
"customresourcedefinition.apiextensions.k8s.io \"imagemanifestvulns.secscan.quay.redhat.com\" deleted"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_operator_features/container-security-operator-setup |
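As a hedged illustration of the ImageManifestVuln objects described in the chapter above, the resource can also be listed by its full CRD name or restricted to a single namespace; the namespace shown here is only an example taken from the sample output:

$ oc get imagemanifestvuln -n skynet

$ oc get imagemanifestvulns.secscan.quay.redhat.com --all-namespaces

Both forms return the same objects as oc get vuln, because vuln is shorthand for the same resource.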
D.17. SQL Reserved Words View | D.17. SQL Reserved Words View To open Teiid Designer's SQL Reserved Words View , click the main menu's Window > Show View > Other... and then click the Teiid Designer > SQL Reserved Words view in the dialog. Figure D.28. SQL Reserved Words View You can also display the view by selecting the main menu's Metadata > Show SQL Reserved Words action as shown below. Figure D.29. SQL Reserved Words Action | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/sql_reserved_words_view |
function::read_stopwatch_ns | function::read_stopwatch_ns Name function::read_stopwatch_ns - Reads the time in nanoseconds for a stopwatch Synopsis Arguments name stopwatch name Description Returns time in nanoseconds for stopwatch name . Creates stopwatch name if it does not currently exist. | [
"read_stopwatch_ns:long(name:string)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-read-stopwatch-ns |
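As an illustrative sketch of how read_stopwatch_ns is typically paired with the other stopwatch functions in the same tapset (the stopwatch name used here is arbitrary, and the companion function names are assumptions to check against your installed tapset reference):

probe begin { start_stopwatch("t") }

probe end { printf("elapsed: %d ns\n", read_stopwatch_ns("t")) }

The first probe starts a stopwatch named t when the script begins, and the second reads the accumulated time in nanoseconds when the script exits.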
Chapter 4. General Updates | Chapter 4. General Updates New variable for disabling colored output for systemd This update introduces the SYSTEMD_COLORS environment variable for systemd , which enables turning on or off systemd color output. SYSTEMD_COLORS should be set to a valid boolean value. (BZ# 1265749 ) systemd units can now be enabled using aliases The systemd init system uses aliases. Aliases are symbolic links to the service files, and can be used in commands instead of the actual names of services. For example, the package providing the /usr/lib/systemd/system/nfs-server.service service file also provides an alias /usr/lib/systemd/system/nfs.service , which is a symbolic link to the nfs-server.service . This enables, for example, using the systemctl status nfs.service command instead of systemctl status nfs-server.service . Previously, running the systemctl enable command using an alias instead of the real service name failed with an error. With this update, the bug is fixed, and systemctl enable successfully enables units referred to by their aliases. (BZ#1142378) New systemd option: RandomizedDelaySec This update introduces the RandomizedDelaySec option for systemd timers, which schedules an event to occur later by a random number of seconds. For example, setting the option to 10 will postpone the event by a random number of seconds between 0 and 10. The new option is useful for spreading workload over a longer time period to avoid several events executing at the same time. (BZ# 1305279 ) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/new_features_general_updates |
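As a hedged sketch of the two systemd features noted above (the timer and service names are placeholders), a timer unit can spread its activations over a random window with RandomizedDelaySec, and color output can be disabled for a single invocation with SYSTEMD_COLORS:

[Timer]
OnCalendar=daily
RandomizedDelaySec=30min

$ SYSTEMD_COLORS=false systemctl status nfs-server.service

The [Timer] fragment belongs in a .timer unit file; a RandomizedDelaySec of 30min postpones each activation by a random number of seconds between 0 and 1800.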
Chapter 5. Troubleshooting Ceph OSDs | Chapter 5. Troubleshooting Ceph OSDs This chapter contains information on how to fix the most common errors related to Ceph OSDs. 5.1. Prerequisites Verify your network connection. See Troubleshooting networking issues for details. Verify that Monitors have a quorum by using the ceph health command. If the command returns a health status ( HEALTH_OK , HEALTH_WARN , or HEALTH_ERR ), the Monitors are able to form a quorum. If not, address any Monitor problems first. See Troubleshooting Ceph Monitors for details. For details about ceph health , see Understanding Ceph health . Optionally, stop the rebalancing process to save time and resources. See Stopping and starting rebalancing for details. 5.2. Most common Ceph OSD errors The following tables list the most common error messages that are returned by the ceph health detail command, or included in the Ceph logs. The tables provide links to corresponding sections that explain the errors and point to specific procedures to fix the problems. 5.2.1. Prerequisites Root-level access to the Ceph OSD nodes. 5.2.2. Ceph OSD error messages A table of common Ceph OSD error messages, and a potential fix. Error message See HEALTH_ERR full osds Full OSDs HEALTH_WARN backfillfull osds Backfillfull OSDs nearfull osds Nearfull OSDs osds are down Down OSDs Flapping OSDs requests are blocked Slow request or requests are blocked slow requests Slow request or requests are blocked 5.2.3. Common Ceph OSD error messages in the Ceph logs A table of common Ceph OSD error messages found in the Ceph logs, and a link to a potential fix. Error message Log file See heartbeat_check: no reply from osd.X Main cluster log Flapping OSDs wrongly marked me down Main cluster log Flapping OSDs osds have slow requests Main cluster log Slow request or requests are blocked FAILED assert(0 == "hit suicide timeout") OSD log Down OSDs 5.2.4. Full OSDs The ceph health detail command returns an error message similar to the following one: What This Means Ceph prevents clients from performing I/O operations on full OSD nodes to avoid losing data. It returns the HEALTH_ERR full osds message when the cluster reaches the capacity set by the mon_osd_full_ratio parameter. By default, this parameter is set to 0.95, which means 95% of the cluster capacity. To Troubleshoot This Problem Determine what percentage of raw storage ( %RAW USED ) is used: If %RAW USED is above 70-75%, you can: Delete unnecessary data. This is a short-term solution to avoid production downtime. Scale the cluster by adding a new OSD node. This is a long-term solution recommended by Red Hat. Additional Resources Nearfull OSDs in the Red Hat Ceph Storage Troubleshooting Guide . See Deleting data from a full storage cluster for details. 5.2.5. Backfillfull OSDs The ceph health detail command returns an error message similar to the following one: What this means When one or more OSDs have exceeded the backfillfull threshold, Ceph prevents data from rebalancing to this device. This is an early warning that rebalancing might not complete and that the cluster is approaching full. The default for the backfillfull threshold is 90%. To troubleshoot this problem Check utilization by pool: If %RAW USED is above 70-75%, you can carry out one of the following actions: Delete unnecessary data. This is a short-term solution to avoid production downtime. Scale the cluster by adding a new OSD node. This is a long-term solution recommended by Red Hat.
Increase the backfillfull ratio for the OSDs that contain the PGs stuck in backfill_toofull to allow the recovery process to continue. Add new storage to the cluster as soon as possible or remove data to prevent filling more OSDs. Syntax The range for VALUE is 0.0 to 1.0. Example Additional Resources Nearfull OSDs in the Red Hat Ceph Storage Troubleshooting Guide . See Deleting data from a full storage cluster for details. 5.2.6. Nearfull OSDs The ceph health detail command returns an error message similar to the following one: What This Means Ceph returns the nearfull osds message when the cluster reaches the capacity set by the mon_osd_nearfull_ratio parameter. By default, this parameter is set to 0.85, which means 85% of the cluster capacity. Ceph distributes data based on the CRUSH hierarchy in the best possible way, but it cannot guarantee equal distribution. The main causes of the uneven data distribution and the nearfull osds messages are: The OSDs are not balanced among the OSD nodes in the cluster. That is, some OSD nodes host significantly more OSDs than others, or the weight of some OSDs in the CRUSH map is not adequate for their capacity. The Placement Group (PG) count is not appropriate for the number of OSDs, the use case, the target PGs per OSD, and the OSD utilization. The cluster uses inappropriate CRUSH tunables. The back-end storage for OSDs is almost full. To Troubleshoot This Problem: Verify that the PG count is sufficient and increase it if needed. Verify that you use CRUSH tunables that are optimal for the cluster version and adjust them if not. Change the weight of OSDs by utilization. Determine how much space is left on the disks used by OSDs. To view how much space OSDs use on particular nodes, use the following command: After checking which OSD is used most on which host (full/nearfull), view which disk is being used as an underlying disk for the OSD. Log in to the node containing nearfull OSDs, and run the following commands: To view the list of pools present on the cluster: If needed, add a new OSD node. Additional Resources Full OSDs See the Set an OSD's Weight by Utilization section in the Storage Strategies guide for Red Hat Ceph Storage 5. For details, see the CRUSH Tunables section in the Storage Strategies guide for Red Hat Ceph Storage 5 and the How can I test the impact CRUSH map tunable modifications will have on my PG distribution across OSDs in Red Hat Ceph Storage? solution on the Red Hat Customer Portal. See Increasing the placement group for details. 5.2.7. Down OSDs The ceph health detail command returns an error similar to the following one: What This Means One of the ceph-osd processes is unavailable due to a possible service failure or problems with communication with other OSDs. As a consequence, the surviving ceph-osd daemons reported this failure to the Monitors. If the ceph-osd daemon is not running, the underlying OSD drive or file system is either corrupted, or some other error, such as a missing keyring, is preventing the daemon from starting. In most cases, networking issues cause the situation where the ceph-osd daemon is running but still marked as down . To Troubleshoot This Problem Determine which OSD is down : Try to restart the ceph-osd daemon. Replace the OSD_ID with the ID of the OSD that is down: Syntax Example If you are not able to start ceph-osd , follow the steps in The ceph-osd daemon cannot start .
If you are able to start the ceph-osd daemon but it is marked as down , follow the steps in The ceph-osd daemon is running but still marked as down . The ceph-osd daemon cannot start If you have a node containing a number of OSDs (generally, more than twelve), verify that the default maximum number of threads (PID count) is sufficient. See Increasing the PID count for details. Verify that the OSD data and journal partitions are mounted properly. You can use the ceph-volume lvm list command to list all devices and volumes associated with the Ceph Storage Cluster and then manually inspect whether they are mounted properly. See the mount(8) manual page for details. If you got the ERROR: missing keyring, cannot use cephx for authentication error message, the OSD is missing a keyring. If you got the ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-1 error message, the ceph-osd daemon cannot read the underlying file system. See the following steps for instructions on how to troubleshoot and fix this error. Check the corresponding log file to determine the cause of the failure. By default, Ceph stores log files in the /var/log/ceph/ CLUSTER_FSID / directory after logging to files is enabled. An EIO error message indicates a failure of the underlying disk. To fix this problem, replace the underlying OSD disk. See Replacing an OSD drive for details. If the log includes any other FAILED assert errors, such as the following one, open a support ticket. See Contacting Red Hat Support for service for details. Check the dmesg output for errors with the underlying file system or disk: If the dmesg output includes any SCSI error messages, see the SCSI Error Codes Solution Finder solution on the Red Hat Customer Portal to determine the best way to fix the problem. Alternatively, if you are unable to fix the underlying file system, replace the OSD drive. See Replacing an OSD drive for details. If the OSD failed with a segmentation fault, such as the following one, gather the required information and open a support ticket. See Contacting Red Hat Support for service for details. The ceph-osd daemon is running but still marked as down Check the corresponding log file to determine the cause of the failure. By default, Ceph stores log files in the /var/log/ceph/ CLUSTER_FSID / directory after logging to files is enabled. If the log includes error messages similar to the following ones, see Flapping OSDs . If you see any other errors, open a support ticket. See Contacting Red Hat Support for service for details. Additional Resources Flapping OSDs Stale placement groups See the Ceph daemon logs to enable logging to files. 5.2.8. Flapping OSDs The ceph -w | grep osds command shows OSDs repeatedly as down and then up again within a short period of time: In addition, the Ceph log contains error messages similar to the following ones: What This Means The main causes of flapping OSDs are: Certain storage cluster operations, such as scrubbing or recovery, take an abnormal amount of time, for example, if you perform these operations on objects with a large index or large placement groups. Usually, after these operations finish, the flapping OSDs problem is solved. Problems with the underlying physical hardware. In this case, the ceph health detail command also returns the slow requests error message. Problems with the network. Ceph OSDs cannot manage situations where the private network for the storage cluster fails, or there is significant latency on the public client-facing network.
Ceph OSDs use the private network for sending heartbeat packets to each other to indicate that they are up and in . If the private storage cluster network does not work properly, OSDs are unable to send and receive the heartbeat packets. As a consequence, they report each other as being down to the Ceph Monitors, while marking themselves as up . The following parameters in the Ceph configuration file influence this behavior: Parameter Description Default value osd_heartbeat_grace_time How long OSDs wait for the heartbeat packets to return before reporting an OSD as down to the Ceph Monitors. 20 seconds mon_osd_min_down_reporters How many OSDs must report another OSD as down before the Ceph Monitors mark the OSD as down 2 This table shows that, in the default configuration, the Ceph Monitors mark an OSD as down after at least two other OSDs report that it failed to return heartbeat packets within the 20-second grace period. In some cases, if one single host encounters network issues, the entire cluster can experience flapping OSDs. This is because the OSDs that reside on the host will report other OSDs in the cluster as down . Note The flapping OSDs scenario does not include the situation where the OSD processes are started and then immediately killed. To Troubleshoot This Problem Check the output of the ceph health detail command again. If it includes the slow requests error message, see Slow requests or requests are blocked for details on how to troubleshoot this issue. Determine which OSDs are marked as down and on what nodes they reside: On the nodes containing the flapping OSDs, troubleshoot and fix any networking problems. For details, see Troubleshooting networking issues . Alternatively, you can temporarily force Monitors to stop marking the OSDs as down and up by setting the noup and nodown flags: Important Using the noup and nodown flags does not fix the root cause of the problem but only prevents OSDs from flapping. To open a support ticket, see the Contacting Red Hat Support for service section for details. Important Flapping OSDs can be caused by MTU misconfiguration on Ceph OSD nodes, at the network switch level, or both. To resolve the issue, set MTU to a uniform size on all storage cluster nodes, including on the core and access network switches, with a planned downtime. Do not tune osd heartbeat min size because changing this setting can hide issues within the network, and it will not solve actual network inconsistency. Additional Resources See the Ceph heartbeat section in the Red Hat Ceph Storage Architecture Guide for details. See the Slow requests or requests are blocked section in the Red Hat Ceph Storage Troubleshooting Guide . 5.2.9. Slow requests or requests are blocked The ceph-osd daemon is slow to respond to a request, and the ceph health detail command returns an error message similar to the following one: In addition, the Ceph logs include error messages similar to the following ones: What This Means An OSD with slow requests is any OSD that is not able to service the I/O operations per second (IOPS) in its queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. The main causes of OSDs having slow requests are: Problems with the underlying hardware, such as disk drives, hosts, racks, or network switches Problems with the network. These problems are usually connected with flapping OSDs. See Flapping OSDs for details. System load The following table shows the types of slow requests.
Use the dump_historic_ops administration socket command to determine the type of a slow request. For details about the administration socket, see the Using the Ceph Administration Socket section in the Administration Guide for Red Hat Ceph Storage 5. Slow request type Description waiting for rw locks The OSD is waiting to acquire a lock on a placement group for the operation. waiting for subops The OSD is waiting for replica OSDs to apply the operation to the journal. no flag points reached The OSD did not reach any major operation milestone. waiting for degraded object The OSDs have not replicated an object the specified number of times yet. To Troubleshoot This Problem Determine if the OSDs with slow or blocked requests share a common piece of hardware, for example, a disk drive, host, rack, or network switch. If the OSDs share a disk: Use the smartmontools utility to check the health of the disk, or check the disk logs to determine any errors on the disk. Note The smartmontools utility is included in the smartmontools package. Use the iostat utility to get the I/O wait report ( %iowait ) on the OSD disk to determine if the disk is under heavy load. Note The iostat utility is included in the sysstat package. If the OSDs share the node with another service: Check the RAM and CPU utilization. Use the netstat utility to see the network statistics on the Network Interface Controllers (NICs) and troubleshoot any networking issues. See also Troubleshooting networking issues for further information. If the OSDs share a rack, check the network switch for the rack. For example, if you use jumbo frames, verify that the NIC in the path has jumbo frames set. If you are unable to determine a common piece of hardware shared by OSDs with slow requests, or to troubleshoot and fix hardware and networking problems, open a support ticket. See Contacting Red Hat support for service for details. Additional Resources See the Using the Ceph Administration Socket section in the Red Hat Ceph Storage Administration Guide for details. 5.3. Stopping and starting rebalancing When an OSD fails or you stop it, the CRUSH algorithm automatically starts the rebalancing process to redistribute data across the remaining OSDs. Rebalancing can take time and resources; therefore, consider stopping rebalancing while troubleshooting or maintaining OSDs. Note Placement groups within the stopped OSDs become degraded during troubleshooting and maintenance. Prerequisites Root-level access to the Ceph Monitor node. Procedure Log in to the Cephadm shell: Example Set the noout flag before stopping the OSD: Example When you finish troubleshooting or maintenance, unset the noout flag to start rebalancing: Example Additional Resources The Rebalancing and Recovery section in the Red Hat Ceph Storage Architecture Guide . 5.4. Replacing an OSD drive Ceph is designed for fault tolerance, which means that it can operate in a degraded state without losing data. Consequently, Ceph can operate even if a data storage drive fails. In the context of a failed drive, the degraded state means that the extra copies of the data stored on other OSDs will backfill automatically to other OSDs in the cluster. However, if this occurs, replace the failed OSD drive and recreate the OSD manually. When a drive fails, Ceph reports the OSD as down : Note Ceph can also mark an OSD as down as a consequence of networking or permissions problems. See Down OSDs for details.
Modern servers typically deploy with hot-swappable drives, so you can pull a failed drive and replace it with a new one without bringing down the node. The whole procedure includes these steps: Remove the OSD from the Ceph cluster. For details, see the Removing an OSD from the Ceph Cluster procedure. Replace the drive. For details, see the Replacing the physical drive section. Add the OSD to the cluster. For details, see the Adding an OSD to the Ceph Cluster procedure. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. At least one OSD is down . Removing an OSD from the Ceph Cluster Log in to the Cephadm shell: Example Determine which OSD is down . Example Mark the OSD as out so that the cluster can rebalance and copy its data to other OSDs. Syntax Example Note If the OSD is down , Ceph marks it as out automatically after 600 seconds when it does not receive any heartbeat packets from the OSD, based on the mon_osd_down_out_interval parameter. When this happens, other OSDs with copies of the failed OSD data begin backfilling to ensure that the required number of copies exists within the cluster. While the cluster is backfilling, it is in a degraded state. Ensure that the failed OSD is backfilling. Example You should see the placement group states change from active+clean to active , some degraded objects, and finally active+clean when migration completes. Stop the OSD: Syntax Example Remove the OSD from the storage cluster: Syntax Example The OSD_ID is preserved. Replacing the physical drive See the documentation for the hardware node for details on replacing the physical drive. If the drive is hot-swappable, replace the failed drive with a new one. If the drive is not hot-swappable and the node contains multiple OSDs, you might have to shut down the whole node and replace the physical drive. Consider preventing the cluster from backfilling. See the Stopping and Starting Rebalancing chapter in the Red Hat Ceph Storage Troubleshooting Guide for details. When the drive appears under the /dev/ directory, make a note of the drive path. If you want to add the OSD manually, find the OSD drive and format the disk. Adding an OSD to the Ceph Cluster Once the new drive is inserted, you can use the following options to deploy the OSDs: The OSDs are deployed automatically by the Ceph Orchestrator if the --unmanaged parameter is not set. Example Deploy the OSDs on all the available devices with the unmanaged parameter set to true . Example Deploy the OSDs on specific devices and hosts. Example Ensure that the CRUSH hierarchy is accurate: Example Additional Resources See the Deploying Ceph OSDs on all available devices section in the Red Hat Ceph Storage Operations Guide . See the Deploying Ceph OSDs on specific devices and hosts section in the Red Hat Ceph Storage Operations Guide . See the Down OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . See the Red Hat Ceph Storage Installation Guide . 5.5. Increasing the PID count If you have a node containing more than 12 Ceph OSDs, the default maximum number of threads (PID count) can be insufficient, especially during recovery. As a consequence, some ceph-osd daemons can terminate and fail to start again. If this happens, increase the maximum possible number of threads allowed. Procedure To temporarily increase the number: To permanently increase the number, update the /etc/sysctl.conf file as follows: 5.6.
Deleting data from a full storage cluster Ceph automatically prevents any I/O operations on OSDs that have reached the capacity specified by the mon_osd_full_ratio parameter and returns the full osds error message. This procedure shows how to delete unnecessary data to fix this error. Note The mon_osd_full_ratio parameter sets the value of the full_ratio parameter when creating a cluster. You cannot change the value of mon_osd_full_ratio afterward. To temporarily increase the full_ratio value, use the ceph osd set-full-ratio command instead. Prerequisites Root-level access to the Ceph Monitor node. Procedure Log in to the Cephadm shell: Example Determine the current value of full_ratio ; by default, it is set to 0.95 : Temporarily increase the value of set-full-ratio to 0.97 : Important Red Hat strongly recommends that you do not set set-full-ratio to a value higher than 0.97. Setting this parameter to a higher value makes the recovery process harder. As a consequence, you might not be able to recover full OSDs at all. Verify that you successfully set the parameter to 0.97 : Monitor the cluster state: As soon as the cluster changes its state from full to nearfull , delete any unnecessary data. Set the value of full_ratio back to 0.95 : Verify that you successfully set the parameter to 0.95 : Additional Resources Full OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . Nearfull OSDs section in the Red Hat Ceph Storage Troubleshooting Guide . | [
"HEALTH_ERR 1 full osds osd.3 is full at 95%",
"ceph df",
"health: HEALTH_WARN 3 backfillfull osd(s) Low space hindering backfill (add storage if this doesn't resolve itself): 32 pgs backfill_toofull",
"ceph df",
"ceph osd set-backfillfull-ratio VALUE",
"ceph osd set-backfillfull-ratio 0.92",
"HEALTH_WARN 1 nearfull osds osd.2 is near full at 85%",
"ceph osd df tree",
"cephadm shell ceph-volume lvm list",
"ceph df",
"HEALTH_WARN 1/3 in osds are down",
"ceph health detail HEALTH_WARN 1/3 in osds are down osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080",
"systemctl restart ceph- FSID @osd. OSD_ID",
"systemctl restart [email protected]",
"FAILED assert(0 == \"hit suicide timeout\")",
"dmesg",
"Caught signal (Segmentation fault)",
"wrongly marked me down heartbeat_check: no reply from osd.2 since back",
"ceph -w | grep osds 2022-05-05 06:27:20.810535 mon.0 [INF] osdmap e609: 9 osds: 8 up, 9 in 2022-05-05 06:27:24.120611 mon.0 [INF] osdmap e611: 9 osds: 7 up, 9 in 2022-05-05 06:27:25.975622 mon.0 [INF] HEALTH_WARN; 118 pgs stale; 2/9 in osds are down 2022-05-05 06:27:27.489790 mon.0 [INF] osdmap e614: 9 osds: 6 up, 9 in 2022-05-05 06:27:36.540000 mon.0 [INF] osdmap e616: 9 osds: 7 up, 9 in 2022-05-05 06:27:39.681913 mon.0 [INF] osdmap e618: 9 osds: 8 up, 9 in 2022-05-05 06:27:43.269401 mon.0 [INF] osdmap e620: 9 osds: 9 up, 9 in 2022-05-05 06:27:54.884426 mon.0 [INF] osdmap e622: 9 osds: 8 up, 9 in 2022-05-05 06:27:57.398706 mon.0 [INF] osdmap e624: 9 osds: 7 up, 9 in 2022-05-05 06:27:59.669841 mon.0 [INF] osdmap e625: 9 osds: 6 up, 9 in 2022-05-05 06:28:07.043677 mon.0 [INF] osdmap e628: 9 osds: 7 up, 9 in 2022-05-05 06:28:10.512331 mon.0 [INF] osdmap e630: 9 osds: 8 up, 9 in 2022-05-05 06:28:12.670923 mon.0 [INF] osdmap e631: 9 osds: 9 up, 9 in",
"2022-05-25 03:44:06.510583 osd.50 127.0.0.1:6801/149046 18992 : cluster [WRN] map e600547 wrongly marked me down",
"2022-05-25 19:00:08.906864 7fa2a0033700 -1 osd.254 609110 heartbeat_check: no reply from osd.2 since back 2021-07-25 19:00:07.444113 front 2021-07-25 18:59:48.311935 (cutoff 2021-07-25 18:59:48.906862)",
"ceph health detail HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests 30 ops are blocked > 268435 sec 1 ops are blocked > 268435 sec on osd.11 1 ops are blocked > 268435 sec on osd.18 28 ops are blocked > 268435 sec on osd.39 3 osds have slow requests",
"ceph osd tree | grep down",
"ceph osd set noup ceph osd set nodown",
"HEALTH_WARN 30 requests are blocked > 32 sec; 3 osds have slow requests 30 ops are blocked > 268435 sec 1 ops are blocked > 268435 sec on osd.11 1 ops are blocked > 268435 sec on osd.18 28 ops are blocked > 268435 sec on osd.39 3 osds have slow requests",
"2022-05-24 13:18:10.024659 osd.1 127.0.0.1:6812/3032 9 : cluster [WRN] 6 slow requests, 6 included below; oldest blocked for > 61.758455 secs",
"2022-05-25 03:44:06.510583 osd.50 [WRN] slow request 30.005692 seconds old, received at {date-time}: osd_op(client.4240.0:8 benchmark_data_ceph-1_39426_object7 [write 0~4194304] 0.69848840) v4 currently waiting for subops from [610]",
"cephadm shell",
"ceph osd set noout",
"ceph osd unset noout",
"HEALTH_WARN 1/3 in osds are down osd.0 is down since epoch 23, last address 192.168.106.220:6800/11080",
"cephadm shell",
"ceph osd tree | grep -i down ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF 0 hdd 0.00999 osd.0 down 1.00000 1.00000",
"ceph osd out OSD_ID .",
"ceph osd out osd.0 marked out osd.0.",
"ceph -w | grep backfill 2022-05-02 04:48:03.403872 mon.0 [INF] pgmap v10293282: 431 pgs: 1 active+undersized+degraded+remapped+backfilling, 28 active+undersized+degraded, 49 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 294 active+clean; 72347 MB data, 101302 MB used, 1624 GB / 1722 GB avail; 227 kB/s rd, 1358 B/s wr, 12 op/s; 10626/35917 objects degraded (29.585%); 6757/35917 objects misplaced (18.813%); 63500 kB/s, 15 objects/s recovering 2022-05-02 04:48:04.414397 mon.0 [INF] pgmap v10293283: 431 pgs: 2 active+undersized+degraded+remapped+backfilling, 75 active+undersized+degraded+remapped+wait_backfill, 59 stale+active+clean, 295 active+clean; 72347 MB data, 101398 MB used, 1623 GB / 1722 GB avail; 969 kB/s rd, 6778 B/s wr, 32 op/s; 10626/35917 objects degraded (29.585%); 10580/35917 objects misplaced (29.457%); 125 MB/s, 31 objects/s recovering 2022-05-02 04:48:00.380063 osd.1 [INF] 0.6f starting backfill to osd.0 from (0'0,0'0] MAX to 2521'166639 2022-05-02 04:48:00.380139 osd.1 [INF] 0.48 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'43079 2022-05-02 04:48:00.380260 osd.1 [INF] 0.d starting backfill to osd.0 from (0'0,0'0] MAX to 2513'136847 2022-05-02 04:48:00.380849 osd.1 [INF] 0.71 starting backfill to osd.0 from (0'0,0'0] MAX to 2331'28496 2022-05-02 04:48:00.381027 osd.1 [INF] 0.51 starting backfill to osd.0 from (0'0,0'0] MAX to 2513'87544",
"ceph orch daemon stop OSD_ID",
"ceph orch daemon stop osd.0",
"ceph orch osd rm OSD_ID --replace",
"ceph orch osd rm 0 --replace",
"ceph orch apply osd --all-available-devices",
"ceph orch apply osd --all-available-devices --unmanaged=true",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph osd tree",
"sysctl -w kernel.pid.max=4194303",
"kernel.pid.max = 4194303",
"cephadm shell",
"ceph osd dump | grep -i full full_ratio 0.95",
"ceph osd set-full-ratio 0.97",
"ceph osd dump | grep -i full full_ratio 0.97",
"ceph -w",
"ceph osd set-full-ratio 0.95",
"ceph osd dump | grep -i full full_ratio 0.95"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/troubleshooting_guide/troubleshooting-ceph-osds |
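One small, hedged addition to the OSD restart step in the chapter above: the FSID placeholder in the ceph-FSID@osd.OSD_ID unit name can be read from the cluster itself before building the systemctl command, for example:

$ ceph fsid

$ systemctl restart ceph-$(ceph fsid)@osd.0.service

The OSD ID 0 is only an example; substitute the ID reported as down. Running ceph fsid this way assumes the ceph CLI and admin credentials are available on the node, for example from within the cephadm shell.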
Chapter 9. Gathering the observability data from multiple clusters | Chapter 9. Gathering the observability data from multiple clusters For a multicluster configuration, you can create one OpenTelemetry Collector instance in each one of the remote clusters and then forward all the telemetry data to one OpenTelemetry Collector instance. Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. The following mounted certificates: Issuer, self-signed certificate, CA issuer, client and server certificates. To create any of these certificates, see step 1. Procedure Mount the following certificates in the OpenTelemetry Collector instance, skipping already mounted certificates. An Issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {} A self-signed certificate. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io A CA issuer. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret The client and server certificates. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 2 issuerRef: name: ca-issuer 1 List of exact DNS names to be mapped to a solver in the server OpenTelemetry Collector instance. 2 List of exact DNS names to be mapped to a solver in the client OpenTelemetry Collector instance. Create a service account for the OpenTelemetry Collector instance. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespace resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters. 
Example OpenTelemetryCollector custom resource for the edge clusters apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector exporter is configured to export OTLP HTTP and points to the OpenTelemetry Collector from the central cluster. Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster. Example OpenTelemetryCollector custom resource for the central cluster apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: "deployment" ingress: type: route route: termination: "passthrough" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: "tempo-<simplest>-distributor:4317" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector receiver requires the certificates listed in the first step. 2 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, which in this example is "tempo-simplest-distributor:4317" and already created. | [
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/otel-gathering-observability-data-from-multiple-clusters |
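As a hedged way to confirm that an edge cluster from the procedure above can reach the central OTLP endpoint with its client certificate, an openssl handshake test can be run from a host or debug pod that has the certificate files; the host name and file names below are the ones assumed in the example configuration:

$ openssl s_client -connect observability-cluster.com:443 -cert client.crt -key client.key -CAfile ca.crt </dev/null

A successful handshake ends with a verification result of 0 (ok); a failure at this step usually points at the certificate chain or the passthrough route rather than at the collector configuration.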
Chapter 1. Preparing for bare metal cluster installation | Chapter 1. Preparing for bare metal cluster installation 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Planning a bare metal cluster for OpenShift Virtualization If you plan to use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Getting started with OpenShift Virtualization Preparing your cluster for OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 1.3. NIC partitioning for SR-IOV devices (Technology Preview) OpenShift Container Platform can be deployed on a server with a dual port network interface card (NIC). You can partition a single, high-speed dual port NIC into multiple virtual functions (VFs) and enable SR-IOV. This feature supports the use of bonds for high availability with the Link Aggregation Control Protocol (LACP). Note Only one LACP bond can be declared per physical NIC. An OpenShift Container Platform cluster can be deployed on a bond interface with 2 VFs on 2 physical functions (PFs) using the following methods: Agent-based installer Note The minimum required version of nmstate is: 1.4.2-4 for RHEL 8 versions 2.2.7 for RHEL 9 versions Installer-provisioned infrastructure installation User-provisioned infrastructure installation Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Additional resources Example: Bonds and SR-IOV dual-nic node network configuration Optional: Configuring host network interfaces for dual port NIC Bonding multiple SR-IOV network interfaces to a dual port NIC interface 1.4. Choosing a method to install OpenShift Container Platform on bare metal The OpenShift Container Platform installation program offers four methods for deploying a cluster: Interactive : You can deploy a cluster with the web-based Assisted Installer .
This is the recommended approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform: it provides smart defaults and performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the agent-based installer for air-gapped or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the agent-based installer first. Configuration is done with a command-line interface. This approach is ideal for air-gapped or restricted networks. Automated : You can deploy a cluster on infrastructure that the installation program provisions and that the cluster maintains. The installer uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected, air-gapped, or restricted networks. Full control : You can deploy a cluster on infrastructure that you prepare and maintain , which provides maximum customizability. You can deploy clusters in connected, air-gapped, or restricted networks. The clusters have the following characteristics: Highly available infrastructure with no single points of failure is available by default. Administrators maintain control over what updates are applied and when. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.4.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on bare metal infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing an installer-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal by using installer provisioning. 1.4.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on bare metal infrastructure that you provision, by using one of the following methods: Installing a user-provisioned cluster on bare metal : You can install OpenShift Container Platform on bare metal infrastructure that you provision. For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. Installing a user-provisioned bare metal cluster with network customizations : You can install a bare metal cluster on user-provisioned infrastructure with network customizations. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Most of the network customizations must be applied at the installation stage. Installing a user-provisioned bare metal cluster on a restricted network : You can install a user-provisioned bare metal cluster on a restricted or disconnected network by using a mirror registry. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_bare_metal/preparing-to-install-on-bare-metal |
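Returning to the OpenShift Virtualization storage requirement noted earlier in this chapter, the ReadWriteMany constraint appears directly in the claims that back live-migratable virtual machine disks. A minimal, hedged sketch follows; the claim name, size, and storage class are placeholders, and any RWX-capable storage class can be used:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk-rwx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 30Gi
  storageClassName: <rwx-capable-storage-class>

A claim that only offers ReadWriteOnce still lets a virtual machine run, but live migration of that virtual machine will not be possible.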
Chapter 3. Stress testing real-time systems with stress-ng | Chapter 3. Stress testing real-time systems with stress-ng The stress-ng tool measures the system's capability to maintain a good level of efficiency under unfavorable conditions. The stress-ng tool is a stress workload generator to load and stress all kernel interfaces. It includes a wide range of stress mechanisms known as stressors. Stress testing makes a machine work hard and trip hardware issues such as thermal overruns and operating system bugs that occur when a system is being overworked. There are over 270 different tests. These include CPU-specific tests that exercise floating point, integer, bit manipulation, control flow, and virtual memory tests. Note Use the stress-ng tool with caution as some of the tests can impact the system's thermal zone trip points on poorly designed hardware. This can impact system performance and cause excessive system thrashing, which can be difficult to stop. 3.1. Testing CPU floating point units and processor data cache A floating point unit is the functional part of the processor that performs floating point arithmetic operations. Floating point units handle mathematical operations and make floating-point and decimal calculations simpler. Using the --matrix-method option, you can stress test the CPU floating point operations and processor data cache. Prerequisites You have root permissions on the system. Procedure To test the floating point on one CPU for 60 seconds, use the --matrix option: To run multiple stressors on more than one CPU for 60 seconds, use the --times or -t option: # stress-ng --matrix 0 -t 1m stress-ng --matrix 0 -t 1m --times stress-ng: info: [16783] dispatching hogs: 4 matrix stress-ng: info: [16783] successful run completed in 60.00s (1 min, 0.00 secs) stress-ng: info: [16783] for a 60.00s run time: stress-ng: info: [16783] 240.00s available CPU time stress-ng: info: [16783] 205.21s user time ( 85.50%) stress-ng: info: [16783] 0.32s system time ( 0.13%) stress-ng: info: [16783] 205.53s total time ( 85.64%) stress-ng: info: [16783] load average: 3.20 1.25 1.40 The special mode with 0 stressors queries the available CPUs to run on, removing the need to specify the CPU number. The total CPU time required is 4 x 60 seconds (240 seconds), of which 0.13% is in the kernel, 85.50% is in user time, and stress-ng runs 85.64% of all the CPUs. To test message passing between processes using a POSIX message queue, use the --mq option: # stress-ng --mq 0 -t 30s --times --perf The mq option configures a specific number of processes to force context switches using the POSIX message queue. This stress test aims for low data cache misses. 3.2. Testing CPU with multiple stress mechanisms The stress-ng tool runs multiple stress tests. In the default mode, it runs the specified stressor mechanisms in parallel. Prerequisites You have root privileges on the system. Procedure Run multiple instances of CPU stressors as follows: # stress-ng --cpu 2 --matrix 1 --mq 3 -t 5m In the example, stress-ng runs two instances of the CPU stressors, one instance of the matrix stressor, and three instances of the message queue stressor to test for five minutes. To run all stress tests in parallel, use the --all option: In this example, stress-ng runs two instances of all stress tests in parallel. To run each different stressor in a specific sequence, use the --seq option.
In this example, stress-ng runs four instances of each stressor, one stressor at a time, for 20 seconds. To exclude specific stressors from a test run, use the -x option: # stress-ng --seq 1 -x numa,matrix,hdd In this example, stress-ng runs all stressors, one instance of each, excluding the numa , matrix , and hdd stressor mechanisms. 3.3. Measuring CPU heat generation To measure the CPU heat generation, the specified stressors generate high temperatures for a short time duration to test the system's cooling reliability and stability under maximum heat generation. Using the --matrix-size option, you can measure CPU temperatures in degrees Celsius over a short time duration. Prerequisites You have root privileges on the system. Procedure To test the CPU behavior at high temperatures for a specified time duration, run the following command: # stress-ng --matrix 0 --matrix-size 64 --tz -t 60 stress-ng: info: [18351] dispatching hogs: 4 matrix stress-ng: info: [18351] successful run completed in 60.00s (1 min, 0.00 secs) stress-ng: info: [18351] matrix: stress-ng: info: [18351] x86_pkg_temp 88.00 degC stress-ng: info: [18351] acpitz 87.00 degC In this example, stress-ng reports that the processor package thermal zone reached 88 degrees Celsius over the 60 second run. Optional: To print a report at the end of a run, use the --tz option: # stress-ng --cpu 0 --tz -t 60 stress-ng: info: [18065] dispatching hogs: 4 cpu stress-ng: info: [18065] successful run completed in 60.07s (1 min, 0.07 secs) stress-ng: info: [18065] cpu: stress-ng: info: [18065] x86_pkg_temp 88.75 degC stress-ng: info: [18065] acpitz 88.38 degC 3.4. Measuring test outcomes with bogo operations The stress-ng tool can measure stress test throughput by measuring the bogo operations per second. The size of a bogo operation depends on the stressor being run. The test outcomes are not precise, but they provide a rough estimate of the performance. You must not use this measurement as an accurate benchmark metric. These estimates help to understand the system performance changes on different kernel versions or different compiler versions used to build stress-ng . Use the --metrics-brief option to display the total available bogo operations and the matrix stressor performance on your machine. Prerequisites You have root privileges on the system. Procedure To measure test outcomes with bogo operations, use the --metrics-brief option: # stress-ng --matrix 0 -t 60s --metrics-brief stress-ng: info: [17579] dispatching hogs: 4 matrix stress-ng: info: [17579] successful run completed in 60.01s (1 min, 0.01 secs) stress-ng: info: [17579] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s stress-ng: info: [17579] (secs) (secs) (secs) (real time) (usr+sys time) stress-ng: info: [17579] matrix 349322 60.00 203.23 0.19 5822.03 1717.25 The --metrics-brief option displays the test outcomes and the total real-time bogo operations run by the matrix stressor for 60 seconds. 3.5. Generating virtual memory pressure When under memory pressure, the kernel starts writing pages out to swap. You can stress the virtual memory by using the --page-in option to force non-resident pages to swap back into the virtual memory. This causes the virtual memory subsystem to be heavily exercised. Using the --page-in option, you can enable this mode for the bigheap , mmap , and virtual machine ( vm ) stressors.
The --page-in option touches allocated pages that are not in core, forcing them to page in. Prerequisites You have root privileges on the system. Procedure To stress test virtual memory, use the --page-in option: # stress-ng --vm 2 --vm-bytes 2G --mmap 2 --mmap-bytes 2G --page-in In this example, stress-ng tests memory pressure on a system with 4GB of memory, which is less than the allocated buffer sizes: 2 x 2GB for the vm stressor and 2 x 2GB for the mmap stressor, with --page-in enabled. 3.6. Testing large interrupt loads on a device Running timers at high frequency can generate a large interrupt load. The --timer stressor with an appropriately selected timer frequency can force many interrupts per second. Prerequisites You have root permissions on the system. Procedure To generate an interrupt load, use the --timer option: # stress-ng --timer 32 --timer-freq 1000000 In this example, stress-ng tests 32 timer stressor instances at 1 MHz. 3.7. Generating major page faults in a program With stress-ng , you can test and analyze the page fault rate by generating major page faults in pages that are not loaded in memory. On new kernel versions, the userfaultfd mechanism notifies the fault handling threads about page faults in the virtual memory layout of a process. Prerequisites You have root permissions on the system. Procedure To generate major page faults on early kernel versions, use: # stress-ng --fault 0 --perf -t 1m To generate major page faults on new kernel versions, use: # stress-ng --userfaultfd 0 --perf -t 1m 3.8. Viewing CPU stress test mechanisms The CPU stress test contains methods to exercise a CPU. You can print a list of all methods by using the which option. If you do not specify a test method, by default the stressor uses all the methods in a round-robin fashion, testing the CPU with each method in turn. Prerequisites You have root permissions on the system. Procedure To print all available stressor mechanisms, use the which option: # stress-ng --cpu-method which To use a specific CPU stress method, use the --cpu-method option: # stress-ng --cpu 1 --cpu-method fft -t 1m 3.9. Using the verify mode The verify mode validates the results when a test is active. It sanity checks the memory contents from a test run and reports any unexpected failures. Not all stressors have a verify mode, and enabling it reduces the bogo operation statistics because of the extra verification step being run. Prerequisites You have root permissions on the system. Procedure To validate stress test results, use the --verify option: # stress-ng --vm 1 --vm-bytes 2G --verify -v In this example, stress-ng prints the output for an exhaustive memory check on virtually mapped memory using the vm stressor configured with --verify mode. It sanity checks the read and write results on the memory. | [
"stress-ng --matrix 1 -t 1m",
"stress-ng --matrix 0 -t 1m stress-ng --matrix 0 -t 1m --times stress-ng: info: [16783] dispatching hogs: 4 matrix stress-ng: info: [16783] successful run completed in 60.00s (1 min, 0.00 secs) stress-ng: info: [16783] for a 60.00s run time: stress-ng: info: [16783] 240.00s available CPU time stress-ng: info: [16783] 205.21s user time ( 85.50%) stress-ng: info: [16783] 0.32s system time ( 0.13%) stress-ng: info: [16783] 205.53s total time ( 85.64%) stress-ng: info: [16783] load average: 3.20 1.25 1.40",
"stress-ng --mq 0 -t 30s --times --perf",
"stress-ng --cpu 2 --matrix 1 --mq 3 -t 5m",
"stress-ng --all 2",
"stress-ng --seq 4 -t 20",
"stress-ng --seq 1 -x numa,matrix,hdd",
"stress-ng --matrix 0 --matrix-size 64 --tz -t 60 stress-ng: info: [18351] dispatching hogs: 4 matrix stress-ng: info: [18351] successful run completed in 60.00s (1 min, 0.00 secs) stress-ng: info: [18351] matrix: stress-ng: info: [18351] x86_pkg_temp 88.00 degC stress-ng: info: [18351] acpitz 87.00 degC",
"stress-ng --cpu 0 --tz -t 60 stress-ng: info: [18065] dispatching hogs: 4 cpu stress-ng: info: [18065] successful run completed in 60.07s (1 min, 0.07 secs) stress-ng: info: [18065] cpu: stress-ng: info: [18065] x86_pkg_temp 88.75 degC stress-ng: info: [18065] acpitz 88.38 degC",
"stress-ng --matrix 0 -t 60s --metrics-brief stress-ng: info: [17579] dispatching hogs: 4 matrix stress-ng: info: [17579] successful run completed in 60.01s (1 min, 0.01 secs) stress-ng: info: [17579] stressor bogo ops real time usr time sys time bogo ops/s bogo ops/s stress-ng: info: [17579] (secs) (secs) (secs) (real time) (usr+sys time) stress-ng: info: [17579] matrix 349322 60.00 203.23 0.19 5822.03 1717.25",
"stress-ng --vm 2 --vm-bytes 2G --mmap 2 --mmap-bytes 2G --page-in",
"stress-ng --timer 32 --timer-freq 1000000",
"stress-ng --fault 0 --perf -t 1m",
"stress-ng --userfaultfd 0 --perf -t 1m",
"stress-ng --cpu-method which cpu-method must be one of: all ackermann bitops callfunc cdouble cfloat clongdouble correlate crc16 decimal32 decimal64 decimal128 dither djb2a double euler explog fft fibonacci float fnv1a gamma gcd gray hamming hanoi hyperbolic idct int128 int64 int32",
"stress-ng --cpu 1 --cpu-method fft -t 1m",
"stress-ng --vm 1 --vm-bytes 2G --verify -v"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/understanding_rhel_for_real_time/assembly_stress-testing-real-time-systems-with-stress-ng_understanding-rhel-for-real-time-core-concepts |
Chapter 5. Building functions | Chapter 5. Building functions To run a function, you first must build the function project. This happens automatically when using the kn func run command, but you can also build a function without running it. 5.1. Building a function Before you can run a function, you must build the function project. If you are using the kn func run command, the function is built automatically. However, you can use the kn func build command to build a function without running it, which can be useful for advanced users or debugging scenarios. The kn func build command creates an OCI container image that can be run locally on your computer or on an OpenShift Container Platform cluster. This command uses the function project name and the image registry name to construct a fully qualified image name for your function. 5.1.1. Image container types By default, kn func build creates a container image by using Red Hat Source-to-Image (S2I) technology. Example build command using Red Hat Source-to-Image (S2I) $ kn func build 5.1.2. Image registry types The OpenShift Container Registry is used by default as the image registry for storing function images. Example build command using OpenShift Container Registry $ kn func build Example output Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest You can override using OpenShift Container Registry as the default image registry by using the --registry flag: Example build command overriding OpenShift Container Registry to use quay.io $ kn func build --registry quay.io/username Example output Building function image Function image has been built, image: quay.io/username/example-function:latest 5.1.3. Push flag You can add the --push flag to a kn func build command to automatically push the function image after it is successfully built: Example build command using OpenShift Container Registry $ kn func build --push 5.1.4. Help command You can use the help command to learn more about kn func build command options: Build help command $ kn func help build | [
"kn func build",
"kn func build",
"Building function image Function image has been built, image: registry.redhat.io/example/example-function:latest",
"kn func build --registry quay.io/username",
"Building function image Function image has been built, image: quay.io/username/example-function:latest",
"kn func build --push",
"kn func help build"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/functions/serverless-functions-building |
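The build flags described above can be combined. For example, a sketch (the registry name is illustrative) that builds the function and immediately pushes the resulting image to a custom registry:
$ kn func build --registry quay.io/username --push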
Chapter 10. IngressClass [networking.k8s.io/v1] | Chapter 10. IngressClass [networking.k8s.io/v1] Description IngressClass represents the class of the Ingress, referenced by the Ingress Spec. The ingressclass.kubernetes.io/is-default-class annotation can be used to indicate that an IngressClass should be considered default. When a single IngressClass resource has this annotation set to true, new Ingress resources without a class specified will be assigned this default class. Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IngressClassSpec provides information about the class of an Ingress. 10.1.1. .spec Description IngressClassSpec provides information about the class of an Ingress. Type object Property Type Description controller string Controller refers to the name of the controller that should handle this class. This allows for different "flavors" that are controlled by the same controller. For example, you may have different Parameters for the same implementing controller. This should be specified as a domain-prefixed path no more than 250 characters in length, e.g. "acme.io/ingress-controller". This field is immutable. parameters object IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. 10.1.2. .spec.parameters Description IngressClassParametersReference identifies an API object. This can be used to specify a cluster or namespace-scoped resource. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced. name string Name is the name of resource being referenced. namespace string Namespace is the namespace of the resource being referenced. This field is required when scope is set to "Namespace" and must be unset when scope is set to "Cluster". scope string Scope represents if this refers to a cluster or namespace scoped resource. This may be set to "Cluster" (default) or "Namespace". 10.2. API endpoints The following API endpoints are available: /apis/networking.k8s.io/v1/ingressclasses DELETE : delete collection of IngressClass GET : list or watch objects of kind IngressClass POST : create an IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses GET : watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/networking.k8s.io/v1/ingressclasses/{name} DELETE : delete an IngressClass GET : read the specified IngressClass PATCH : partially update the specified IngressClass PUT : replace the specified IngressClass /apis/networking.k8s.io/v1/watch/ingressclasses/{name} GET : watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 10.2.1. /apis/networking.k8s.io/v1/ingressclasses Table 10.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of IngressClass Table 10.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 10.3. Body parameters Parameter Type Description body DeleteOptions schema Table 10.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind IngressClass Table 10.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.6. HTTP responses HTTP code Reponse body 200 - OK IngressClassList schema 401 - Unauthorized Empty HTTP method POST Description create an IngressClass Table 10.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.8. Body parameters Parameter Type Description body IngressClass schema Table 10.9. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 202 - Accepted IngressClass schema 401 - Unauthorized Empty 10.2.2. /apis/networking.k8s.io/v1/watch/ingressclasses Table 10.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of IngressClass. deprecated: use the 'watch' parameter with a list operation instead. Table 10.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 10.2.3. /apis/networking.k8s.io/v1/ingressclasses/{name} Table 10.12. Global path parameters Parameter Type Description name string name of the IngressClass Table 10.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an IngressClass Table 10.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. 
The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.15. Body parameters Parameter Type Description body DeleteOptions schema Table 10.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IngressClass Table 10.17. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IngressClass Table 10.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. 
Force flag must be unset for non-apply patch requests. Table 10.19. Body parameters Parameter Type Description body Patch schema Table 10.20. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IngressClass Table 10.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.22. Body parameters Parameter Type Description body IngressClass schema Table 10.23. HTTP responses HTTP code Reponse body 200 - OK IngressClass schema 201 - Created IngressClass schema 401 - Unauthorized Empty 10.2.4. /apis/networking.k8s.io/v1/watch/ingressclasses/{name} Table 10.24. Global path parameters Parameter Type Description name string name of the IngressClass Table 10.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind IngressClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 10.26. 
HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_apis/ingressclass-networking-k8s-io-v1
Chapter 28. Associating secondary interfaces metrics to network attachments | Chapter 28. Associating secondary interfaces metrics to network attachments 28.1. Extending secondary network metrics for monitoring Secondary devices, or interfaces, are used for different purposes. It is important to have a way to classify them to be able to aggregate the metrics for secondary devices with the same classification. Exposed metrics contain the interface but do not specify where the interface originates. This is workable when there are no additional interfaces. However, if secondary interfaces are added, it can be difficult to use the metrics since it is hard to identify interfaces using only interface names. When adding secondary interfaces, their names depend on the order in which they are added, and different secondary interfaces might belong to different networks and can be used for different purposes. With pod_network_name_info it is possible to extend the current metrics with additional information that identifies the interface type. In this way, it is possible to aggregate the metrics and to add specific alarms to specific interface types. The network type is generated using the name of the related NetworkAttachmentDefinition , that in turn is used to differentiate different classes of secondary networks. For example, different interfaces belonging to different networks or using different CNIs use different network attachment definition names. 28.1.1. Network Metrics Daemon The Network Metrics Daemon is a daemon component that collects and publishes network related metrics. The kubelet is already publishing network related metrics you can observe. These metrics are: container_network_receive_bytes_total container_network_receive_errors_total container_network_receive_packets_total container_network_receive_packets_dropped_total container_network_transmit_bytes_total container_network_transmit_errors_total container_network_transmit_packets_total container_network_transmit_packets_dropped_total The labels in these metrics contain, among others: Pod name Pod namespace Interface name (such as eth0 ) These metrics work well until new interfaces are added to the pod, for example via Multus , as it is not clear what the interface names refer to. The interface label refers to the interface name, but it is not clear what that interface is meant for. In case of many different interfaces, it would be impossible to understand what network the metrics you are monitoring refer to. This is addressed by introducing the new pod_network_name_info described in the following section. 28.1.2. Metrics with network name This daemonset publishes a pod_network_name_info gauge metric, with a fixed value of 0 : pod_network_name_info{interface="net0",namespace="namespacename",network_name="nadnamespace/firstNAD",pod="podname"} 0 The network name label is produced using the annotation added by Multus. It is the concatenation of the namespace the network attachment definition belongs to, plus the name of the network attachment definition. The new metric alone does not provide much value, but combined with the network related container_network_* metrics, it offers better support for monitoring secondary networks. 
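Because pod_network_name_info always has the value 0, adding it to a container_network_* series attaches the network_name label without changing the measured value. As a sketch (the 5m rate window is an arbitrary choice), per-network receive throughput can then be aggregated as follows:
sum by (network_name) ( rate(container_network_receive_bytes_total[5m]) + on(namespace,pod,interface) group_left(network_name) pod_network_name_info )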
Using PromQL queries like the following, it is possible to get a new metric containing the value and the network name retrieved from the k8s.v1.cni.cncf.io/network-status annotation: (container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) | [
"pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0",
"(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/associating-secondary-interfaces-metrics-to-network-attachments |
Chapter 3. CSINode [storage.k8s.io/v1] | Chapter 3. CSINode [storage.k8s.io/v1] Description CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata.name must be the Kubernetes node name. spec object CSINodeSpec holds information about the specification of all CSI drivers installed on a node 3.1.1. .spec Description CSINodeSpec holds information about the specification of all CSI drivers installed on a node Type object Required drivers Property Type Description drivers array drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty. drivers[] object CSINodeDriver holds information about the specification of one CSI driver installed on a node 3.1.2. .spec.drivers Description drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty. Type array 3.1.3. .spec.drivers[] Description CSINodeDriver holds information about the specification of one CSI driver installed on a node Type object Required name nodeID Property Type Description allocatable object VolumeNodeResources is a set of resource limits for scheduling of volumes. name string This is the name of the CSI driver that this object refers to. This MUST be the same name returned by the CSI GetPluginName() call for that driver. nodeID string nodeID of the node from the driver point of view. This field enables Kubernetes to communicate with storage systems that do not share the same nomenclature for nodes. For example, Kubernetes may refer to a given node as "node1", but the storage system may refer to the same node as "nodeA". When Kubernetes issues a command to the storage system to attach a volume to a specific node, it can use this field to refer to the node name using the ID that the storage system will understand, e.g. "nodeA" instead of "node1". This field is required. topologyKeys array (string) topologyKeys is the list of keys supported by the driver. When a driver is initialized on a cluster, it provides a set of topology keys that it understands (e.g. "company.com/zone", "company.com/region"). When a driver is initialized on a node, it provides the same topology keys along with values. 
Kubelet will expose these topology keys as labels on its own node object. When Kubernetes does topology aware provisioning, it can use this list to determine which labels it should retrieve from the node object and pass back to the driver. It is possible for different nodes to use different topology keys. This can be empty if driver does not support topology. 3.1.4. .spec.drivers[].allocatable Description VolumeNodeResources is a set of resource limits for scheduling of volumes. Type object Property Type Description count integer Maximum number of unique volumes managed by the CSI driver that can be used on a node. A volume that is both attached and mounted on a node is considered to be used once, not twice. The same rule applies for a unique volume that is shared among multiple pods on the same node. If this field is not specified, then the supported number of volumes on this node is unbounded. 3.2. API endpoints The following API endpoints are available: /apis/storage.k8s.io/v1/csinodes DELETE : delete collection of CSINode GET : list or watch objects of kind CSINode POST : create a CSINode /apis/storage.k8s.io/v1/watch/csinodes GET : watch individual changes to a list of CSINode. deprecated: use the 'watch' parameter with a list operation instead. /apis/storage.k8s.io/v1/csinodes/{name} DELETE : delete a CSINode GET : read the specified CSINode PATCH : partially update the specified CSINode PUT : replace the specified CSINode /apis/storage.k8s.io/v1/watch/csinodes/{name} GET : watch changes to an object of kind CSINode. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/storage.k8s.io/v1/csinodes Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CSINode Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CSINode Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK CSINodeList schema 401 - Unauthorized Empty HTTP method POST Description create a CSINode Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body CSINode schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 202 - Accepted CSINode schema 401 - Unauthorized Empty 3.2.2. /apis/storage.k8s.io/v1/watch/csinodes Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CSINode. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/storage.k8s.io/v1/csinodes/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the CSINode Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CSINode Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 202 - Accepted CSINode schema 401 - Unauthorized Empty HTTP method GET Description read the specified CSINode Table 3.17. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CSINode Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CSINode Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body CSINode schema Table 3.23. 
HTTP responses HTTP code Reponse body 200 - OK CSINode schema 201 - Created CSINode schema 401 - Unauthorized Empty 3.2.4. /apis/storage.k8s.io/v1/watch/csinodes/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the CSINode Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CSINode. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage_apis/csinode-storage-k8s-io-v1 |
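As an illustration of the list endpoint and the limit/continue chunking described above, the following commands show one way to query CSINode objects from an authenticated OpenShift CLI session; this is a minimal sketch, and the continue token placeholder must be copied from the metadata.continue field of the previous response rather than typed literally.
$ oc get csinodes
$ oc get --raw '/apis/storage.k8s.io/v1/csinodes?limit=2'                      # returns a CSINodeList; metadata.continue is set when more items exist
$ oc get --raw '/apis/storage.k8s.io/v1/csinodes?limit=2&continue=<token>'     # fetches the next chunk of the same consistent snapshot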
Chapter 17. Errata Management with Red Hat Satellite | Chapter 17. Errata Management with Red Hat Satellite Red Hat Virtualization can be configured to view errata from Red Hat Satellite in the Red Hat Virtualization Manager. This enables the administrator to receive updates about available errata, and their importance, for hosts, virtual machines, and the Manager once they have been associated with a Red Hat Satellite provider. Administrators can then choose to apply the updates by running an update on the required host, virtual machine, or on the Manager. For more information about Red Hat Satellite see the Red Hat Satellite Documentation . Red Hat Virtualization 4.3 supports errata management with Red Hat Satellite 6.5. Important The Manager, hosts, and virtual machines are identified in the Satellite server by their FQDN. This ensures that external content host IDs do not need to be maintained in Red Hat Virtualization. The Satellite account used to manage the Manager, hosts and virtual machines must have Administrator permissions and a default organization set. Configuring Red Hat Virtualization Errata To associate a Manager, host, and virtual machine with a Red Hat Satellite provider first the Manager must be associated with a provider. Then the host is associated with the same provider and configured. Finally, the virtual machine is associated with the same provider and configured. Associate the Manager by adding the required Satellite server as an external provider. See Section 14.2.1, "Adding a Red Hat Satellite Instance for Host Provisioning" for more information. Note The Manager must be registered to the Satellite server as a content host and have the katello-agent package installed. For more information on how to configure a host registration see Registering Hosts in the Red Hat Satellite document Managing Hosts . Optionally, configure the required hosts to display available errata. See Section 10.5.3, "Configuring Satellite Errata Management for a Host" for more information. Optionally, configure the required virtual machines to display available errata. The associated host needs to be configured prior to configuring the required virtual machines. See Configuring Red Hat Satellite Errata Management for a Virtual Machine in the Virtual Machine Management Guide for more information. Viewing Red Hat Virtualization Manager Errata Click Administration Errata . Select the Security , Bugs , or Enhancements check boxes to view only those errata types. For more information on viewing available errata for hosts see Section 10.5.21, "Viewing Host Errata" and for virtual machines see Viewing Red Hat Satellite Errata for a Virtual Machine in the Virtual Machine Management Guide . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/chap-errata_management_with_satellite |
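Before associating the Manager or a host with the Satellite provider, it can be useful to confirm that the machine is registered as a content host and that the katello-agent package is present. The following commands are a verification sketch rather than part of the documented procedure, and assume a RHEL-based machine registered with subscription-manager.
# subscription-manager identity    # confirms the registration and the organization the host belongs to
# rpm -q katello-agent             # verifies that the errata reporting agent is installed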
probe::netdev.change_mtu | probe::netdev.change_mtu Name probe::netdev.change_mtu - Called when the netdev MTU is changed Synopsis netdev.change_mtu Values old_mtu The current MTU new_mtu The new MTU dev_name The device that will have the MTU changed | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netdev-change-mtu |
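A minimal sketch of using this probe from the command line, assuming the systemtap package and matching kernel debuginfo are installed; the printf format is illustrative only.
$ stap -e 'probe netdev.change_mtu { printf("%s: MTU %d -> %d\n", dev_name, old_mtu, new_mtu) }'
Changing the MTU of an interface from another terminal, for example with ip link set eth0 mtu 1400, should then print one line per change.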
Chapter 5. Important changes to external kernel parameters | Chapter 5. Important changes to external kernel parameters This chapter provides system administrators with a summary of significant changes in the kernel distributed with Red Hat Enterprise Linux 9.4. These changes could include, for example, added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. New kernel parameters accept_memory= [MM] Values: lazy (default) By default, unaccepted memory is accepted lazily to avoid prolonged boot times. The lazy option adds some runtime overhead until all memory is eventually accepted. In most cases, the overhead is negligible. eager For some workloads or for debugging purposes, you can use accept_memory=eager to accept all memory at once during boot. arm64.nomops [ARM64] Unconditionally disable Memory Copy and Memory Set instructions support. cgroup_favordynmods= [KNL] Enable or disable favordynmods . Values: true false Defaults to the value of CONFIG_CGROUP_FAVOR_DYNMODS . early_page_ext [KNL] Enforces page_ext initialization to earlier stages to cover more early boot allocations. Note that as side effect, some optimizations might be disabled to achieve that: for example, parallelized memory initialization is disabled. Therefore, the boot process might take longer, especially on systems with much memory. Available with CONFIG_PAGE_EXTENSION=y . fw_devlink.sync_state= [KNL] When all devices that could probe have finished probing, this parameter controls what to do with devices that have not yet received their sync_state() calls. Values: strict (default) Continue waiting on consumers to probe successfully. timeout Give up waiting on consumers and call sync_state() on any devices that have not yet received their sync_state() calls after deferred_probe_timeout has expired or by late_initcall() if CONFIG_MODULES is false . ia32_emulation= [X86-64] Values: true Allows loading 32-bit programs and executing 32-bit syscalls, essentially overriding IA32_EMULATION_DEFAULT_DISABLED at boot time. false Unconditionally disables IA32 emulation. kunit.enable= [KUNIT] Enable executing KUnit tests. Requires CONFIG_KUNIT to be set to be fully enabled. You can override the default value using KUNIT_DEFAULT_ENABLED . The default is 1 (enabled). mtrr=debug [X86] Enable printing debug information related to MTRR registers at boot time. rcupdate.rcu_cpu_stall_cputime= [KNL] Provide statistics on the CPU time and count of interrupts and tasks during the sampling period. For multiple continuous RCU stalls, all sampling periods begin at half of the first RCU stall timeout. rcupdate.rcu_exp_stall_task_details= [KNL] Print stack dumps of any tasks blocking the current expedited RCU grace period during an expedited RCU CPU stall warning. spec_rstack_overflow= [X86] Control RAS overflow mitigation on AMD Zen CPUs. Values: off Disable mitigation microcode Enable only microcode mitigation. safe-ret (default) Enable software-only safe RET mitigation. ibpb Enable mitigation by issuing IBPB on kernel entry. ibpb-vmexit Issue IBPB only on VMEXIT. This mitigation is specific to cloud environments. workqueue.unbound_cpus= [KNL,SMP] Specify to constrain one or some CPUs to use in unbound workqueues. Value: A list of CPUs. By default, all online CPUs are available for unbound workqueues. Updated kernel parameters amd_iommu= [HW, X86-64] Pass parameters to the AMD IOMMU driver in the system. 
Values: fullflush Deprecated, equivalent to iommu.strict=1 . off Do not initialize any AMD IOMMU found in the system. force_isolation Force device isolation for all devices. The IOMMU driver is not allowed anymore to lift isolation requirements as needed. This option does not override iommu=pt . force_enable Force enable the IOMMU on platforms known to be buggy with IOMMU enabled. Use this option with care. New: pgtbl_v1 (default) Use version 1 page table for DMA-API. New: pgtbl_v2 Use version 2 page table for DMA-API. New: irtcachedis Disable Interrupt Remapping Table (IRT) caching. nosmt [KNL, PPC , S390] Disable symmetric multithreading (SMT). Equivalent to smt=1 . [KNL, X86, PPC ] Disable symmetric multithreading (SMT). nosmt=force Force disable SMT. Cannot be undone using the sysfs control file. page_reporting.page_reporting_order= [KNL] Minimal page reporting order. Value: integer. Adjust the minimal page reporting order. New: The page reporting is disabled when it exceeds MAX_ORDER . tsc= Disable clocksource stability checks for TSC. Values: [x86] reliable Mark tsc clocksource as reliable. This disables clocksource verification at runtime, and the stability checks done at bootup. Used to enable high-resolution timer mode on older hardware, and in virtualized environment. [x86] noirqtime Do not use TSC to do irq accounting. Used to run time disable IRQ_TIME_ACCOUNTING on any platforms where RDTSC is slow and this accounting might add overhead. [x86] unstable Mark the TSC clocksource as unstable. This marks the TSC unconditionally unstable at bootup and avoids any further wobbles once the TSC watchdog notices. [x86] nowatchdog Disable clocksource watchdog. Used in situations with strict latency requirements, where interruptions from clocksource watchdog are not acceptable. [x86] recalibrate Force recalibration against a HW timer (HPET or PM timer) on systems whose TSC frequency was obtained from HW or FW using either an MSR or CPUID(0x15). Warn if the difference is more than 500 ppm. New: [x86] watchdog Use TSC as the watchdog clocksource with which to check other HW timers (HPET or PM timer), but only on systems where TSC has been deemed trustworthy. An earlier tsc=nowatchdog suppresses this. A later tsc=nowatchdog overrides this. A console message flags any such suppression or overriding. usbcore.authorized_default= [USB] Default USB device authorization. Values: New: -1 (default) Authorized (same as 1). 0 Not authorized. 1 Authorized. 2 Authorized if the device connects to an internal port. Removed kernel parameters cpu0_hotplug sysfs.deprecated New sysctl parameters io_uring_group Values: 1 A process must either be privileged ( CAP_SYS_ADMIN ) or be in the io_uring_group group to create an io_uring instance. -1 (default) Only processes with the CAP_SYS_ADMIN capability can create io_uring instances. numa_balancing_promote_rate_limit_MBps Too high promotion or demotion throughput between different memory types might hurt application latency. You can use this parameter to rate-limit the promotion throughput. The per-node maximum promotion throughput in MB/s is limited to be no more than the set value. A rule of thumb is to set this to less than 1/10 of the PMEM node write bandwidth. Updated sysctl parameters io_uring_disabled Prevents all processes from creating new io_uring instances. Enabling this shrinks the attack surface of the kernel. Values: New: 0 All processes can create io_uring instances as normal. 
New: 1 io_uring creation is disabled for unprivileged processes not in the io_uring_group group. io_uring_setup() fails with -EPERM . Existing io_uring instances can still be used. See the documentation for io_uring_group for more information. New: 2 (default) io_uring creation is disabled for all processes. io_uring_setup() always fails with -EPERM . Existing io_uring instances can still be used. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.4_release_notes/kernel_parameters_changes |
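The following commands are one possible way to apply two of the settings described above on a running Red Hat Enterprise Linux 9.4 system; they are illustrative, the drop-in file name is arbitrary, and a reboot is required for the boot-time parameter to take effect.
# grubby --update-kernel=ALL --args="accept_memory=eager"          # accept all unaccepted memory at boot instead of lazily
# echo "kernel.io_uring_disabled = 1" > /etc/sysctl.d/99-io-uring.conf
# sysctl -p /etc/sysctl.d/99-io-uring.conf                          # restrict io_uring creation to privileged processes or members of io_uring_group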
function::proc_mem_txt_pid | function::proc_mem_txt_pid Name function::proc_mem_txt_pid - Program text (code) size in pages Synopsis Arguments pid The pid of process to examine Description Returns the given process text (code) size in pages, or zero when the process doesn't exist or the number of pages couldn't be retrieved. | [
"function proc_mem_txt_pid:long(pid:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-proc-mem-txt-pid |
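A minimal usage sketch, assuming the systemtap package and kernel debuginfo are installed; pid 1 is used purely as an example and any running pid can be substituted.
$ stap -e 'probe timer.s(5) { printf("text pages for pid 1: %d\n", proc_mem_txt_pid(1)) }'
The function returns 0 when the process does not exist, so the printed value should be checked accordingly.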
9. Networking | 9. Networking 9.1. Multiqueue Networking Every data packet transferred over a network device represents processing which must be completed by a CPU. The low-level network implementation in Red Hat Enterprise Linux 6 allows network device drivers to divide network packet processing across multiple queues. Dividing these processes allows a system to better utilize the multiple processors and CPU cores present on modern systems. 9.2. Internet Protocol version 6 (IPv6) The next-generation Internet Protocol version 6 (IPv6) specification is designed as the successor to Internet Protocol version 4 (IPv4). IPv6 specifies a wide range of improvements over IPv4, including expanded addressing capabilities, flow labeling, and simplified header formats. 9.2.1. Optimistic Duplicate Address Detection Duplicate Address Detection (DAD) is a feature of the Neighbor Discovery Protocol portion of IPv6. Specifically, DAD is tasked with checking if an IPv6 address is already being used. Red Hat Enterprise Linux features Optimistic Duplicate Address Detection, a speed optimization of DAD. 9.2.2. Intra-Site Automatic Tunnel Addressing Protocol Red Hat Enterprise Linux 6 features support for the Intra-Site Automatic Tunnel Addressing Protocol (ISATAP). ISATAP is a protocol designed to assist in the transition from IPv4 to IPv6 by providing a mechanism to connect IPv6 routers and hosts over IPv4 network infrastructure. 9.3. Netlabel Netlabel is a new kernel-level feature in Red Hat Enterprise Linux 6 that provides network packet labeling services for Linux Security Modules (LSMs). Labeling data packets using netlabel allows an LSM to better enforce security requirements on incoming network packets. 9.4. Generic Receive Offload The low-level network implementation in Red Hat Enterprise Linux 6 features Generic Receive Offload (GRO) support. The GRO system increases the performance of inbound network connections by reducing the amount of processing done by the CPU. GRO implements the same technique as the Large Receive Offload (LRO) system, but can be applied to a wider range of transport layer protocols. 9.5. Wireless Support Red Hat Enterprise Linux 6 contains enhanced support for wireless networking and devices. Support for wireless local area networking using the IEEE 802.11 set of standards has been improved, with added support for 802.11n based wireless networking. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/networking |
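As a brief illustration of the GRO feature described above, the following ethtool commands show how the offload state of an interface can be inspected and toggled; the interface name eth0 is an assumption and must be replaced with a real device.
$ ethtool -k eth0 | grep generic-receive-offload    # reports whether GRO is currently on or off
# ethtool -K eth0 gro off                           # disables GRO, for example when debugging packet captures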
24.7.2. Useful Websites | 24.7.2. Useful Websites http://www.apache.org/ - The Apache Software Foundation . http://httpd.apache.org/docs-2.0/ - The Apache Software Foundation's documentation on Apache HTTP Server version 2.0, including the Apache HTTP Server Version 2.0 User's Guide . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/httpd_configuration_additional_resources-useful_websites |
Project APIs | Project APIs OpenShift Container Platform 4.12 Reference guide for project APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/project_apis/index |
Chapter 1. Red Hat Quay builds overview | Chapter 1. Red Hat Quay builds overview Red Hat Quay builds , or just builds , are a feature that enables the automation of container image builds. The builds feature uses worker nodes to build images from Dockerfiles or other build specifications. These builds can be triggered manually or automatically via webhooks from repositories like GitHub, allowing users to integrate continuous integration (CI) and continuous delivery (CD) pipelines into their workflow. The builds feature is supported on Red Hat Quay on OpenShift Container Platform and Kubernetes clusters. For Operator-based deployments and Kubernetes clusters, builds are created by using a build manager that coordinates and handles the build jobs. Builds support building from Dockerfiles on both bare metal platforms and on virtualized platforms with virtual builders . This versatility allows organizations to adapt to existing infrastructure while leveraging Red Hat Quay's container image build capabilities. The key features of Red Hat Quay builds include: Automated builds triggered by code commits or version control events Support for Docker and Podman container images Fine-grained control over build environments and resources Integration with Kubernetes and OpenShift Container Platform for scalable builds Compatibility with bare metal and virtualized infrastructure Note Running builds directly in a container on bare metal platforms does not have the same isolation as when using virtual machines; however, it still provides good protection. Builds are highly complex, and administrators are encouraged to review the Build automation architecture guide before continuing. 1.1. Building container images Building container images involves creating a blueprint for a containerized application. Blueprints rely on base images from other public repositories that define how the application should be installed and configured. Red Hat Quay supports the ability to build Docker and Podman container images. This functionality is valuable for developers and organizations who rely on containers and container orchestration. 1.1.1. Build contexts When building an image with Docker or Podman, a directory is specified to become the build context . This is true for both manual Builds and Build triggers, because the Build that is created is no different from running docker build or podman build on your local machine. Build contexts are always specified in the subdirectory from the Build setup, and fall back to the root of the Build source if a directory is not specified. When a build is triggered, Build workers clone the Git repository to the worker machine, and then enter the Build context before conducting a Build. For Builds based on .tar archives, Build workers extract the archive and enter the Build context. For example: Extracted Build archive example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile Imagine that the Extracted Build archive is the directory structure of a GitHub repository called example. If no subdirectory is specified in the Build trigger setup, or when manually starting the Build, the Build operates in the example directory. If a subdirectory is specified in the Build trigger setup, for example, subdir , only the Dockerfile within it is visible to the Build. This means that you cannot use the ADD command in the Dockerfile to add file , because it is outside of the Build context. 
Unlike Docker Hub, the Dockerfile is part of the Build context on Red Hat Quay. As a result, it must not appear in the .dockerignore file. | [
"example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3/html/builders_and_image_automation/builds-overview |
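The subdirectory behavior described above can be approximated locally before configuring a Build trigger. The following commands are a sketch: the repository URL, registry, and namespace are placeholders, and Podman is assumed to be installed.
$ git clone https://github.com/<org>/example.git
$ cd example/subdir
$ podman build -t <registry>/<namespace>/example:latest .    # builds with subdir as the context, so ../file cannot be referenced with ADD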
Chapter 17. Account Console | Chapter 17. Account Console Red Hat Single Sign-On users can manage their accounts through the Account Console. Users can configure their profiles, add two-factor authentication, include identity provider accounts, and oversee device activity. Additional resources The Account Console can be configured in terms of appearance and language preferences. An example is adding attributes to the Personal info page by clicking Personal info link and completing and saving details. For more information, see reference:https://access.redhat.com/documentation/en-us/red_hat_single_sign-on/7.6/html-single/server_developer_guide/[Server Developer Guide]. 17.1. Accessing the Account Console Any user can access the Account Console. Procedure Make note of the realm name and IP address for the Red Hat Single Sign-On server where your account exists. In a web browser, enter a URL in this format: <server-root>/auth/realms/{realm-name}/account . Enter your login name and password. Account Console 17.2. Configuring ways to sign in You can sign in to this console using basic authentication (a login name and password) or two-factor authentication. For two-factor authentication, use one of the following procedures. 17.2.1. Two-factor authentication with OTP Prerequisites OTP is a valid authentication mechanism for your realm. Procedure Click Account security in the menu. Click Signing in . Click Set up authenticator application . Signing in Follow the directions that appear on the screen to use either FreeOTP or Google Authenticator on your mobile device as your OTP generator. Scan the QR code in the screen shot into the OTP generator on your mobile device. Log out and log in again. Respond to the prompt by entering an OTP that is provided on your mobile device. 17.2.2. Two-factor authentication with WebAuthn Prerequisites WebAuthn is a valid two-factor authentication mechanism for your realm. Please follow the WebAuthn section for more details. Procedure Click Account Security in the menu. Click Signing In . Click Set up Security Key . Signing In Prepare your WebAuthn Security Key. How you prepare this key depends on the type of WebAuthn security key you use. For example, for a USB based Yubikey, you may need to put your key into the USB port on your laptop. Click Register to register your security key. Log out and log in again. Assuming authentication flow was correctly set, a message appears asking you to authenticate with your Security Key as second factor. 17.2.3. Passwordless authentication with WebAuthn Prerequisites WebAuthn is a valid passwordless authentication mechanism for your realm. Please follow the Passwordless WebAuthn section for more details. Procedure Click Account Security in the menu. Click Signing In . Click Set up Security Key in the Passwordless section. Signing In Prepare your WebAuthn Security Key. How you prepare this key depends on the type of WebAuthn security key you use. For example, for a USB based Yubikey, you may need to put your key into the USB port on your laptop. Click Register to register your security key. Log out and log in again. Assuming authentication flow was correctly set, a message appears asking you to authenticate with your Security Key as second factor. You no longer need to provide your password to log in. 17.3. Viewing device activity You can view the devices that are logged in to your account. Procedure Click Account security in the menu. Click Device activity . Log out a device if it looks suspicious. Devices 17.4. 
Adding an identity provider account You can link your account with an identity broker . This option is often used to link social provider accounts. Procedure Log into the Admin Console. Click Identity providers in the menu. Select a provider and complete the fields. Return to the Account Console. Click Account security in the menu. Click Linked accounts . The identity provider you added appears on this page. Linked Accounts 17.5. Accessing other applications The Applications menu item shows which applications you can access. In this case, only the Account Console is available. Applications | null | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/server_administration_guide/account-service |
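As a concrete illustration of the URL format mentioned in the access procedure, a server reachable at sso.example.com with a realm named example-realm would expose the Account Console at the following address; both names are hypothetical.
https://sso.example.com/auth/realms/example-realm/account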
Chapter 21. Requesting CRI-O and Kubelet profiling data by using the Node Observability Operator | Chapter 21. Requesting CRI-O and Kubelet profiling data by using the Node Observability Operator The Node Observability Operator collects and stores the CRI-O and Kubelet profiling data of worker nodes. You can query the profiling data to analyze the CRI-O and Kubelet performance trends and debug the performance related issues. Important The Node Observability Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 21.1. Workflow of the Node Observability Operator The following workflow outlines on how to query the profiling data using the Node Observability Operator: Install the Node Observability Operator in the OpenShift Container Platform cluster. Create a NodeObservability custom resource to enable the CRI-O profiling on the worker nodes of your choice. Run the profiling query to generate the profiling data. 21.2. Installing the Node Observability Operator The Node Observability Operator is not installed in OpenShift Container Platform by default. You can install the Node Observability Operator by using the OpenShift Container Platform CLI or the web console. 21.2.1. Installing the Node Observability Operator using the CLI You can install the Node Observability Operator by using the OpenShift CLI (oc). Prerequisites You have installed the OpenShift CLI (oc). You have access to the cluster with cluster-admin privileges. Procedure Confirm that the Node Observability Operator is available by running the following command: USD oc get packagemanifests -n openshift-marketplace node-observability-operator Example output NAME CATALOG AGE node-observability-operator Red Hat Operators 9h Create the node-observability-operator namespace by running the following command: USD oc new-project node-observability-operator Create an OperatorGroup object YAML file: cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-observability-operator namespace: node-observability-operator spec: targetNamespaces: [] EOF Create a Subscription object YAML file to subscribe a namespace to an Operator: cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-observability-operator namespace: node-observability-operator spec: channel: alpha name: node-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Verification View the install plan name by running the following command: USD oc -n node-observability-operator get sub node-observability-operator -o yaml | yq '.status.installplan.name' Example output install-dt54w Verify the install plan status by running the following command: USD oc -n node-observability-operator get ip <install_plan_name> -o yaml | yq '.status.phase' <install_plan_name> is the install plan name that you obtained from the output of the command. 
Example output COMPLETE Verify that the Node Observability Operator is up and running: USD oc get deploy -n node-observability-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE node-observability-operator-controller-manager 1/1 1 1 40h 21.2.2. Installing the Node Observability Operator using the web console You can install the Node Observability Operator from the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. In the Administrator's navigation panel, expand Operators OperatorHub . In the All items field, enter Node Observability Operator and select the Node Observability Operator tile. Click Install . On the Install Operator page, configure the following settings: In the Update channel area, click alpha . In the Installation mode area, click A specific namespace on the cluster . From the Installed Namespace list, select node-observability-operator from the list. In the Update approval area, select Automatic . Click Install . Verification In the Administrator's navigation panel, expand Operators Installed Operators . Verify that the Node Observability Operator is listed in the Operators list. 21.3. Creating the Node Observability custom resource You must create and run the NodeObservability custom resource (CR) before you run the profiling query. When you run the NodeObservability CR, it creates the necessary machine config and machine config pool CRs to enable the CRI-O profiling on the worker nodes. Important Creating a NodeObservability CR reboots all the worker nodes. It might take 10 or more minutes to complete. Note Kubelet profiling is enabled by default. The CRI-O unix socket of the node is mounted on the agent pod, which allows the agent to communicate with CRI-O to run the pprof request. Similarly, the kubelet-serving-ca certificate chain is mounted on the agent pod, which allows secure communication between the agent and node's kubelet endpoint. Prerequisites You have installed the Node Observability Operator. You have installed the OpenShift CLI (oc). You have access to the cluster with cluster-admin privileges. Procedure Log in to the OpenShift Container Platform CLI by running the following command: USD oc login -u kubeadmin https://<HOSTNAME>:6443 Switch back to the node-observability-operator namespace by running the following command: USD oc project node-observability-operator Create a CR file named nodeobservability.yaml that contains the following text: apiVersion: nodeobservability.olm.openshift.io/v1alpha1 kind: NodeObservability metadata: name: cluster 1 spec: labels: node-role.kubernetes.io/worker: "" type: crio-kubelet 1 You must specify the name as cluster because there should be only one NodeObservability CR per cluster. Run the NodeObservability CR: oc apply -f nodeobservability.yaml Example output nodeobservability.olm.openshift.io/cluster created Review the status of the NodeObservability CR by running the following command: USD oc get nob/cluster -o yaml | yq '.status.conditions' Example output conditions: conditions: - lastTransitionTime: "2022-07-05T07:33:54Z" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityMachineConfig ready: true' reason: Ready status: "True" type: Ready NodeObservability CR run is completed when the reason is Ready and the status is True . 21.4. 
Running the profiling query To run the profiling query, you must create a NodeObservabilityRun resource. The profiling query is a blocking operation that fetches CRI-O and Kubelet profiling data for a duration of 30 seconds. After the profiling query is complete, you must retrieve the profiling data inside the container file system /run/node-observability directory. Important You can request only one profiling query at any point of time. Prerequisites You have installed the Node Observability Operator. You have created the NodeObservability custom resource (CR). You have access to the cluster with cluster-admin privileges. Procedure Create a NodeObservabilityRun resource file named nodeobservabilityrun.yaml that contains the following text: apiVersion: nodeobservability.olm.openshift.io/v1alpha1 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun spec: nodeObservabilityRef: name: cluster Trigger the profiling query by running the NodeObservabilityRun resource: USD oc apply -f nodeobservabilityrun.yaml Review the status of the NodeObservabilityRun by running the following command: USD oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq '.status.conditions' Example output conditions: - lastTransitionTime: "2022-07-07T14:57:34Z" message: Ready to start profiling reason: Ready status: "True" type: Ready - lastTransitionTime: "2022-07-07T14:58:10Z" message: Profiling query done reason: Finished status: "True" type: Finished The profiling query is complete once the status is True and type is Finished . Retrieve the profiling data from the container's /run/node-observability path by running the following bash script: for a in USD(oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq .status.agents[].name); do echo "agent USD{a}" mkdir -p "/tmp/USD{a}" for p in USD(oc exec "USD{a}" -c node-observability-agent -- bash -c "ls /run/node-observability/*.pprof"); do f="USD(basename USD{p})" echo "copying USD{f} to /tmp/USD{a}/USD{f}" oc exec "USD{a}" -c node-observability-agent -- cat "USD{p}" > "/tmp/USD{a}/USD{f}" done done | [
"oc get packagemanifests -n openshift-marketplace node-observability-operator",
"NAME CATALOG AGE node-observability-operator Red Hat Operators 9h",
"oc new-project node-observability-operator",
"cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: node-observability-operator namespace: node-observability-operator spec: targetNamespaces: [] EOF",
"cat <<EOF | oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: node-observability-operator namespace: node-observability-operator spec: channel: alpha name: node-observability-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc -n node-observability-operator get sub node-observability-operator -o yaml | yq '.status.installplan.name'",
"install-dt54w",
"oc -n node-observability-operator get ip <install_plan_name> -o yaml | yq '.status.phase'",
"COMPLETE",
"oc get deploy -n node-observability-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE node-observability-operator-controller-manager 1/1 1 1 40h",
"oc login -u kubeadmin https://<HOSTNAME>:6443",
"oc project node-observability-operator",
"apiVersion: nodeobservability.olm.openshift.io/v1alpha1 kind: NodeObservability metadata: name: cluster 1 spec: labels: node-role.kubernetes.io/worker: \"\" type: crio-kubelet",
"apply -f nodeobservability.yaml",
"nodeobservability.olm.openshift.io/cluster created",
"oc get nob/cluster -o yaml | yq '.status.conditions'",
"conditions: conditions: - lastTransitionTime: \"2022-07-05T07:33:54Z\" message: 'DaemonSet node-observability-ds ready: true NodeObservabilityMachineConfig ready: true' reason: Ready status: \"True\" type: Ready",
"apiVersion: nodeobservability.olm.openshift.io/v1alpha1 kind: NodeObservabilityRun metadata: name: nodeobservabilityrun spec: nodeObservabilityRef: name: cluster",
"oc apply -f nodeobservabilityrun.yaml",
"oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq '.status.conditions'",
"conditions: - lastTransitionTime: \"2022-07-07T14:57:34Z\" message: Ready to start profiling reason: Ready status: \"True\" type: Ready - lastTransitionTime: \"2022-07-07T14:58:10Z\" message: Profiling query done reason: Finished status: \"True\" type: Finished",
"for a in USD(oc get nodeobservabilityrun nodeobservabilityrun -o yaml | yq .status.agents[].name); do echo \"agent USD{a}\" mkdir -p \"/tmp/USD{a}\" for p in USD(oc exec \"USD{a}\" -c node-observability-agent -- bash -c \"ls /run/node-observability/*.pprof\"); do f=\"USD(basename USD{p})\" echo \"copying USD{f} to /tmp/USD{a}/USD{f}\" oc exec \"USD{a}\" -c node-observability-agent -- cat \"USD{p}\" > \"/tmp/USD{a}/USD{f}\" done done"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/using-node-observability-operator |
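The procedure above ends with the .pprof files copied under /tmp/<agent_name>/. One way to inspect them afterwards, not covered by this chapter and assuming the Go toolchain is available on the workstation, is the standard pprof viewer; the file names shown are placeholders.
$ go tool pprof -top /tmp/<agent_name>/<profile>.pprof         # prints the functions consuming the most CPU time
$ go tool pprof -http=:8080 /tmp/<agent_name>/<profile>.pprof  # serves an interactive view, including a flame graph, in a browser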
Chapter 1. Preface | Chapter 1. Preface Using automation execution environments to automate content within the Red Hat Ansible Automation Platform You can use Execution Environments as reproducible, portable, consistent and shareable container images. They control all of the dependencies of an Ansible Automation Platform job's runtime environment from system dependencies, Python dependencies, Ansible versions, and Ansible content in the form of Collections. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_creator_guide/preface |
Extension APIs | Extension APIs OpenShift Container Platform 4.18 Reference guide for extension APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/extension_apis/index |
Configuration and schema reference | Configuration and schema reference Red Hat Directory Server 12 Core server configuration attributes and server schema reference Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuration_and_schema_reference/index |
Chapter 2. Eclipse Temurin features | Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes that the latest OpenJDK 21 release of Eclipse Temurin includes, see OpenJDK 21.0.2 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 21.0.2 release: KEEPALIVE extended socket options support added on Windows On Windows 10 version 1709 or later platforms, the java.net.ExtendedSocketOptions class now supports the TCP_KEEPIDLE and TCP_KEEPINTERVAL options. Similarly, on Windows 10 version 1703 or later platforms, the java.net.ExtendedSocketOptions class now supports the TCP_KEEPCOUNT option. See JDK-8308593 (JDK Bug System) . Fixed potential JVM failures when using ZGC and a non-default ObjectAlignmentInBytes value In the initial release of OpenJDK 21, if you ran the JVM with the -XX:+UseZGC option and a non-default value for -XX:ObjectAlignmentInBytes , the JVM could fail or malfunction. OpenJDK 21.0.2 resolves this issue to ensure that you can successfully use the Z Garbage Collector (ZGC) and non-default values for Java object alignment when running the JVM. See JDK-8315082 (JDK Bug System) . Peak values for committed memory included in NMT reports In OpenJDK 21.0.2, Native Memory Tracking (NMT) reports now show the peak value for all categories. The peak value is the highest value for committed memory in a given NMT category over the lifetime of the JVM process. If the committed memory for a category is currently at its highest value, the NMT report shows an at peak value; otherwise, the NMT report shows the historic peak value. For example, the following report output shows that compiler arena memory peaked above 6 MB but is now approximately 200KB: See JDK-8317772 (JDK Bug System) . JVM warnings about unsupported THPs on Linux On Linux platforms, if Transparent Huge Pages (THPs) are requested but not supported, the JVM now prints the following message to standard output: See JDK-8313782 (JDK Bug System) . Let's Encrypt ISRG Root X2 CA certificate added In OpenJDK 21.0.2, the cacerts truststore includes the Internet Security Research Group (ISRG) Root X2 certificate authority (CA) certificate from Let's Encrypt: Name: Let's Encrypt Alias name: letsencryptisrgx2 Distinguished name: CN=ISRG Root X2, O=Internet Security Research Group, C=US See JDK-8317374 (JDK Bug System) . Digicert, Inc. root certificates added In OpenJDK 21.0.2, the cacerts truststore includes four Digicert, Inc. root certificates: Certificate 1 Name: DigiCert, Inc. Alias name: digicertcseccrootg5 Distinguished name: CN=DigiCert CS ECC P384 Root G5, O="DigiCert, Inc.", C=US Certificate 2 Name: DigiCert, Inc. Alias name: digicertcsrsarootg5 Distinguished name: CN=DigiCert CS RSA4096 Root G5, O="DigiCert, Inc.", C=US Certificate 3 Name: DigiCert, Inc. Alias name: digicerttlseccrootg5 Distinguished name: CN=DigiCert TLS ECC P384 Root G5, O="DigiCert, Inc.", C=US Certificate 4 Name: DigiCert, Inc. Alias name: digicerttlsrsarootg5 Distinguished name: CN=DigiCert TLS RSA4096 Root G5, O="DigiCert, Inc.", C=US See JDK-8318759 (JDK Bug System) . 
eMudhra Technologies Limited root certificates added In OpenJDK 21.0.2, the cacerts truststore includes three eMudhra Technologies Limited root certificates: Certificate 1 Name: eMudhra Technologies Limited Alias name: emsignrootcag1 Distinguished name: CN=emSign Root CA - G1, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN Certificate 2 Name: eMudhra Technologies Limited Alias name: emsigneccrootcag3 Distinguished name: CN=emSign ECC Root CA - G3, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN Certificate 3 Name: eMudhra Technologies Limited Alias name: emsignrootcag2 Distinguished name: CN=emSign Root CA - G2, O=eMudhra Technologies Limited, OU=emSign PKI, C=IN See JDK-8319187 (JDK Bug System) . Telia Root CA v2 certificate added In OpenJDK 21.0.2, the cacerts truststore includes the Telia Root CA v2 certificate: Name: Telia Root CA v2 Alias name: teliarootcav2 Distinguished name: CN=Telia Root CA v2, O=Telia Finland Oyj, C=FI See JDK-8317373 (JDK Bug System) . Revised on 2024-05-09 14:51:49 UTC | [
"Compiler (arena=196KB #4) (peak=6126KB #16)",
"UseTransparentHugePages disabled; transparent huge pages are not supported by the operating system."
]
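As a quick check of the newly added root CA certificates listed above, you can look them up in the default cacerts truststore with keytool. This is a minimal sketch; the alias names come from the release notes, and changeit is assumed to be the default truststore password:
# list the Let's Encrypt ISRG Root X2 entry added in 21.0.2
keytool -list -cacerts -storepass changeit -alias letsencryptisrgx2
# list the Telia Root CA v2 entry
keytool -list -cacerts -storepass changeit -alias teliarootcav2
Each command prints the certificate fingerprint if the alias is present in the truststore. Similarly, the peak values described in the NMT note appear in the output of jcmd <pid> VM.native_memory summary when the JVM was started with -XX:NativeMemoryTracking=summary.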
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.2/openjdk-temurin-features-21-0-2_openjdk |
Chapter 15. Installing on VMC | Chapter 15. Installing on VMC 15.1. Installing a cluster on VMC In OpenShift Container Platform version 4.7, you can install a cluster on VMware vSphere by deploying it to VMware Cloud (VMC) on AWS . Once you have configured your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 15.1.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the Internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any another Linux-based host; it must have Internet connectivity and the ability to upload an OVA to the ESXi hosts. 
Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 15.1.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 15.1.2. vSphere prerequisites Provision block registry storage . For more information on persistent storage, see Understanding persistent storage . Review details about the OpenShift Container Platform installation and update processes. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 15.1.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 15.1.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 15.1. 
Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 15.1.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 15.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 15.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 15.4. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 15.1.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 15.1. 
Roles and privileges required for installation vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone Folder.Create Folder.Delete Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 15.2. 
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing a OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes, invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. All nodes must be in the same VLAN. 
You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which an NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 15.5. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 15.1.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
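For example, a minimal sketch of a FIPS-compatible alternative to the ed25519 command above, using an ECDSA key (the output path ~/.ssh/id_ecdsa is an arbitrary choice):
# generate a passwordless ECDSA key on the NIST P-521 curve
ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa
The rest of the procedure is unchanged; you add whichever private key you generated to the ssh-agent in the next step.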
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.1.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.1.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 15.1.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. 
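Before you run the create cluster command, you can optionally confirm from the bastion host that the vCenter API is reachable and that its certificate is trusted after the root CA import described above. This is a sketch only; vcenter.sddc.example.com is a hypothetical hostname:
# fails with a certificate verification error if the vCenter root CA is not in the system trust
curl -sS -o /dev/null -w '%{http_code}\n' https://vcenter.sddc.example.com/
If the command prints an HTTP status code instead of a certificate verification error, connectivity and trust are both in place; a verification error usually means the update-ca-trust step did not take effect.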
Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Note Datastore and cluster names cannot exceed 60 characters; therefore, ensure the combined string length does not exceed the 60 character limit. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 15.1.11. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 15.1.11.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 15.1.11.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 15.1.11.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. 
Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 15.1.12. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.1.13. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 15.1.13.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 15.1.13.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 15.1.13.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. 
This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 15.1.13.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 15.1.14. 
Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 15.1.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.1.16. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 15.2. Installing a cluster on VMC with customizations In OpenShift Container Platform version 4.7, you can install a cluster on your VMware vSphere instance using installer-provisioned infrastructure by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. To customize the OpenShift Container Platform installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 15.2.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. 
Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the Internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any another Linux-based host; it must have Internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 15.2.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . 
With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 15.2.2. vSphere prerequisites Provision block registry storage . For more information on persistent storage, see Understanding persistent storage . Review details about the OpenShift Container Platform installation and update processes. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 15.2.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 15.2.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 15.6. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 15.2.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 15.7. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 15.8. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 15.9. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 15.2.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 15.3. 
Roles and privileges required for installation vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone Folder.Create Folder.Delete Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 15.4. 
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing a OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes, invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. All nodes must be in the same VLAN. 
You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 15.10. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 15.2.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
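For example, if the FIPS guidance above applies to your environment, you can generate an RSA key with the same command pattern. This is a minimal sketch, and the key path shown is only an example:
$ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
The -N '' option creates the key without a passphrase, which matches the password-less authentication requirement described earlier.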
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.2.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.2.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 15.2.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 15.2.10.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. 
Ensure that the field names for any parameters that are specified are correct. 15.2.10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.11. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.2.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.12. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.2.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.13. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. 
This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.2.10.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 15.14. Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines. 
If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 15.2.10.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 15.15. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value is 1 Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 15.2.10.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 15.2.10.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. 
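For example, to satisfy the requirement in callout 3 above, the noProxy value can combine your own bypass entries with the vCenter address and the machine network CIDR. This is a sketch only; the IP address and CIDR below are placeholders, not values taken from this document:
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: example.com,192.0.2.10,10.0.0.0/16
Here 192.0.2.10 stands in for the vCenter IP address and 10.0.0.0/16 for the IP range that the cluster machines use.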
Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.2.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 15.2.12. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. 
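If a copy of oc is already installed on the host, you can check which client version it reports before deciding whether to download a newer build. This is an optional check, not a step from the procedures that follow:
$ oc version --client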
Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 15.2.12.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 15.2.12.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 15.2.12.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 15.2.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.2.14. Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 15.2.14.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . 
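After you configure storage for the registry, one common way to switch the managementState is to patch the Image Registry Operator configuration directly. This is a sketch of that approach rather than a step taken from this guide:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
You can also make the same change interactively with oc edit configs.imageregistry.operator.openshift.io, which is shown later in this section for checking the registry configuration.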
Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 15.2.14.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 15.2.14.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 15.2.14.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters.
An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 15.2.15. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 15.2.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.2.17. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 15.3. 
Installing a cluster on VMC with network customizations In OpenShift Container Platform version 4.7, you can install a cluster on your VMware vSphere instance using installer-provisioned infrastructure with customized network configuration options by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. By customizing your OpenShift Container Platform network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing VXLAN configurations. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 15.3.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the Internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. 
The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have Internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access to the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 15.3.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure that runs AWS native services. When a VMware Cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single-tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 15.3.2. vSphere prerequisites Provision block registry storage . For more information on persistent storage, see Understanding persistent storage . Review details about the OpenShift Container Platform installation and update processes. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 15.3.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision.
During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 15.3.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 15.16. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 15.3.5. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 15.17. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 15.18. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 15.19. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 15.3.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. 
While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 15.5. Roles and privileges required for installation vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone Folder.Create Folder.Delete Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 15.6. 
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend to use vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. All nodes must be in the same VLAN.
You cannot scale the cluster using a second VLAN as a Day 2 operation. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which an NTP server prevents.
Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster.
DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 15.20. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster.
15.3.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
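For example, on a FIPS-enabled installation host you might generate an RSA key instead. This is only a sketch; the key size and the path are placeholders of your own choosing:
ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>
The remaining steps of this procedure are the same regardless of the key algorithm you choose.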
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.3.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.3.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 15.3.10. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 15.3.10.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. 
Ensure that the field names for any parameters that are specified are correct. 15.3.10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.21. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.3.10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.22. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. 
For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.3.10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.23. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. 
This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.3.10.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 15.24. Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines. 
If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 15.3.10.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 15.25. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value is 1 Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 15.3.10.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . 
By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 15.3.10.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.3.11. Network configuration phases When specifying a cluster configuration prior to installation, there are several phases in the installation procedures when you can modify the network configuration: Phase 1 After entering the openshift-install create install-config command. In the install-config.yaml file, you can customize the following network-related fields: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to "Installation configuration parameters". Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Phase 2 After entering the openshift-install create manifests command. If you must specify advanced network configuration, during this phase you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the cluster network provider during phase 2. 15.3.12. Specifying advanced network configuration You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster. Important Modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. 
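If you already know the values you want, you can also write the stub and the advanced configuration in a single pass instead of editing the file in the next step. The following sketch reuses the heredoc pattern from the previous step; the vxlanPort value is only illustrative:
cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    openshiftSDNConfig:
      vxlanPort: 4800
EOF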
Open the cluster-network-03-config.yml file in an editor and specify the advanced network configuration for your cluster, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
15.3.13. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster .
15.3.13.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 15.26. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 This value is read-only and specified in the install-config.yaml file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 This value is read-only and specified in the install-config.yaml file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 15.27. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider.
ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 15.28. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 15.29. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. 
Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 15.30. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 15.3.14. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
15.3.15. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc .
15.3.15.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command>
15.3.15.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command>
15.3.15.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now next to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command>
15.3.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
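If you prefer not to export the kubeconfig file, the kubeadmin credentials printed at the end of the installation can also be used directly. This is a sketch, assuming the api.<cluster_name>.<base_domain> DNS record that you created earlier and the default Kubernetes API port:
oc login https://api.<cluster_name>.<base_domain>:6443 -u kubeadmin -p <password>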
Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin
15.3.17. Creating registry storage After you install the cluster, you must create storage for the registry Operator.
15.3.17.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io."
15.3.17.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
15.3.17.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure.
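The earlier subsection notes that the Image Registry Operator bootstraps itself as Removed on platforms without shareable object storage. One way to switch the management state back to Managed while you configure storage is with a merge patch; this is a sketch rather than a required step of this procedure:
oc patch configs.imageregistry.operator.openshift.io cluster --type merge -p '{"spec":{"managementState":"Managed"}}'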
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 15.3.17.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 15.3.18. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 15.3.19. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.3.20. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 15.4. Installing a cluster on VMC in a restricted network In OpenShift Container Platform version 4.7, you can install a cluster on VMware vSphere infrastructure in a restricted network by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 15.4.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Allocate two IP addresses, outside the DHCP range, and configure them with reverse DNS records. A DNS record for api.<cluster_name>.<base_domain> pointing to the allocated IP address. A DNS record for *.apps.<cluster_name>.<base_domain> pointing to the allocated IP address. Configure the following firewall rules: An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. 
The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any another Linux-based host; it must have Internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 15.4.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 15.4.2. vSphere prerequisites Create a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. Provision block registry storage . For more information on persistent storage, see Understanding persistent storage . Review details about the OpenShift Container Platform installation and update processes. If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 15.4.3. About installations in restricted networks In OpenShift Container Platform 4.7, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. 
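The mirror registry that you create for a restricted network installation is later referenced from the install-config.yaml file through the imageContentSources parameter described in the optional parameters table earlier in this chapter. The following is a hypothetical sketch only: the mirror host name, port, and repository path are assumptions, and the source entries shown are the release repositories that are typically mirrored.
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev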
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions. 15.4.3.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 15.4.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to obtain the images that are necessary to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 15.4.5. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 15.31. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 15.4.6. 
Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 15.32. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 15.33. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 15.34. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 15.4.7. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 15.7. 
Roles and privileges required for installation vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.View vSphere vCenter Cluster Always Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone vSphere vCenter Datacenter If the installation program creates the virtual machine folder InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone Folder.Create Folder.Delete Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 15.8. 
Required permissions and propagation settings vSphere object Folder type Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Always True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend to use vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion. Using Storage vMotion can cause issues and is not supported. To help ensure the uptime of your compute and control plane nodes, it is recommended that you follow the VMware best practices for vMotion. It is also recommended to use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules. If you are using vSphere volumes in your pods, migrating a VM across datastores either manually or through Storage vMotion causes invalid references within OpenShift Container Platform persistent volume (PV) objects. These references prevent affected pods from starting up and can result in data loss. Similarly, OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You must use DHCP for the network and ensure that the DHCP server is configured to provide persistent IP addresses to the cluster machines. All nodes must be in the same VLAN.
You cannot scale the cluster using a second VLAN as a Day 2 operation. The VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Note It is recommended that each OpenShift Container Platform node in the cluster have access to a Network Time Protocol (NTP) server that is discoverable via DHCP. Installation is possible without an NTP server. However, asynchronous server clocks will cause errors, which an NTP server prevents. Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. Table 15.35. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 15.4.8. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core. When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa, of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm.
Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.4.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 15.4.10. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network VMware vSphere environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.7 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image. Upload the image you downloaded to a location that is accessible from the bastion server. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 15.4.11. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Retrieve a Red Hat Enterprise Linux CoreOS (RHCOS) image and upload it to an accessible location. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. 
Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the datacenter in your vCenter instance to connect to. Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name must be the same one that you used in the DNS records that you configured. Paste the pull secret from the Red Hat OpenShift Cluster Manager . In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example: platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry, which can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry. 
Add the image content resources, which look like this excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.example.com/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.example.com/ocp/release To complete these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 15.4.11.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. Important The openshift-install command does not validate field names for parameters. If an incorrect name is specified, the related file or object is not created, and no error is reported. Ensure that the field names for any parameters that are specified are correct. 15.4.11.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 15.36. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installer may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , openstack , ovirt , vsphere . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 15.4.11.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. 
For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 15.37. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The cluster network provider Container Network Interface (CNI) plug-in to install. Either OpenShiftSDN or OVNKubernetes . The default value is OpenShiftSDN . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network providers support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. If you specify multiple IP kernel arguments, the machineNetwork.cidr value must be the CIDR of the primary network. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 15.4.11.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 15.38. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. For details, see the following "Machine-pool" table. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, heteregeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. For details, see the following "Machine-pool" table. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, heterogeneous clusters are not supported, so all pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information on CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Mint , Passthrough , Manual , or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. 
Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key or keys to authenticate access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. One or more keys. For example: 15.4.11.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table: Table 15.39. Additional VMware vSphere cluster parameters Parameter Description Values platform.vsphere.vCenter The fully-qualified hostname or IP address of the vCenter server. String platform.vsphere.username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String platform.vsphere.password The password for the vCenter user name. String platform.vsphere.datacenter The name of the datacenter to use in the vCenter instance. String platform.vsphere.defaultDatastore The name of the default datastore to use for provisioning volumes. String platform.vsphere.folder Optional . The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the datacenter virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . platform.vsphere.network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String platform.vsphere.cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String platform.vsphere.apiVIP The virtual IP (VIP) address that you configured for control plane API access. An IP address, for example 128.0.0.1 . platform.vsphere.ingressVIP The virtual IP (VIP) address that you configured for cluster ingress. An IP address, for example 128.0.0.1 . 15.4.11.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table: Table 15.40. Optional VMware vSphere machine pool parameters Parameter Description Values platform.vsphere.clusterOSImage The location from which the installer downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . platform.vsphere.osDisk.diskSizeGB The size of the disk in gigabytes. Integer platform.vsphere.cpus The total number of virtual processor cores to assign a virtual machine. Integer platform.vsphere.coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value is 1 Integer platform.vsphere.memoryMB The size of a virtual machine's memory in megabytes. Integer 15.4.11.2. 
Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 7 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines. 8 The cluster name that you specified in your DNS records. 9 The vSphere cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. 10 The location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that is accessible from the bastion server. 11 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 12 Provide the contents of the certificate file that you used for your mirror registry. 13 Provide the imageContentSources section from the output of the command to mirror the repository. 15.4.11.3. 
Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.4.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. 
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Important Use the openshift-install command from the bastion hosted in the VMC environment. Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "4vYBz-Ee6gm-ymBZj-Wt5AL" INFO Time elapsed: 36m22s Note The cluster access and credential information also outputs to <installation_directory>/.openshift_install.log when an installation succeeds. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Important You must not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. 15.4.13. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 15.4.13.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 15.4.13.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 15.4.13.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 15.4.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.4.15. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Global Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 15.4.16. Creating registry storage After you install the cluster, you must create storage for the Registry Operator. 15.4.16.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. 
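A quick way to confirm this bootstrapped state, and the kind of change described next, is sketched below. It uses the same configs.imageregistry.operator.openshift.io/cluster resource that the rest of this section edits; treat it as an illustration alongside, not a replacement for, the documented steps:

# Show the current management state of the image registry Operator
# (expected to be Removed immediately after installation on vSphere).
oc get configs.imageregistry.operator.openshift.io cluster \
  -o jsonpath='{.spec.managementState}{"\n"}'

# Once registry storage has been configured, switch the Operator to Managed.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge -p '{"spec":{"managementState":"Managed"}}'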
After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 15.4.16.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 15.4.16.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 15.4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. 
If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager. After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.4.18. Next steps Customize your cluster. Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks. If necessary, you can opt out of remote health reporting. Set up your registry and configure registry storage. 15.5. Installing a cluster on VMC with user-provisioned infrastructure In OpenShift Container Platform version 4.7, you can install a cluster on VMware vSphere infrastructure that you provision by deploying it to VMware Cloud (VMC) on AWS. Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 15.5.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the Internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1. The base DNS name, such as companyname.com. If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16, respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization.
The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have Internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access to the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 15.5.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware Cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 15.5.2. vSphere prerequisites Provision block registry storage . For more information on persistent storage, see Understanding persistent storage . Review details about the OpenShift Container Platform installation and update processes. If you use a firewall, you must configure it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 15.5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision.
During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 15.5.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 15.41. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 15.5.5. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 15.5.5.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: One temporary bootstrap machine Three control plane, or master, machines At least two compute machines, which are also known as worker machines. Note The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 15.5.5.2. Network connectivity requirements All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config files from the Machine Config Server. The machines are configured with static IP addresses. No DHCP server is required. Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. 15.5.5.3. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. 
You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. 15.5.5.4. Minimum resource requirements Each cluster machine must meet the following minimum requirements: Table 15.42. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 15.5.5.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 15.5.6. Creating the user-provisioned infrastructure Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure. Prerequisites Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Provision the required load balancers. Configure the ports for your machines. Configure DNS. Ensure network connectivity. 15.5.6.1. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. Ensure that the machines have persistent IP addresses and host names. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster. Table 15.43. All machines to all machines Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . TCP/UDP 30000 - 32767 Kubernetes node port Table 15.44. All machines to control plane Protocol Port Description TCP 6443 Kubernetes API Table 15.45. 
Control plane machines to control plane machines Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Network topology requirements The infrastructure that you provision for your cluster must meet the following network topology requirements. Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Load balancers Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 15.46. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Configure the following ports on both the front and back of the load balancers: Table 15.47. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTP traffic Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. 
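Once cluster machines begin booting behind the load balancers, you can spot-check the forwarded endpoints from a host outside the cluster. This is a rough sketch only; the cluster name ocp4 and base domain example.com are the example values used later in this section, and /readyz responds only after an API server instance is actually running:

$ curl -k https://api.ocp4.example.com:6443/readyz
$ curl -k -o /dev/null -w '%{http_code}\n' https://api-int.ocp4.example.com:22623/config/master
$ curl -k -o /dev/null -w '%{http_code}\n' https://anything.apps.ocp4.example.com/

Because every name under *.apps resolves to the Ingress load balancer, the last request exercises port 443 even before any application routes exist; a 503 response from the router still confirms that traffic is being forwarded.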
NTP configuration OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . 15.5.6.2. User-provisioned DNS requirements DNS is used for name resolution and reverse name resolution. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 15.48. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Bootstrap bootstrap.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Master hosts <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster. Worker hosts <worker><n>.<cluster_name>.<base_domain>. Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Tip You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records. The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another. Example 15.9. 
Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF The following example BIND zone file shows sample PTR records for reverse name resolution. Example 15.10. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is "last octet" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF 15.5.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. 
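For example, on a FIPS-enabled host you might generate an ECDSA key instead. This is only a sketch; the key path is an arbitrary example:

$ ssh-keygen -t ecdsa -b 521 -N '' -f ~/.ssh/id_ecdsa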
Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines. 15.5.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.5.9. Manually creating the installation configuration file For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file. Prerequisites Obtain the OpenShift Container Platform installation program and the access token for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the following install-config.yaml file template and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 15.5.9.1. 
Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. 11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. 
By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 The pull secret that you obtained from OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 15.5.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
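Before you run the installation program with these settings, it can be worth confirming from the provisioning host that the proxy actually reaches the sites the installer needs. A hedged sketch, reusing the placeholder values from the example above:

$ curl -I --proxy http://<username>:<pswd>@<ip>:<port> https://quay.io
$ curl -I https://<host_in_noProxy_list>

The first request should return an HTTP status through the proxy; the second should succeed without the proxy, confirming that your noProxy entries are reachable directly.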
Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.5.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. + Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. 
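If you prefer to confirm the setting from the command line instead of reopening the file, a quick sketch:

$ grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml

The output should contain a line similar to mastersSchedulable: false .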
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. The following files are generated in the directory: 15.5.11. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware Cloud on AWS. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 15.5.12. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. 
You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. 
To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 15.5.13. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. 
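If you script VM preparation with govc instead of clicking through the vSphere Client, the configuration parameters listed above can be applied as extra configuration on the cloned VM. This is a sketch under the assumption that govc is already configured for your vCenter and that the VM name and file path match your environment:

$ govc vm.change -vm "compute-1" \
    -e "guestinfo.ignition.config.data=$(cat <installation_directory>/worker.64)" \
    -e "guestinfo.ignition.config.data.encoding=base64" \
    -e "disk.EnableUUID=TRUE"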
Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 15.5.14. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a MachineConfig object and add it to a file in the openshift directory. 
For example, name the file 98-var-partition.yaml , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers , the unit must be named var-lib-containers.mount . 5 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes if the different instance types do not have the same device name. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 15.5.15. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase.
If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example output variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 15.5.16. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.7. Download and install the new version of oc . 15.5.16.1. Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Linux Client entry and save the file. Unpack the archive: USD tar xvzf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 15.5.16.2. Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 15.5.16.3. Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.7 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 15.5.17. Creating the cluster To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Create the required infrastructure for the cluster. 
You obtained the installation program and generated the Ignition config files for your cluster. You used the Ignition config files to create RHCOS machines for your cluster. Your machines have direct Internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself. 15.5.18. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.5.19. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. 
If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 15.5.20. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
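If you are not sure whether the control plane has finished initializing, a quick spot check is to confirm that the control plane nodes are Ready and that the core control plane Operators report Available . This is a minimal sketch rather than part of the official procedure, and the Operator names shown are only a subset:
USD oc get nodes -l node-role.kubernetes.io/master
USD oc get clusteroperators etcd kube-apiserver kube-controller-manager
All of the listed Operators should show AVAILABLE as True before you continue.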
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Configure the Operators that are not available. 15.5.20.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 15.5.20.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 15.5.20.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. 
To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 15.5.20.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 15.5.20.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. 
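If you provision the block volume statically rather than through a storage class, a PersistentVolume definition along the following lines can back the claim that you create in the next step. This is a minimal sketch that uses the in-tree vSphere volume plug-in; the PV name, datastore, and VMDK path are illustrative and must refer to a VMDK that already exists in your environment:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: image-registry-block-pv 1
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  vsphereVolume:
    volumePath: "[WorkloadDatastore] kubevols/registry-vol.vmdk" 2
    fsType: ext4
1 An illustrative PV name.
2 An illustrative datastore and VMDK path; replace it with a volume that exists in your environment.
You can create the object with USD oc create -f pv.yaml (the file name is illustrative) before you define the claim.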
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 15.5.21. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... 
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation configuration documentation for more information. All the worker nodes are restarted. To monitor the process, enter the following command: USD oc get nodes -w Note If you have additional machine types such as infrastructure nodes, repeat the process for these types. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 15.5.22. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 15.5.23. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . 
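To locate the cluster in OpenShift Cluster Manager, you can look up the cluster ID that Telemetry reports. This is a minimal sketch; it assumes the default ClusterVersion object, which is named version :
USD oc get clusterversion version -o jsonpath='{.spec.clusterID}{"\n"}'
The returned UUID matches the cluster ID shown in OpenShift Cluster Manager.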
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.5.24. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 15.6. Installing a cluster on VMC with user-provisioned infrastructure and network customizations In OpenShift Container Platform version 4.7, you can install a cluster on your VMware vSphere instance using infrastructure you provision with customized network configuration options by deploying it to VMware Cloud (VMC) on AWS . Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automates the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 15.6.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Configure the following firewall rules: An ANY:ANY firewall rule between the OpenShift Container Platform compute network and the Internet. This is used by nodes and applications to download container images. An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . 
If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any another Linux-based host; it must have Internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere. Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 15.6.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 15.6.2. vSphere prerequisites Provision block registry storage . For more information on persistent storage, see Understanding persistent storage . Review details about the OpenShift Container Platform installation and update processes. If you use a firewall, you must configure it to access Red Hat Insights . 15.6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. 
Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 15.6.4. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 15.49. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 15.6.5. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 15.6.5.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: One temporary bootstrap machine Three control plane, or master, machines At least two compute machines, which are also known as worker machines. Note The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 15.6.5.2. Network connectivity requirements All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config files from the Machine Config Server. The machines are configured with static IP addresses. No DHCP server is required. 
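As an illustration of what a static network configuration looks like, RHCOS accepts dracut-style ip= and nameserver= arguments at first boot. The values below are illustrative only, and the mechanism for passing them, for example the guestinfo.afterburn.initrd.network-kargs vApp property on vSphere, depends on how you boot the machines:
ip=192.168.1.97::192.168.1.1:255.255.255.0:master0.ocp4.example.com:ens192:none
nameserver=192.168.1.5
The format is ip=<ip>::<gateway>:<netmask>:<hostname>:<interface>:none .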
Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. 15.6.5.3. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. 15.6.5.4. Minimum resource requirements Each cluster machine must meet the following minimum requirements: Table 15.50. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 15.6.5.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 15.6.6. Creating the user-provisioned infrastructure Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure. Prerequisites Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Provision the required load balancers. Configure the ports for your machines. Configure DNS. Ensure network connectivity. 15.6.6.1. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. Ensure that the machines have persistent IP addresses and host names. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster. Table 15.51. All machines to all machines Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 
10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . TCP/UDP 30000 - 32767 Kubernetes node port Table 15.52. All machines to control plane Protocol Port Description TCP 6443 Kubernetes API Table 15.53. Control plane machines to control plane machines Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Network topology requirements The infrastructure that you provision for your cluster must meet the following network topology requirements. Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Load balancers Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 15.54. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Configure the following ports on both the front and back of the load balancers: Table 15.55. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress router pods, compute, or worker, by default. 
X X HTTP traffic Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. NTP configuration OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . 15.6.6.2. User-provisioned DNS requirements DNS is used for name resolution and reverse name resolution. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 15.56. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Bootstrap bootstrap.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Master hosts <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster. Worker hosts <worker><n>.<cluster_name>.<base_domain>. Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. 
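Once the records exist, a quick spot check from a host on the cluster network confirms forward and wildcard resolution before you begin the installation. This is a minimal sketch that assumes the sample cluster name ocp4 , base domain example.com , and load balancer address 192.168.1.5 used in the zone file examples that follow:
USD dig +short api.ocp4.example.com api-int.ocp4.example.com test-route.apps.ocp4.example.com
Each query should return the load balancer address, 192.168.1.5 in this example.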
Tip You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records. The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another. Example 15.11. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF The following example BIND zone file shows sample PTR records for reverse name resolution. Example 15.12. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is "last octet" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF 15.6.7. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
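For example, on a FIPS-enabled installation host you might generate an RSA key instead. This is a sketch only; the path and file name are illustrative placeholders:
USD ssh-keygen -t rsa -b 4096 -N '' -f <path>/<file_name>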
Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 15.6.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program for your operating system, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 15.6.9. Manually creating the installation configuration file For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file. Prerequisites Obtain the OpenShift Container Platform installation program and the access token for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the following install-config.yaml file template and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. 
Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 15.6.9.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. 11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. 
If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 The pull secret that you obtained from OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 15.6.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. 
The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.6.10. Specifying advanced network configuration You can use advanced configuration customization to integrate your cluster into your existing network environment by specifying additional configuration for your cluster network provider. You can specify advanced network configuration only before you install the cluster. Important Modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites Create the install-config.yaml file and complete any modifications to it. Create the Ignition config files for your cluster. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and specify the advanced network configuration for your cluster, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Remove the Kubernetes manifest files that define the control plane machines and compute machineSets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 15.6.11. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . 
The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 15.6.11.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 15.57. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 This value is read-only and specified in the install-config.yaml file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 This value is read-only and specified in the install-config.yaml file. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 15.58. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 15.59. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU.
If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 15.60. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expected it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . This value cannot be changed after cluster installation. genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. This value cannot be changed after cluster installation. Example OVN-Kubernetes configuration defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 15.61. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . 
The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 15.6.12. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory: 15.6.13. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware Cloud on AWS. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 15.6.14. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use. Prerequisites You have obtained the Ignition config files for your cluster. 
You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. 
On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 15.6.15. 
Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 15.6.16. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . 
For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers , the unit must be named var-lib-containers.mount . 5 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. 
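After the cluster is installed, you can verify that the separate /var partition was created on a node. The following commands are a verification sketch only, not part of the documented procedure; <node_name> is a placeholder for a node in your own cluster, and the oc debug pattern assumes that you are logged in with cluster administrator privileges:

oc debug node/<node_name> -- chroot /host lsblk -o NAME,SIZE,PARTLABEL,MOUNTPOINT
oc debug node/<node_name> -- chroot /host df -h /var

The lsblk output should list a partition with the var label mounted at /var , and df should report approximately the size that you set in sizeMiB .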
Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 15.6.17. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 15.6.18. Creating the cluster To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Create the required infrastructure for the cluster. You obtained the installation program and generated the Ignition config files for your cluster. You used the Ignition config files to create RHCOS machines for your cluster. Your machines have direct Internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines.
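If the wait-for bootstrap-complete command appears to stall, you can also watch bootstrap progress directly on the bootstrap machine. This is a troubleshooting sketch, not a required step; it assumes that the SSH key you provided during installation lets you log in as the core user and that <bootstrap_ip> is the address of your bootstrap machine:

ssh core@<bootstrap_ip> journalctl -b -f -u bootkube.service

The bootkube.service logs show the temporary control plane being stood up and typically stop progressing when a prerequisite, such as DNS records or the load balancer configuration, is incorrect.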
After bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself. 15.6.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 15.6.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. 
The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 15.6.21. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Configure the Operators that are not available. 15.6.21.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . Note The Prometheus console provides an ImageRegistryRemoved alert, for example: "Image Registry has been removed. ImageStreamTags , BuildConfigs and DeploymentConfigs which reference ImageStreamTags may not work as expected. Please configure storage and update the config to Managed state by editing configs.imageregistry.operator.openshift.io." 15.6.21.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 15.6.21.2.1. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
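As noted above, on platforms without shareable object storage the Image Registry Operator starts with managementState set to Removed . The following is a minimal sketch of switching it to Managed , assuming the default configuration object named cluster :

oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'

Apply this change only after you configure registry storage, for example by following the block storage procedure below.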
Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 15.6.22. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation configuration documentation for more information. All the worker nodes are restarted. To monitor the process, enter the following command: USD oc get nodes -w Note If you have additional machine types such as infrastructure nodes, repeat the process for these types. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 15.6.23. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 15.6.24. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 15.6.25. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. 15.7. Installing a cluster on VMC in a restricted network with user-provisioned infrastructure In OpenShift Container Platform version 4.7, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network by deploying it to VMware Cloud (VMC) on AWS .
Once you configure your VMC environment for OpenShift Container Platform deployment, you use the OpenShift Container Platform installation program from the bastion management host, co-located in the VMC environment. The installation program and control plane automate the process of deploying and managing the resources needed for the OpenShift Container Platform cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 15.7.1. Setting up VMC for vSphere You can install OpenShift Container Platform on VMware Cloud (VMC) on AWS hosted vSphere clusters to enable applications to be deployed and managed both on-premise and off-premise, across the hybrid cloud. You must configure several options in your VMC environment prior to installing OpenShift Container Platform on VMware vSphere. Ensure your VMC environment has the following prerequisites: Create a non-exclusive, DHCP-enabled, NSX-T network segment and subnet. Other virtual machines (VMs) can be hosted on the subnet, but at least eight IP addresses must be available for the OpenShift Container Platform deployment. Configure the following firewall rules: An ANY:ANY firewall rule between the installation host and the software-defined data center (SDDC) management network on port 443. This allows you to upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA during deployment. An HTTPS firewall rule between the OpenShift Container Platform compute network and vCenter. This connection allows OpenShift Container Platform to communicate with vCenter for provisioning and managing nodes, persistent volume claims (PVCs), and other resources. You must have the following information to deploy OpenShift Container Platform: The OpenShift Container Platform cluster name, such as vmc-prod-1 . The base DNS name, such as companyname.com . If not using the default, the pod network CIDR and services network CIDR must be identified, which are set by default to 10.128.0.0/14 and 172.30.0.0/16 , respectively. These CIDRs are used for pod-to-pod and pod-to-service communication and are not accessible externally; however, they must not overlap with existing subnets in your organization. The following vCenter information: vCenter hostname, username, and password Datacenter name, such as SDDC-Datacenter Cluster name, such as Cluster-1 Network name Datastore name, such as WorkloadDatastore Note It is recommended to move your vSphere cluster to the VMC Compute-ResourcePool resource pool after your cluster installation is finished. A Linux-based host deployed to VMC as a bastion. The bastion host can be Red Hat Enterprise Linux (RHEL) or any other Linux-based host; it must have Internet connectivity and the ability to upload an OVA to the ESXi hosts. Download and install the OpenShift CLI tools to the bastion host. The openshift-install installation program The OpenShift CLI ( oc ) tool Note You cannot use the VMware NSX Container Plugin for Kubernetes (NCP), and NSX is not used as the OpenShift SDN. The version of NSX currently available with VMC is incompatible with the version of NCP certified with OpenShift Container Platform. However, the NSX DHCP service is used for virtual machine IP management with the full-stack automated OpenShift Container Platform deployment and with nodes provisioned, either manually or automatically, by the Machine API integration with vSphere.
Additionally, NSX firewall rules are created to enable access with the OpenShift Container Platform cluster and between the bastion host and the VMC vSphere hosts. 15.7.1.1. VMC Sizer tool VMware Cloud on AWS is built on top of AWS bare metal infrastructure; this is the same bare metal infrastructure which runs AWS native services. When a VMware cloud on AWS software-defined data center (SDDC) is deployed, you consume these physical server nodes and run the VMware ESXi hypervisor in a single tenant fashion. This means the physical infrastructure is not accessible to anyone else using VMC. It is important to consider how many physical hosts you will need to host your virtual infrastructure. To determine this, VMware provides the VMC on AWS Sizer . With this tool, you can define the resources you intend to host on VMC: Types of workloads Total number of virtual machines Specification information such as: Storage requirements vCPUs vRAM Overcommit ratios With these details, the sizer tool can generate a report, based on VMware best practices, and recommend your cluster configuration and the number of hosts you will need. 15.7.2. vSphere prerequisites Create a registry on your mirror host and obtain the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. Provision block registry storage . For more information on persistent storage, see Understanding persistent storage . Review details about the OpenShift Container Platform installation and update processes. If you use a firewall and plan to use telemetry, you must configure the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 15.7.3. About installations in restricted networks In OpenShift Container Platform 4.7, you can perform an installation that does not require an active connection to the Internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less Internet access for an installation on bare metal hardware or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift Container Platform registry and contains the installation media. You can create this registry on a mirror host, which can access both the Internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 15.7.3.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 15.7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.7, you require access to the Internet to obtain the images that are necessary to install your cluster. You must have Internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct Internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the content that is required and use it to populate a mirror registry with the packages that you need to install a cluster and generate the installation program. With some installation types, the environment that you install your cluster in will not require Internet access. Before you update the cluster, you update the content of the mirror registry. 15.7.5. VMware vSphere infrastructure requirements You must install the OpenShift Container Platform cluster on a VMware vSphere version 6 or 7 instance that meets the requirements for the components that you use. Table 15.62. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 6.5 and later with HW version 13 This version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. See the Red Hat Enterprise Linux 8 supported hypervisors list . Storage with in-tree drivers vSphere 6.5 and later This plug-in creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. If you use a vSphere version 6.5 instance, consider upgrading to 6.7U3 or 7.0 before you install OpenShift Container Platform. Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Important Virtual machines (VMs) configured to use virtual hardware version 14 or greater might result in a failed installation. It is recommended to configure VMs with virtual hardware version 13. This is a known issue that is being addressed in BZ#1935539 . 15.7.6. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. 15.7.6.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: One temporary bootstrap machine Three control plane, or master, machines At least two compute machines, which are also known as worker machines. Note The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. 
Important To improve high availability of your cluster, distribute the control plane machines over different z/VM instances on at least two physical machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 7.9. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 15.7.6.2. Network connectivity requirements All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config files from the Machine Config Server. The machines are configured with static IP addresses. No DHCP server is required. Additionally, each OpenShift Container Platform node in the cluster must have access to a Network Time Protocol (NTP) server. 15.7.6.3. IBM Z network connectivity requirements To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: A direct-attached OSA or RoCE network adapter A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. 15.7.6.4. Minimum resource requirements Each cluster machine must meet the following minimum requirements: Table 15.63. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 15.7.6.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 15.7.7. Creating the user-provisioned infrastructure Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure. Prerequisites Review the OpenShift Container Platform 4.x Tested Integrations page before you create the supporting infrastructure for your cluster. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Provision the required load balancers. Configure the ports for your machines. Configure DNS. Ensure network connectivity. 15.7.7.1. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require network in initramfs during boot to fetch Ignition config from the machine config server. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. Ensure that the machines have persistent IP addresses and host names. The Kubernetes API server must be able to resolve the node names of the cluster machines. 
If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. You must configure the network connectivity between machines to allow cluster components to communicate. Each machine must be able to resolve the host names of all other machines in the cluster. Table 15.64. All machines to all machines Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN and Geneve 6081 VXLAN and Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . TCP/UDP 30000 - 32767 Kubernetes node port Table 15.65. All machines to control plane Protocol Port Description TCP 6443 Kubernetes API Table 15.66. Control plane machines to control plane machines Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Network topology requirements The infrastructure that you provision for your cluster must meet the following network topology requirements. Important OpenShift Container Platform requires all nodes to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Load balancers Before you install OpenShift Container Platform, you must provision two load balancers that meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the API routes. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configure the following ports on both the front and back of the load balancers: Table 15.67. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an Ingress point for application traffic flowing in from outside the cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP, SSL Passthrough, or SSL Bridge mode. 
If you use SSL Bridge mode, you must enable Server Name Indication (SNI) for the Ingress routes. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Configure the following ports on both the front and back of the load balancers: Table 15.68. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress router pods, compute, or worker, by default. X X HTTP traffic Tip If the true IP address of the client can be seen by the load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Note A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. You must configure the Ingress router after the control plane initializes. NTP configuration OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . 15.7.7.2. User-provisioned DNS requirements DNS is used for name resolution and reverse name resolution. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the host name for all the nodes. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for an OpenShift Container Platform cluster that uses user-provisioned infrastructure. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 15.69. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the load balancer for the control plane machines. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the host names that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. Add a wildcard DNS A/AAAA or CNAME record that refers to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Bootstrap bootstrap.<cluster_name>.<base_domain>. 
Add a DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Master hosts <master><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes (also known as the master nodes). These records must be resolvable by the nodes within the cluster. Worker hosts <worker><n>.<cluster_name>.<base_domain>. Add DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Tip You can use the nslookup <hostname> command to verify name resolution. You can use the dig -x <ip_address> command to verify reverse name resolution for the PTR records. The following example of a BIND zone file shows sample A records for name resolution. The purpose of the example is to show the records that are needed. The example is not meant to provide advice for choosing one name resolution service over another. Example 15.13. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF The following example BIND zone file shows sample PTR records for reverse name resolution. Example 15.14. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is "last octet" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF 15.7.8. Generating an SSH private key and adding it to the agent If you want to perform installation debugging or disaster recovery on your cluster, you must provide an SSH key to both your ssh-agent and the installation program. You can use this key to access the bootstrap machine in a public cluster to troubleshoot installation issues. Note In a production environment, you require disaster recovery and debugging. Important Do not skip this procedure in production environments where disaster recovery and debugging is required. You can use this key to SSH into the master nodes as the user core . When you deploy the cluster, the key is added to the core user's ~/.ssh/authorized_keys list. Procedure If you do not have an SSH key that is configured for password-less authentication on your computer, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' \ -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_rsa , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Running this command generates an SSH key that does not require a password in the location that you specified. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS Validated / Modules in Process cryptographic libraries on the x86_64 architecture, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. Start the ssh-agent process as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_rsa steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide this key to your cluster's machines. 15.7.9. Manually creating the installation configuration file For installations of OpenShift Container Platform that use user-provisioned infrastructure, you manually generate your installation configuration file. Prerequisites Obtain the OpenShift Container Platform installation program and the access token for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the following install-config.yaml file template and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 15.7.9.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 14 fips: false 15 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 3 6 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Your machines must use at least 8 CPUs and 32 GB of RAM if you disable simultaneous multithreading. 4 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 7 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 The fully-qualified hostname or IP address of the vCenter server. 10 The name of the user for accessing the server. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. 11 The password associated with the vSphere user. 12 The vSphere datacenter. 13 The default vSphere datastore to use. 14 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . 
If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster, omit this parameter. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. 16 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 18 Provide the contents of the certificate file that you used for your mirror registry. 19 Provide the imageContentSources section from the output of the command to mirror the repository. 15.7.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the Internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- ... 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Note The installation program does not support the proxy readinessEndpoints field. Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 15.7.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to make its machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to create the cluster. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines and compute machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. 
You can preserve the machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. + Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. The following files are generated in the directory: 15.7.11. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware Cloud on AWS. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 15.7.12. Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere Before you install a cluster that contains user-provisioned infrastructure on VMware vSphere, you must create RHCOS machines on vSphere hosts for it to use. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. 
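Because the bootstrap machine fetches the hosted bootstrap Ignition config over the network at first boot, it is worth confirming that the URL is reachable before you provision the VM. This is an optional, hedged check; <bootstrap_ignition_config_url> is the same placeholder used above, not a value the installation program requires:
$ curl -sI <bootstrap_ignition_config_url>
Check that the response contains an HTTP 200 status line and a Content-Length value that matches the size of the bootstrap.ign file that you uploaded.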
Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your datacenter to store your VMs. Click the VMs and Templates view. Right-click the name of your datacenter. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that machine sets can apply configurations to. After the template deploys, deploy a VM for a machine in the cluster. 
Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . On the Customize hardware tab, click VM Options Advanced . Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before booting a VM from an OVA in vSphere: USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Optional: In the event of cluster performance issues, from the Latency Sensitivity list, select High . Ensure that your VM's CPU and memory reservation have the following values: Memory reservation value must be equal to its configured memory size. CPU reservation value must be at least the number of low latency virtual CPUs multiplied by the measured physical CPU speed. Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the configuration and power on the VM. Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 15.7.13. Creating more Red Hat Enterprise Linux CoreOS (RHCOS) machines in vSphere You can create more compute machines for your cluster that uses user-provisioned infrastructure on VMware vSphere. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure After the template deploys, deploy a VM for a machine in the cluster. Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your datacenter. Optional: On the Select storage tab, customize the storage options. On the Select clone options , select Customize this virtual machine's hardware . 
On the Customize hardware tab, click VM Options Advanced . From the Latency Sensitivity list, select High . Click Edit Configuration , and on the Configuration Parameters window, click Add Configuration Params . Define the following parameter names and values: guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Also, make sure to select the correct network under Add network adapter if there are multiple networks available. Complete the configuration and power on the VM. Continue to create more compute machines for your cluster. 15.7.14. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important Kubernetes supports only two filesystem partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. 
Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a MachineConfig object and add it to a file in the openshift directory. For example, name the file 98-var-partition.yaml , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The name of the mount unit must match the directory specified in the Where= directive. For example, for a filesystem mounted on /var/lib/containers , the unit must be named var-lib-containers.mount . 5 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 15.7.15. Updating the bootloader using bootupd To update the bootloader by using bootupd , you must either install bootupd on RHCOS machines manually or provide a machine config with the enabled systemd unit. Unlike grubby or other bootloader tools, bootupd does not manage kernel space configuration such as passing kernel arguments. After you have installed bootupd , you can manage it remotely from the OpenShift Container Platform cluster. Note It is recommended that you use bootupd only on bare metal or virtualized hypervisor installations, such as for protection against the BootHole vulnerability. 
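After bootupd is installed on a node, one way to inspect its status remotely is through a debug pod. This is a hedged sketch rather than a documented requirement; <node_name> is a placeholder for any cluster node:
$ oc debug node/<node_name> -- chroot /host bootupctl status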
Manual install method You can manually install bootupd by using the bootupctl command-line tool. Inspect the system status: # bootupctl status Example output Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version RHCOS images created without bootupd installed on them require an explicit adoption phase. If the system status is Adoptable , perform the adoption: # bootupctl adopt-and-update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 If an update is available, apply the update so that the changes take effect on the next reboot: # bootupctl update Example output Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Machine config method Another way to enable bootupd is by providing a machine config. Provide a machine config file with the enabled systemd unit, as shown in the following example: Example variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target 15.7.16. Creating the cluster To create the OpenShift Container Platform cluster, you wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Create the required infrastructure for the cluster. You obtained the installation program and generated the Ignition config files for your cluster. You used the Ignition config files to create RHCOS machines for your cluster. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the machine itself. 15.7.17. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully by using the exported configuration: USD oc whoami Example output system:admin 15.7.18.
Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. Once the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... 
If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 15.7.19. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Configure the Operators that are not available. 15.7.19.1. Disabling the default OperatorHub sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration Cluster Settings Global Configuration OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources. 15.7.19.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 15.7.19.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Container Storage. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When using shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 15.7.19.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. 
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 15.7.19.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure To set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 Creating a custom PVC allows you to leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring registry storage for VMware vSphere . 15.7.20. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation configuration documentation for more information. All the worker nodes are restarted. To monitor the process, enter the following command: USD oc get nodes -w Note If you have additional machine types such as infrastructure nodes, repeat the process for these types. Register your cluster on the Cluster registration page. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 15.7.21. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 15.7.22. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.7, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 15.7.23. Next steps Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
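For example, to take a first look at those events, you can list the events in the namespace where the detector runs (a hedged, illustrative sketch; it assumes the detector runs in the default openshift-cluster-storage-operator namespace): USD oc get event -n openshift-cluster-storage-operator Events reported by the vSphere Problem Detector in that output point at permission or storage configuration problems that are worth investigating before you rely on the cluster.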
15.8. Uninstalling a cluster on VMC You can remove a cluster installed on VMware vSphere infrastructure that you deployed to VMware Cloud (VMC) on AWS by using installer-provisioned infrastructure. 15.8.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites Have a copy of the installation program that you used to deploy the cluster. Have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OpenShiftSDN serviceNetwork: - 172.30.0.0/16 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: vsphere: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.example.com/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.example.com/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"sshKey: <key1> <key2> <key3>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: vsphere: 4 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: vsphere: 7 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 8 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder network: VM_Network cluster: vsphere_cluster_name 9 apiVIP: api_vip ingressVIP: ingress_vip clusterOSImage: http://mirror.example.com/images/rhcos-48.83.202103221318-0-vmware.x86_64.ova 10 fips: false pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 11 sshKey: 'ssh-ed25519 AAAA...' additionalTrustBundle: | 12 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 13 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"4vYBz-Ee6gm-ymBZj-Wt5AL\" INFO Time elapsed: 36m22s",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is \"last octet\" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"tar xvzf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc get nodes -w",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is \"last octet\" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc get nodes -w",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1 IN A 192.168.1.5 smtp IN A 192.168.1.5 ; helper IN A 192.168.1.5 helper.ocp4 IN A 192.168.1.5 ; ; The api identifies the IP of your load balancer. api.ocp4 IN A 192.168.1.5 api-int.ocp4 IN A 192.168.1.5 ; ; The wildcard also identifies the load balancer. *.apps.ocp4 IN A 192.168.1.5 ; ; Create an entry for the bootstrap host. bootstrap.ocp4 IN A 192.168.1.96 ; ; Create entries for the master hosts. master0.ocp4 IN A 192.168.1.97 master1.ocp4 IN A 192.168.1.98 master2.ocp4 IN A 192.168.1.99 ; ; Create entries for the worker hosts. worker0.ocp4 IN A 192.168.1.11 worker1.ocp4 IN A 192.168.1.7 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; ; The syntax is \"last octet\" and the host must have an FQDN ; with a trailing dot. 97 IN PTR master0.ocp4.example.com. 98 IN PTR master1.ocp4.example.com. 99 IN PTR master2.ocp4.example.com. ; 96 IN PTR bootstrap.ocp4.example.com. ; 5 IN PTR api.ocp4.example.com. 5 IN PTR api-int.ocp4.example.com. ; 11 IN PTR worker0.ocp4.example.com. 7 IN PTR worker1.ocp4.example.com. ; ;EOF",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: - hyperthreading: Enabled 2 3 name: worker replicas: 0 4 controlPlane: hyperthreading: Enabled 5 6 name: master replicas: 3 7 metadata: name: test 8 platform: vsphere: vcenter: your.vcenter.server 9 username: username 10 password: password 11 datacenter: datacenter 12 defaultDatastore: datastore 13 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition spec: config: ignition: version: 3.2.0 storage: disks: - device: /dev/<device_name> 1 partitions: - label: var startMiB: <partition_start_offset> 2 sizeMiB: <partition_size> 3 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs systemd: units: - name: var.mount 4 enabled: true contents: | [Unit] Before=local-fs.target [Mount] What=/dev/disk/by-partlabel/var Where=/var Options=defaults,prjquota 5 [Install] WantedBy=local-fs.target",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"# bootupctl status",
"Component EFI Installed: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64 Update: At latest version",
"# bootupctl adopt-and-update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"# bootupctl update",
"Updated: grub2-efi-x64-1:2.04-31.fc33.x86_64,shim-x64-15-8.x86_64",
"variant: rhcos version: 1.1.0 systemd: units: - name: custom-bootupd-auto.service enabled: true contents: | [Unit] Description=Bootupd automatic update [Service] ExecStart=/usr/bin/bootupctl update RemainAfterExit=yes [Install] WantedBy=multi-user.target",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.20.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.7.0 True False False 3h56m baremetal 4.7.0 True False False 29h cloud-credential 4.7.0 True False False 29h cluster-autoscaler 4.7.0 True False False 29h config-operator 4.7.0 True False False 6h39m console 4.7.0 True False False 3h59m csi-snapshot-controller 4.7.0 True False False 4h12m dns 4.7.0 True False False 4h15m etcd 4.7.0 True False False 29h image-registry 4.7.0 True False False 3h59m ingress 4.7.0 True False False 4h30m insights 4.7.0 True False False 29h kube-apiserver 4.7.0 True False False 29h kube-controller-manager 4.7.0 True False False 29h kube-scheduler 4.7.0 True False False 29h kube-storage-version-migrator 4.7.0 True False False 4h2m machine-api 4.7.0 True False False 29h machine-approver 4.7.0 True False False 6h34m machine-config 4.7.0 True False False 3h56m marketplace 4.7.0 True False False 4h2m monitoring 4.7.0 True False False 6h31m network 4.7.0 True False False 29h node-tuning 4.7.0 True False False 4h30m openshift-apiserver 4.7.0 True False False 3h56m openshift-controller-manager 4.7.0 True False False 4h36m openshift-samples 4.7.0 True False False 4h30m operator-lifecycle-manager 4.7.0 True False False 29h operator-lifecycle-manager-catalog 4.7.0 True False False 29h operator-lifecycle-manager-packageserver 4.7.0 True False False 3h59m service-ca 4.7.0 True False False 29h storage 4.7.0 True False False 4h30m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc get nodes -w",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/installing/installing-on-vmc |
Chapter 9. Gathering the observability data from multiple clusters | Chapter 9. Gathering the observability data from multiple clusters For a multicluster configuration, you can create one OpenTelemetry Collector instance in each one of the remote clusters and then forward all the telemetry data to one OpenTelemetry Collector instance. Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack instance is deployed on the cluster. The following certificates are mounted: Issuer, self-signed certificate, CA issuer, client and server certificates. To create any of these certificates, see step 1. Procedure Mount the following certificates in the OpenTelemetry Collector instance, skipping any certificates that are already mounted. An Issuer to generate the certificates by using the cert-manager Operator for Red Hat OpenShift. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {} A self-signed certificate. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io A CA issuer. apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret The client and server certificates. apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 1 issuerRef: name: test-ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - "otel.observability.svc.cluster.local" 2 issuerRef: name: test-ca-issuer 1 List of exact DNS names to be mapped to a solver in the server OpenTelemetry Collector instance. 2 List of exact DNS names to be mapped to a solver in the client OpenTelemetry Collector instance. Create a service account for the OpenTelemetry Collector instance. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespace resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and infrastructures/status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the edge clusters.
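Before applying the collector custom resources shown in the next examples, you can optionally confirm that the objects created so far exist (a hedged verification sketch; the resource names and the <namespace> placeholder follow the examples above and are not part of the official procedure): USD oc get secret ca-secret server-tls client-tls -n <namespace> USD oc get clusterrolebinding otel-collector If the secrets are listed, cert-manager has issued the certificates and the TLS material is ready to be mounted into the collectors.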
Example OpenTelemetryCollector custom resource for the edge clusters apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector exporter is configured to export OTLP HTTP and points to the OpenTelemetry Collector from the central cluster. Create the YAML file to define the OpenTelemetryCollector custom resource (CR) in the central cluster. Example OpenTelemetryCollector custom resource for the central cluster apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: "deployment" ingress: type: route route: termination: "passthrough" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: "tempo-<simplest>-distributor:4317" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs 1 The Collector receiver requires the certificates listed in the first step. 2 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, which in this example is "tempo-simplest-distributor:4317" and already created. | [
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: selfsigned-issuer spec: selfSigned: {}",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: ca spec: isCA: true commonName: ca subject: organizations: - <your_organization_name> organizationalUnits: - Widgets secretName: ca-secret privateKey: algorithm: ECDSA size: 256 issuerRef: name: selfsigned-issuer kind: Issuer group: cert-manager.io",
"apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: test-ca-issuer spec: ca: secretName: ca-secret",
"apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: server spec: secretName: server-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 1 issuerRef: name: ca-issuer --- apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: client spec: secretName: client-tls isCA: false usages: - server auth - client auth dnsNames: - \"otel.observability.svc.cluster.local\" 2 issuerRef: name: ca-issuer",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-<example> roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: otel-collector-<example> spec: mode: daemonset serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} opencensus: otlp: protocols: grpc: {} http: {} zipkin: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlphttp: endpoint: https://observability-cluster.com:443 1 tls: insecure: false cert_file: /certs/server.crt key_file: /certs/server.key ca_file: /certs/ca.crt service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs",
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otlp-receiver namespace: observability spec: mode: \"deployment\" ingress: type: route route: termination: \"passthrough\" config: receivers: otlp: protocols: http: tls: 1 cert_file: /certs/server.crt key_file: /certs/server.key client_ca_file: /certs/ca.crt exporters: otlp: endpoint: \"tempo-<simplest>-distributor:4317\" 2 tls: insecure: true service: pipelines: traces: receivers: [otlp] processors: [] exporters: [otlp] volumes: - name: otel-certs secret: name: otel-certs volumeMounts: - name: otel-certs mountPath: /certs"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/red_hat_build_of_opentelemetry/otel-gathering-observability-data-from-multiple-clusters |
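The multicluster procedure above relies on mutual TLS between the edge collectors and the central collector's OTLP/HTTPS endpoint. The following minimal sketch is not part of the product documentation; it only checks that the client certificate pair issued in step 1 completes a mutual-TLS handshake against the central collector's route. The host name and file names are assumptions taken from the example resources and must be adapted, and the handshake succeeds only if the route host is listed in the server certificate's DNS names.
check_mtls_handshake.py (illustrative sketch)
#!/usr/bin/env python3
"""Sketch: verify the client certificate pair against the central collector's
OTLP/HTTPS endpoint with a mutual-TLS handshake. Host and file names are
assumptions based on the example CRs above; adjust them to your environment."""
import socket
import ssl

HOST = "observability-cluster.com"   # route host of the central collector (assumed)
PORT = 443
CA_FILE = "ca.crt"                   # CA certificate issued from the ca-secret
CLIENT_CERT = "client.crt"           # client certificate from the client-tls secret
CLIENT_KEY = "client.key"

def check_mtls_handshake() -> None:
    context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile=CA_FILE)
    # Present the client certificate so the server side of the mutual TLS setup can verify us.
    context.load_cert_chain(certfile=CLIENT_CERT, keyfile=CLIENT_KEY)
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()
            print("TLS version:", tls.version())
            print("Server subject:", dict(item[0] for item in cert["subject"]))

if __name__ == "__main__":
    check_mtls_handshake()
If the handshake fails with a certificate verification error, compare the dnsNames in the server Certificate resource with the route host before troubleshooting the collectors themselves.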
Chapter 6. Resolved issues | Chapter 6. Resolved issues Red Hat 3scale API Management 2.15 resolves the following issues: Table 6.1. Resolved issues Issue number Description THREESCALE-1752 Messages counter in the Dashboard is not correct THREESCALE-5225 Upstream cannot be null error in APIcast logs THREESCALE-7491 Member user in Analytics group getting Access Denied when accessing the Backend analytics THREESCALE-7570 Unable to delete application key when it has a dot in the key name THREESCALE-8383 Need to redeploy Sidekiq when updating Email Templates THREESCALE-9437 Internal Error occurs when changing password for users in member role THREESCALE-9537 Batcher Policy storage ran out of memory THREESCALE-9711 Provide a way to get 3scale patch version details from 3scale installation THREESCALE-10065 Messages deleted from Sent mailbox are deleted immediately instead of being moved to Trash THREESCALE-10147 Finalized option missing in dropdown for Invoice API THREESCALE-10212 Broken "Buyer Account approved" email template THREESCALE-10238 The Operator reconciles a Product created with the ProductCR when mapping rules are duplicated THREESCALE-10259 Special character "#" is breaking the Downalod CSV functionality on the Analytics page THREESCALE-10308 JWT claim check policy fails to match resource and skips the check THREESCALE-10582 Upstream timeouts don't work with Camel Service THREESCALE-10752 Account edit form in admin portal incorrectly rendering country field THREESCALE-10792 The operator doesn't delete the objects in the correct order THREESCALE-10934 Batcher policy does not accept the same chars specified in Porta regex for app_id, app_key & user_key THREESCALE-11183 Backend CR can't be created when using wss protocol | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/release_notes_for_red_hat_3scale_api_management_2.15_on-premises/resolved_issues |
Chapter 1. Quarkus CXF security guide | Chapter 1. Quarkus CXF security guide This chapter provides information about security when working with Quarkus CXF extensions. 1.1. Security guide The security guide documents various security related aspects of Quarkus CXF: SSL, TLS and HTTPS Authentication and authorization Authentication enforced by WS-SecurityPolicy 1.1.1. SSL, TLS and HTTPS This section documents various use cases related to SSL, TLS and HTTPS. Note The sample code snippets used in this section come from the WS-SecurityPolicy integration test in the source tree of Quarkus CXF 1.1.1.1. Client SSL configuration If your client is going to communicate with a server whose SSL certificate is not trusted by the client's operating system, then you need to set up a custom trust store for your client. Tools like openssl or Java keytool are commonly used for creating and maintaining truststores. We have examples for both tools in the Quarkus CXF source tree: Create truststore with Java 'keytool' (wrapped by a Maven plugin) Create truststore with openssl Once you have prepared the trust store, you need to configure your client to use it. 1.1.1.1.1. Set the client trust store in application.properties This is the easiest way to set the client trust store. The key role is played by the following properties: quarkus.cxf.client."client-name".trust-store quarkus.cxf.client."client-name".trust-store-type quarkus.cxf.client."client-name".trust-store-password Here is an example: application.properties # Client side SSL quarkus.cxf.client.hello.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/hello quarkus.cxf.client.hello.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService 1 quarkus.cxf.client.hello.trust-store-type = pkcs12 2 quarkus.cxf.client.hello.trust-store = client-truststore.pkcs12 quarkus.cxf.client.hello.trust-store-password = client-truststore-password 1 pkcs12 and jks are two commonly used keystore formats. PKCS12 is the default Java keystore format since Java 9. We recommend using PKCS12 rather than JKS, because it offers stronger cryptographic algorithms, it is extensible, standardized, language-neutral and widely supported. 2 The referenced client-truststore.pkcs12 file has to be available either in the classpath or in the file system. 1.1.1.2. Server SSL configuration To make your services available over the HTTPS protocol, you need to set up server keystore in the first place. The server SSL configuration is driven by Vert.x, the HTTP layer of Quarkus. {link-quarkus-docs-base}/http-reference#ssl[Quarkus HTTP guide] provides the information about the configuration options. Here is a basic example: application.properties # Server side SSL quarkus.tls.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.key-store.p12.password = localhost-keystore-password quarkus.tls.key-store.p12.alias = localhost quarkus.tls.key-store.p12.alias-password = localhost-keystore-password 1.1.1.3. Mutual TLS (mTLS) authentication So far, we have explained the simple or single-sided case where only the server proves its identity through an SSL certificate and the client has to be set up to trust that certificate. Mutual TLS authentication goes by letting also the client prove its identity using the same means of public key cryptography. 
Hence, for the Mutual TLS (mTLS) authentication, in addition to setting up the server keystore and client truststore as described above, you need to set up the keystore on the client side and the truststore on the server side. The tools for creating and maintaining the stores are the same and the configuration properties to use are pretty much analogous to the ones used in the Simple TLS case. The mTLS integration test in the Quarkus CXF source tree can serve as a good starting point. The keystores and truststores are created with openssl (or alternatively with Java keytool ). Here is the application.properties file: application.properties # Server keystore for Simple TLS quarkus.tls.localhost-pkcs12.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.localhost-pkcs12.key-store.p12.password = localhost-keystore-password quarkus.tls.localhost-pkcs12.key-store.p12.alias = localhost quarkus.tls.localhost-pkcs12.key-store.p12.alias-password = localhost-keystore-password # Server truststore for Mutual TLS quarkus.tls.localhost-pkcs12.trust-store.p12.path = localhost-truststore.pkcs12 quarkus.tls.localhost-pkcs12.trust-store.p12.password = localhost-truststore-password # Select localhost-pkcs12 as the TLS configuration for the HTTP server quarkus.http.tls-configuration-name = localhost-pkcs12 # Do not allow any clients which do not prove their identity through an SSL certificate quarkus.http.ssl.client-auth = required # CXF service quarkus.cxf.endpoint."/mTls".implementor = io.quarkiverse.cxf.it.auth.mtls.MTlsHelloServiceImpl # CXF client with a properly set certificate for mTLS quarkus.cxf.client.mTls.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/mTls quarkus.cxf.client.mTls.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService quarkus.cxf.client.mTls.key-store = target/classes/client-keystore.pkcs12 quarkus.cxf.client.mTls.key-store-type = pkcs12 quarkus.cxf.client.mTls.key-store-password = client-keystore-password quarkus.cxf.client.mTls.key-password = client-keystore-password quarkus.cxf.client.mTls.trust-store = target/classes/client-truststore.pkcs12 quarkus.cxf.client.mTls.trust-store-type = pkcs12 quarkus.cxf.client.mTls.trust-store-password = client-truststore-password # Include the keystores in the native executable quarkus.native.resources.includes = *.pkcs12,*.jks 1.1.1.4. Enforce SSL through WS-SecurityPolicy The requirement for the clients to connect through HTTPS can be defined in a policy. The functionality is provided by the quarkus-cxf-rt-ws-security extension. 
Here is an example of a policy file: https-policy.xml <?xml version="1.0" encoding="UTF-8"?> <wsp:Policy wsp:Id="HttpsSecurityServicePolicy" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <wsp:ExactlyOne> <wsp:All> <sp:TransportBinding> <wsp:Policy> <sp:TransportToken> <wsp:Policy> <sp:HttpsToken RequireClientCertificate="false" /> </wsp:Policy> </sp:TransportToken> <sp:IncludeTimestamp /> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic128 /> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:TransportBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> The policy has to be referenced from a service endpoint interface (SEI): HttpsPolicyHelloService.java package io.quarkiverse.cxf.it.security.policy; import jakarta.jws.WebMethod; import jakarta.jws.WebService; import org.apache.cxf.annotations.Policy; /** * A service implementation with a transport policy set */ @WebService(serviceName = "HttpsPolicyHelloService") @Policy(placement = Policy.Placement.BINDING, uri = "https-policy.xml") public interface HttpsPolicyHelloService extends AbstractHelloService { @WebMethod @Override public String hello(String text); } With this setup in place, any request delivered over HTTP will be rejected by the PolicyVerificationInInterceptor : ERROR [org.apa.cxf.ws.pol.PolicyVerificationInInterceptor] Inbound policy verification failed: These policy alternatives can not be satisfied: {http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}TransportBinding: TLS is not enabled ... 1.1.2. Authentication and authorization Note The sample code snippets shown in this section come from the Client and server integration test in the source tree of Quarkus CXF. You may want to use it as a runnable example. 1.1.2.1. Client HTTP basic authentication Use the following client configuration options provided by quarkus-cxf extension to pass the username and password for HTTP basic authentication: quarkus.cxf.client."client-name".username quarkus.cxf.client."client-name".password Here is an example: application.properties quarkus.cxf.client.basicAuth.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuth.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth quarkus.cxf.client.basicAuth.username = bob quarkus.cxf.client.basicAuth.password = bob234 1.1.2.1.1. Accessing WSDL protected by basic authentication By default, the clients created by Quarkus CXF do not send the Authorization header, unless you set the quarkus.cxf.client."client-name".secure-wsdl-access to true : application.properties quarkus.cxf.client.basicAuthSecureWsdl.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuthSecureWsdl.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuthSecureWsdl quarkus.cxf.client.basicAuthSecureWsdl.username = bob quarkus.cxf.client.basicAuthSecureWsdl.password = USD{client-server.bob.password} quarkus.cxf.client.basicAuthSecureWsdl.secure-wsdl-access = true 1.1.2.2. Mutual TLS (mTLS) authentication See the Mutual TLS (mTLS) authentication section in SSL, TLS and HTTPS guide. 1.1.2.3. 
Securing service endpoints The server-side authentication and authorization are driven by Quarkus Security, especially when it comes to Authentication mechanisms Identity providers Role-based access control (RBAC) There is a basic example in our Client and server integration test . Its key parts are: io.quarkus:quarkus-elytron-security-properties-file dependency as an Identity provider Basic authentication enabled and users with their roles configured in application.properties : application.properties quarkus.http.auth.basic = true quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.alice = alice123 quarkus.security.users.embedded.roles.alice = admin quarkus.security.users.embedded.users.bob = bob234 quarkus.security.users.embedded.roles.bob = app-user Role-based access control enforced via @RolesAllowed annotation: BasicAuthHelloServiceImpl.java package io.quarkiverse.cxf.it.auth.basic; import jakarta.annotation.security.RolesAllowed; import jakarta.jws.WebService; import io.quarkiverse.cxf.it.HelloService; @WebService(serviceName = "HelloService", targetNamespace = HelloService.NS) @RolesAllowed("app-user") public class BasicAuthHelloServiceImpl implements HelloService { @Override public String hello(String person) { return "Hello " + person + "!"; } } 1.1.3. Authentication enforced by WS-SecurityPolicy You can enforce authentication through WS-SecurityPolicy, instead of Mutual TLS and Basic HTTP authentication for clients and services . To enforce authentication through WS-SecurityPolicy, follow these steps: Add a supporting tokens policy to an endpoint in the WSDL contract. On the server side, implement an authentication callback handler and associate it with the endpoint in application.properties or via environment variables. Credentials received from clients are authenticated by the callback handler. On the client side, provide credentials through either configuration in application.properties or environment variables. Alternatively, you can implement an authentication callback handler to pass the credentials. 1.1.3.1. Specifying an Authentication Policy If you want to enforce authentication on a service endpoint, associate a supporting tokens policy assertion with the relevant endpoint binding and specify one or more token assertions under it. There are several different kinds of supporting tokens policy assertions, whose XML element names all end with SupportingTokens (for example, SupportingTokens , SignedSupportingTokens , and so on). For a complete list, see the Supporting Tokens section of the WS-SecurityPolicy specification. 1.1.3.2. UsernameToken policy assertion example Tip The sample code snippets used in this section come from the WS-SecurityPolicy integration test in the source tree of Quarkus CXF. You may want to use it as a runnable example. The following listing shows an example of a policy that requires a WS-Security UsernameToken (which contains username/password credentials) to be included in the security header. 
username-token-policy.xml <?xml version="1.0" encoding="UTF-8"?> <wsp:Policy wsp:Id="UsernameTokenSecurityServicePolicy" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702" xmlns:sp13="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200802" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <wsp:ExactlyOne> <wsp:All> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient"> <wsp:Policy> <sp:WssUsernameToken11 /> <sp13:Created /> <sp13:Nonce /> </wsp:Policy> </sp:UsernameToken> </wsp:Policy> </sp:SupportingTokens> </wsp:All> </wsp:ExactlyOne> </wsp:Policy> There are two ways how you can associate this policy file with a service endpoint: Reference the policy on the Service Endpoint Interface (SEI) like this: UsernameTokenPolicyHelloService.java @WebService(serviceName = "UsernameTokenPolicyHelloService") @Policy(placement = Policy.Placement.BINDING, uri = "username-token-policy.xml") public interface UsernameTokenPolicyHelloService extends AbstractHelloService { ... } Include the policy in your WSDL contract and reference it via PolicyReference element . When you have the policy in place, configure the credentials on the service endpoint and the client: application.properties # A service with a UsernameToken policy assertion quarkus.cxf.endpoint."/helloUsernameToken".implementor = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloServiceImpl quarkus.cxf.endpoint."/helloUsernameToken".security.callback-handler = #usernameTokenPasswordCallback # These properties are used in UsernameTokenPasswordCallback # and in the configuration of the helloUsernameToken below wss.user = cxf-user wss.password = secret # A client with a UsernameToken policy assertion quarkus.cxf.client.helloUsernameToken.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/helloUsernameToken quarkus.cxf.client.helloUsernameToken.service-interface = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloService quarkus.cxf.client.helloUsernameToken.security.username = USD{wss.user} quarkus.cxf.client.helloUsernameToken.security.password = USD{wss.password} In the above listing, usernameTokenPasswordCallback is a name of a @jakarta.inject.Named bean implementing javax.security.auth.callback.CallbackHandler . Quarkus CXF will lookup a bean with this name in the CDI container. 
Here is an example implementation of the bean: UsernameTokenPasswordCallback.java package io.quarkiverse.cxf.it.security.policy; import java.io.IOException; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.UnsupportedCallbackException; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.wss4j.common.ext.WSPasswordCallback; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped @Named("usernameTokenPasswordCallback") /* We refer to this bean by this name from application.properties */ public class UsernameTokenPasswordCallback implements CallbackHandler { /* These two configuration properties are set in application.properties */ @ConfigProperty(name = "wss.password") String password; @ConfigProperty(name = "wss.user") String user; @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { if (callbacks.length < 1) { throw new IllegalStateException("Expected a " + WSPasswordCallback.class.getName() + " at possition 0 of callbacks. Got array of length " + callbacks.length); } if (!(callbacks[0] instanceof WSPasswordCallback)) { throw new IllegalStateException( "Expected a " + WSPasswordCallback.class.getName() + " at possition 0 of callbacks. Got an instance of " + callbacks[0].getClass().getName() + " at possition 0"); } final WSPasswordCallback pc = (WSPasswordCallback) callbacks[0]; if (user.equals(pc.getIdentifier())) { pc.setPassword(password); } else { throw new IllegalStateException("Unexpected user " + user); } } } To test the whole setup, you can create a simple {link-quarkus-docs-base}/getting-started-testing[@QuarkusTest] : UsernameTokenTest.java package io.quarkiverse.cxf.it.security.policy; import org.assertj.core.api.Assertions; import org.junit.jupiter.api.Test; import io.quarkiverse.cxf.annotation.CXFClient; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class UsernameTokenTest { @CXFClient("helloUsernameToken") UsernameTokenPolicyHelloService helloUsernameToken; @Test void helloUsernameToken() { Assertions.assertThat(helloUsernameToken.hello("CXF")).isEqualTo("Hello CXF from UsernameToken!"); } } When running the test via mvn test -Dtest=UsernameTokenTest , you should see a SOAP message being logged with a Security header containing Username and Password : Log output of the UsernameTokenTest <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Header> <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soap:mustUnderstand="1"> <wsse:UsernameToken xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" wsu:Id="UsernameToken-bac4f255-147e-42a4-aeec-e0a3f5cd3587"> <wsse:Username>cxf-user</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">secret</wsse:Password> <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">3uX15dZT08jRWFWxyWmfhg==</wsse:Nonce> <wsu:Created>2024-10-02T17:32:10.497Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soap:Header> <soap:Body> <ns2:hello xmlns:ns2="http://policy.security.it.cxf.quarkiverse.io/"> <arg0>CXF</arg0> </ns2:hello> </soap:Body> </soap:Envelope> 1.1.3.3. 
SAML v1 and v2 policy assertion examples The WS-SecurityPolicy integration test contains also analogous examples with SAML v1 and SAML v2 assertions. | [
"Client side SSL quarkus.cxf.client.hello.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/hello quarkus.cxf.client.hello.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService 1 quarkus.cxf.client.hello.trust-store-type = pkcs12 2 quarkus.cxf.client.hello.trust-store = client-truststore.pkcs12 quarkus.cxf.client.hello.trust-store-password = client-truststore-password",
"Server side SSL quarkus.tls.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.key-store.p12.password = localhost-keystore-password quarkus.tls.key-store.p12.alias = localhost quarkus.tls.key-store.p12.alias-password = localhost-keystore-password",
"Server keystore for Simple TLS quarkus.tls.localhost-pkcs12.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.localhost-pkcs12.key-store.p12.password = localhost-keystore-password quarkus.tls.localhost-pkcs12.key-store.p12.alias = localhost quarkus.tls.localhost-pkcs12.key-store.p12.alias-password = localhost-keystore-password Server truststore for Mutual TLS quarkus.tls.localhost-pkcs12.trust-store.p12.path = localhost-truststore.pkcs12 quarkus.tls.localhost-pkcs12.trust-store.p12.password = localhost-truststore-password Select localhost-pkcs12 as the TLS configuration for the HTTP server quarkus.http.tls-configuration-name = localhost-pkcs12 Do not allow any clients which do not prove their indentity through an SSL certificate quarkus.http.ssl.client-auth = required CXF service quarkus.cxf.endpoint.\"/mTls\".implementor = io.quarkiverse.cxf.it.auth.mtls.MTlsHelloServiceImpl CXF client with a properly set certificate for mTLS quarkus.cxf.client.mTls.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/mTls quarkus.cxf.client.mTls.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService quarkus.cxf.client.mTls.key-store = target/classes/client-keystore.pkcs12 quarkus.cxf.client.mTls.key-store-type = pkcs12 quarkus.cxf.client.mTls.key-store-password = client-keystore-password quarkus.cxf.client.mTls.key-password = client-keystore-password quarkus.cxf.client.mTls.trust-store = target/classes/client-truststore.pkcs12 quarkus.cxf.client.mTls.trust-store-type = pkcs12 quarkus.cxf.client.mTls.trust-store-password = client-truststore-password Include the keystores in the native executable quarkus.native.resources.includes = *.pkcs12,*.jks",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"HttpsSecurityServicePolicy\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:TransportBinding> <wsp:Policy> <sp:TransportToken> <wsp:Policy> <sp:HttpsToken RequireClientCertificate=\"false\" /> </wsp:Policy> </sp:TransportToken> <sp:IncludeTimestamp /> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic128 /> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:TransportBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>",
"package io.quarkiverse.cxf.it.security.policy; import jakarta.jws.WebMethod; import jakarta.jws.WebService; import org.apache.cxf.annotations.Policy; /** * A service implementation with a transport policy set */ @WebService(serviceName = \"HttpsPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"https-policy.xml\") public interface HttpsPolicyHelloService extends AbstractHelloService { @WebMethod @Override public String hello(String text); }",
"ERROR [org.apa.cxf.ws.pol.PolicyVerificationInInterceptor] Inbound policy verification failed: These policy alternatives can not be satisfied: {http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}TransportBinding: TLS is not enabled",
"quarkus.cxf.client.basicAuth.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuth.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth quarkus.cxf.client.basicAuth.username = bob quarkus.cxf.client.basicAuth.password = bob234",
"quarkus.cxf.client.basicAuthSecureWsdl.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuthSecureWsdl.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuthSecureWsdl quarkus.cxf.client.basicAuthSecureWsdl.username = bob quarkus.cxf.client.basicAuthSecureWsdl.password = USD{client-server.bob.password} quarkus.cxf.client.basicAuthSecureWsdl.secure-wsdl-access = true",
"quarkus.http.auth.basic = true quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.alice = alice123 quarkus.security.users.embedded.roles.alice = admin quarkus.security.users.embedded.users.bob = bob234 quarkus.security.users.embedded.roles.bob = app-user",
"package io.quarkiverse.cxf.it.auth.basic; import jakarta.annotation.security.RolesAllowed; import jakarta.jws.WebService; import io.quarkiverse.cxf.it.HelloService; @WebService(serviceName = \"HelloService\", targetNamespace = HelloService.NS) @RolesAllowed(\"app-user\") public class BasicAuthHelloServiceImpl implements HelloService { @Override public String hello(String person) { return \"Hello \" + person + \"!\"; } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"UsernameTokenSecurityServicePolicy\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:sp13=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200802\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <wsp:Policy> <sp:WssUsernameToken11 /> <sp13:Created /> <sp13:Nonce /> </wsp:Policy> </sp:UsernameToken> </wsp:Policy> </sp:SupportingTokens> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>",
"@WebService(serviceName = \"UsernameTokenPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"username-token-policy.xml\") public interface UsernameTokenPolicyHelloService extends AbstractHelloService { }",
"A service with a UsernameToken policy assertion quarkus.cxf.endpoint.\"/helloUsernameToken\".implementor = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloServiceImpl quarkus.cxf.endpoint.\"/helloUsernameToken\".security.callback-handler = #usernameTokenPasswordCallback These properties are used in UsernameTokenPasswordCallback and in the configuration of the helloUsernameToken below wss.user = cxf-user wss.password = secret A client with a UsernameToken policy assertion quarkus.cxf.client.helloUsernameToken.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/helloUsernameToken quarkus.cxf.client.helloUsernameToken.service-interface = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloService quarkus.cxf.client.helloUsernameToken.security.username = USD{wss.user} quarkus.cxf.client.helloUsernameToken.security.password = USD{wss.password}",
"package io.quarkiverse.cxf.it.security.policy; import java.io.IOException; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.UnsupportedCallbackException; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.wss4j.common.ext.WSPasswordCallback; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped @Named(\"usernameTokenPasswordCallback\") /* We refer to this bean by this name from application.properties */ public class UsernameTokenPasswordCallback implements CallbackHandler { /* These two configuration properties are set in application.properties */ @ConfigProperty(name = \"wss.password\") String password; @ConfigProperty(name = \"wss.user\") String user; @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { if (callbacks.length < 1) { throw new IllegalStateException(\"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. Got array of length \" + callbacks.length); } if (!(callbacks[0] instanceof WSPasswordCallback)) { throw new IllegalStateException( \"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. Got an instance of \" + callbacks[0].getClass().getName() + \" at possition 0\"); } final WSPasswordCallback pc = (WSPasswordCallback) callbacks[0]; if (user.equals(pc.getIdentifier())) { pc.setPassword(password); } else { throw new IllegalStateException(\"Unexpected user \" + user); } } }",
"package io.quarkiverse.cxf.it.security.policy; import org.assertj.core.api.Assertions; import org.junit.jupiter.api.Test; import io.quarkiverse.cxf.annotation.CXFClient; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class UsernameTokenTest { @CXFClient(\"helloUsernameToken\") UsernameTokenPolicyHelloService helloUsernameToken; @Test void helloUsernameToken() { Assertions.assertThat(helloUsernameToken.hello(\"CXF\")).isEqualTo(\"Hello CXF from UsernameToken!\"); } }",
"<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Header> <wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" soap:mustUnderstand=\"1\"> <wsse:UsernameToken xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" wsu:Id=\"UsernameToken-bac4f255-147e-42a4-aeec-e0a3f5cd3587\"> <wsse:Username>cxf-user</wsse:Username> <wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">secret</wsse:Password> <wsse:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">3uX15dZT08jRWFWxyWmfhg==</wsse:Nonce> <wsu:Created>2024-10-02T17:32:10.497Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soap:Header> <soap:Body> <ns2:hello xmlns:ns2=\"http://policy.security.it.cxf.quarkiverse.io/\"> <arg0>CXF</arg0> </ns2:hello> </soap:Body> </soap:Envelope>"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/quarkus_cxf_security_guide_for_red_hat_build_of_apache_camel/quarkus-cxf-reference-intro-quarkus-cxf-security-guide |
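The SSL sections above reference truststore files such as client-truststore.pkcs12 without repeating how they are produced, noting only that openssl or Java keytool are commonly used. The following sketch wraps the JDK keytool in Python to import a server certificate into a PKCS12 truststore; the certificate file name, alias, and password are placeholders rather than values mandated by Quarkus CXF, and the openssl or Maven-plugin approaches from the Quarkus CXF source tree work just as well.
create_client_truststore.py (illustrative sketch)
#!/usr/bin/env python3
"""Sketch: import a server certificate into a PKCS12 truststore with the JDK
keytool. File names, alias, and password are placeholders."""
import subprocess

SERVER_CERT = "localhost.crt"                    # exported server certificate (assumed name)
TRUSTSTORE = "client-truststore.pkcs12"
TRUSTSTORE_PASSWORD = "client-truststore-password"

def create_truststore() -> None:
    # keytool refuses to overwrite an existing alias, so this is a one-shot helper.
    subprocess.run(
        [
            "keytool", "-importcert", "-noprompt",
            "-alias", "localhost",
            "-file", SERVER_CERT,
            "-keystore", TRUSTSTORE,
            "-storetype", "PKCS12",
            "-storepass", TRUSTSTORE_PASSWORD,
        ],
        check=True,
    )
    print(f"Created {TRUSTSTORE}")

if __name__ == "__main__":
    create_truststore()
The resulting file can then be referenced from quarkus.cxf.client."client-name".trust-store as shown in the client configuration above.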
8.2.4. Creating the LVM Logical Volumes | 8.2.4. Creating the LVM Logical Volumes Create logical volumes with mount points such as / , /home/ , and swap space. Remember that /boot cannot be a logical volume. To add a logical volume, click the Add button in the Logical Volumes section. A dialog window as shown in Figure 8.8, "Creating a Logical Volume" appears. Figure 8.8. Creating a Logical Volume Repeat these steps for each volume group you want to create. Note You may want to leave some free space in the logical volume group so you can expand the logical volumes later. The default automatic configuration does not do this, but this manual configuration example does - approximately 1 GB is left as free space for future expansion. Figure 8.9. Pending Logical Volumes Click OK to apply the volume group and all associated logical volumes. The following figure shows the final manual configuration: Figure 8.10. Final Manual Configuration | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/manual_lvm_partitioning-creating_the_lvm_logical_volumes |
Chapter 14. Monitoring using qdstat | Chapter 14. Monitoring using qdstat The qdstat tool is a command-line tool for monitoring the status and performance of AMQ Interconnect router networks. 14.1. Syntax for using qdstat You can use qdstat with the following syntax: This specifies: An option for the type of information to view. One or more optional connection options to specify a router for which to view the information. If you do not specify a connection option, qdstat connects to the router listening on localhost and the default AMQP port (5672). The secure connection options if the router for which you want to view information only accepts secure connections. Additional resources For more information about qdstat , see the qdstat man page . 14.2. Commands for monitoring the router network You can use qdstat to view the status of routers on your router network. For example, you can view information about the attached links and configured addresses, available connections, and nodes in the router network. To... Use this command... Create a state dump containing all statistics for all routers A state dump shows the current operational state of the router network. If you run this command on an interior router, it displays the statistics for all interior routers. If you run the command on an edge router, it displays the statistics for only that edge router. Create a state dump containing a single statistic for all routers If you run this command on an interior router, it displays the statistic for all interior routers. If you run the command on an edge router, it displays the statistic for only that edge router. Create a state dump containing all statistics for a single router This command shows the statistics for the local router only. View general statistics for a router View a list of connections to a router View the AMQP links attached to a router You can view a list of AMQP links attached to the router from clients (sender/receiver), from or to other routers into the network, to other containers (for example, brokers), and from the tool itself. View known routers on the router network View the addresses known to a router View a router's autolinks View the status of a router's link routes View a router's policy global settings and statistics View a router's policy vhost settings View a router's policy vhost statistics View a router's vhostgroup settings View a router's memory consumption Additional resources For more information about the fields displayed by each qdstat command, see the qdstat man page . | [
"qdstat <option> [ <connection-options> ] [ <secure-connection-options> ]",
"qdstat --all-routers --all-entities",
"qdstat -l|-a|-c|--autolinks|--linkroutes|-g|-m --all-routers",
"qdstat --all-entities",
"qdstat -g [all-routers| <connection-options> ]",
"qdstat -c [all-routers| <connection-options> ]",
"qdstat -l [all-routers| <connection-options> ]",
"qdstat -n [all-routers| <connection-options> ]",
"qdstat -a [all-routers| <connection-options> ]",
"qdstat --autolinks [all-routers| <connection-options> ]",
"qdstat --linkroutes [all-routers| <connection-options> ]",
"qdstat --policy [all-routers| <connection-options> ]",
"qdstat --vhosts [all-routers| <connection-options> ]",
"qdstat --vhoststats [all-routers| <connection-options> ]",
"qdstat --vhostgroups [all-routers| <connection-options> ]",
"qdstat -m [all-routers| <connection-options> ]"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_amq_interconnect/monitoring-using-qdstat-router-rhel |
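Because qdstat prints a point-in-time snapshot, ongoing monitoring usually means running it repeatedly. The following sketch describes one possible workflow rather than a documented tool: it polls a local router with qdstat -c every ten seconds and prints timestamped output. Any option from the table above can be substituted, and connection options can be appended if the router is remote.
poll_qdstat.py (illustrative sketch)
#!/usr/bin/env python3
"""Sketch: poll a router with qdstat on a fixed interval and print timestamped
snapshots. The -c option and the 10-second interval are arbitrary choices."""
import subprocess
import time
from datetime import datetime

INTERVAL_SECONDS = 10
QDSTAT_ARGS = ["qdstat", "-c"]   # append connection options here for a remote router

def poll_forever() -> None:
    while True:
        snapshot = subprocess.run(QDSTAT_ARGS, capture_output=True, text=True, check=True)
        print(f"--- {datetime.now().isoformat(timespec='seconds')} ---")
        print(snapshot.stdout, end="")
        time.sleep(INTERVAL_SECONDS)

if __name__ == "__main__":
    poll_forever()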
7.297. ghostscript | 7.297. ghostscript 7.297.1. RHBA-2013:0651 - ghostscript bug fix update Updated ghostscript packages that fix one bug are now available for Red Hat Enterprise Linux 6. The Ghostscript suite contains utilities for rendering PostScript and PDF documents. Ghostscript translates PostScript code to common, bitmap formats so that the code can be displayed or printed. Bug Fix BZ# 920251 Due to a bug in a function that copies CIDFontType 2 fonts, document conversion attempts sometimes caused the ps2pdf utility to terminate unexpectedly with a segmentation fault. A patch has been provided to address this bug so that the function now copies fonts properly and ps2pdf no longer crashes in the described scenario. Users of ghostscript are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ghostscript |
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 0.0-28 Mon Oct 30 2017 Lenka Spackova Added information on changes in the ld linker behavior to Deprecated Functionality. Revision 0.0-27 Tue Feb 14 2017 Lenka Spackova Updated the Samba 4.1.0 description (Authentication and Interoperability). Revision 0.0-25 Mon Aug 08 2016 Lenka Spackova Added a note about limited support for Windows guest virtual machines to Deprecated Functionality. Revision 0.0-23 Fri Mar 13 2015 Milan Navratil Update of the Red Hat Enterprise Linux 7.0 Release Notes. Revision 0.0-21 Wed Jan 06 2015 Jiri Herrmann Update of the Red Hat Enterprise Linux 7.0 Release Notes. Revision 0.0-1 Thu Dec 11 2013 Eliska Slobodova Release of the Red Hat Enterprise Linux 7.0 Beta Release Notes. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/appe-red_hat_enterprise_linux-7.0_release_notes-revision_history |
Chapter 8. Networking | Chapter 8. Networking Multi-message send system call Red Hat Enterprise Linux 6.2 introduces the multi-message send system call which is the send version of the existing recvmmsg system call in Red Hat Enterprise Linux 6. The system call sendmmsg socket API looks like this: Transmit Packet Steering (XPS) Red Hat Enterprise Linux 6.2 includes Transmit Packet Steering (XPS) for multiqueue devices. XPS introduces more efficient transmission of network packets for multiqueue devices by specifically targeting the processor involved in sending the packet. XPS enables the selection of the transmit queue for packet transmission based on configuration. This is analogous to the receive-side functionality implemented in Red Hat Enterprise Linux 6.1 which allowed for processor selection based on the receive queue (RPS). XPS has shown to improve throughput by 20% to 30%. Traffic flooding for unregistered groups Previously, the bridge flooded packets to unregistered groups via all ports. However, this behavior is not desirable in environments where traffic to unregistered groups is always present. In Red Hat Enterprise Linux 6.2, traffic is only sent to unregistered groups via ports marked as router ports. To force flooding to any given port, mark that port as a router port. Stream Control Transmission Protocol (SCTP) Multihome support Red Hat Enterprise Linux 6.2 adds support for SCTP multihoming-the ability of nodes (that is, multi-home nodes) to be reached at several IP addresses. Tracepoints for UDP packet drop events In Red Hat Enterprise Linux 6.2, more tracepoints have been added for UDP packet drop events. These tracepoints provide a way to analyze the reasons why UDP packets are dropped. IPSet The IPSet feature in the kernel has been added to store multiple IP addresses or port numbers, and match them against a collection via iptables . TCP initial receive window default The TCP initial receive window default has been increased from 4 kB to 15 kB. The benefit of this increase is that more data (15 kB > payload > 4 kB) can now fit in the initial window. With a 4 kB setting (IW3), any payload larger than 4 kB would have to be broken into multiple transfers. TCP initial congestion window default In Red Hat Enterprise Linux 6.2, the TCP initial congestion window default is now set to 10 , according to RFC 5681 . Additionally, the initial-window code common to TCP and CCID-2 has been consolidated. GSO support on IPv6 GSO (Generic Segmentation Offload) support for the IPv6 forward path has been added, improving the performance of host to guest communication if GSO is enabled. vios-proxy vios-proxy is a stream-socket proxy for providing connectivity between a client on a virtual guest and a server on a Hypervisor host. Communication occurs over virtio-serial links. This feature is introduced as a Technology Preview in Red Hat Enterprise Linux 6.2. | [
"struct mmsghdr { struct msghdr msg_hdr; unsigned msg_len; }; ssize_t sendmmsg(int socket, struct mmsghdr *datagrams, int vlen, int flags);"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/networking |
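The XPS feature described above is configured per transmit queue through sysfs. The following sketch is an illustration rather than part of the release notes: it assigns each tx queue of a multiqueue interface to its own CPU by writing a hexadecimal bitmask to the queue's xps_cpus file. The interface name is a placeholder, the one-queue-per-CPU policy is only an example, the mapping assumes at least as many CPUs as queues, and the script must run as root.
configure_xps.py (illustrative sketch)
#!/usr/bin/env python3
"""Sketch: steer each transmit queue of a multiqueue NIC to one CPU for XPS by
writing a hexadecimal CPU bitmask to the queue's xps_cpus sysfs file. Run as
root; the interface name is a placeholder."""
import glob
import os

INTERFACE = "eth0"   # placeholder interface name

def configure_xps(interface: str) -> None:
    queues = sorted(
        glob.glob(f"/sys/class/net/{interface}/queues/tx-*"),
        key=lambda path: int(path.rsplit("-", 1)[1]),
    )
    for index, queue_dir in enumerate(queues):
        mask = 1 << index                      # tx-<index> handled by CPU <index>
        with open(os.path.join(queue_dir, "xps_cpus"), "w") as sysfs_file:
            sysfs_file.write(f"{mask:x}\n")
        print(f"{queue_dir}: xps_cpus = {mask:x}")

if __name__ == "__main__":
    configure_xps(INTERFACE)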
Chapter 10. Configuring NVMe over fabrics using NVMe/RDMA | Chapter 10. Configuring NVMe over fabrics using NVMe/RDMA In an Non-volatile Memory ExpressTM (NVMeTM) over RDMA (NVMeTM/RDMA) setup, you configure an NVMe controller and an NVMe initiator. 10.1. Setting up an NVMe/RDMA controller using configfs You can configure an Non-volatile Memory ExpressTM (NVMeTM) over RDMA (NVMeTM/RDMA) controller by using configfs . Prerequisites Verify that you have a block device to assign to the nvmet subsystem. Procedure Create the nvmet-rdma subsystem: Replace testnqn with the subsystem name. Allow any host to connect to this controller: Configure a namespace: Replace 10 with the namespace number Set a path to the NVMe device: Enable the namespace: Create a directory with an NVMe port: Display the IP address of mlx5_ib0 : Set the transport address for the controller: Set RDMA as the transport type: Set the address family for the port: Create a soft link: Verification Verify that the NVMe controller is listening on the given port and ready for connection requests: Additional resources nvme(1) man page on your system 10.2. Setting up the NVMe/RDMA controller using nvmetcli You can configure the Non-volatile Memory ExpressTM (NVMeTM) over RDMA (NVMeTM/RDMA) controller by using the nvmetcli utility. The nvmetcli utility provides a command line and an interactive shell option. Prerequisites Verify that you have a block device to assign to the nvmet subsystem. Execute the following nvmetcli operations as a root user. Procedure Install the nvmetcli package: Download the rdma.json file: Edit the rdma.json file and change the traddr value to 172.31.0.202 . Setup the controller by loading the NVMe controller configuration file: Note If the NVMe controller configuration file name is not specified, the nvmetcli uses the /etc/nvmet/config.json file. Verification Verify that the NVMe controller is listening on the given port and ready for connection requests: Optional: Clear the current NVMe controller: Additional resources nvmetcli and nvme(1) man pages on your system 10.3. Configuring an NVMe/RDMA host You can configure a Non-volatile Memory ExpressTM (NVMeTM) over RDMA (NVMeTM/RDMA) host by using the NVMe management command-line interface ( nvme-cli ) tool. Procedure Install the nvme-cli tool: Load the nvme-rdma module if it is not loaded: Discover available subsystems on the NVMe controller: Connect to the discovered subsystems: Replace testnqn with the NVMe subsystem name. Replace 172.31.0.202 with the controller IP address. Replace 4420 with the port number. Verification List the NVMe devices that are currently connected: Optional: Disconnect from the controller: Additional resources nvme(1) man page on your system 10.4. steps Enabling multipathing on NVMe devices | [
"modprobe nvmet-rdma mkdir /sys/kernel/config/nvmet/subsystems/ testnqn cd /sys/kernel/config/nvmet/subsystems/ testnqn",
"echo 1 > attr_allow_any_host",
"mkdir namespaces/ 10 cd namespaces/ 10",
"echo -n /dev/ nvme0n1 > device_path",
"echo 1 > enable",
"mkdir /sys/kernel/config/nvmet/ports/ 1 cd /sys/kernel/config/nvmet/ports/ 1",
"ip addr show mlx5_ib0 8: mlx5_ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 4092 qdisc mq state UP group default qlen 256 link/infiniband 00:00:06:2f:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:e7:0f:f6 brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff inet 172.31.0.202/24 brd 172.31.0.255 scope global noprefixroute mlx5_ib0 valid_lft forever preferred_lft forever inet6 fe80::e61d:2d03:e7:ff6/64 scope link noprefixroute valid_lft forever preferred_lft forever",
"echo -n 172.31.0.202 > addr_traddr",
"echo rdma > addr_trtype echo 4420 > addr_trsvcid",
"echo ipv4 > addr_adrfam",
"ln -s /sys/kernel/config/nvmet/subsystems/ testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ testnqn",
"dmesg | grep \"enabling port\" [ 1091.413648] nvmet_rdma: enabling port 1 (172.31.0.202:4420)",
"dnf install nvmetcli",
"wget http://git.infradead.org/users/hch/nvmetcli.git/blob_plain/0a6b088db2dc2e5de11e6f23f1e890e4b54fee64:/rdma.json",
"nvmetcli restore rdma.json",
"dmesg | tail -1 [ 4797.132647] nvmet_rdma: enabling port 2 (172.31.0.202:4420)",
"nvmetcli clear",
"dnf install nvme-cli",
"modprobe nvme-rdma",
"nvme discover -t rdma -a 172.31.0.202 -s 4420 Discovery Log Number of Records 1, Generation counter 2 =====Discovery Log Entry 0====== trtype: rdma adrfam: ipv4 subtype: nvme subsystem treq: not specified, sq flow control disable supported portid: 1 trsvcid: 4420 subnqn: testnqn traddr: 172.31.0.202 rdma_prtype: not specified rdma_qptype: connected rdma_cms: rdma-cm rdma_pkey: 0x0000",
"nvme connect -t rdma -n testnqn -a 172.31.0.202 -s 4420 lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 464.8G 0 part ├─rhel_rdma--virt--03-root 253:0 0 50G 0 lvm / ├─rhel_rdma--virt--03-swap 253:1 0 4G 0 lvm [SWAP] └─rhel_rdma--virt--03-home 253:2 0 410.8G 0 lvm /home nvme0n1 cat /sys/class/nvme/nvme0/transport rdma",
"nvme list",
"nvme disconnect -n testnqn NQN:testnqn disconnected 1 controller(s) lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 465.8G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 464.8G 0 part ├─rhel_rdma--virt--03-root 253:0 0 50G 0 lvm / ├─rhel_rdma--virt--03-swap 253:1 0 4G 0 lvm [SWAP] └─rhel_rdma--virt--03-home 253:2 0 410.8G 0 lvm /home"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/configuring-nvme-over-fabrics-using-nvme-rdma_managing-storage-devices |
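The configfs steps in Section 10.1 lend themselves to scripting, since each one is a directory creation, a file write, or a symlink. The following sketch reproduces those steps in order; the subsystem name, namespace number, block device, IP address, and port are the example values from the procedure and must be adjusted, and the script has to run as root.
setup_nvmet_rdma_controller.py (illustrative sketch)
#!/usr/bin/env python3
"""Sketch: reproduce the configfs steps from Section 10.1. The subsystem name,
namespace number, block device, address, and port are the example values from
the procedure; run as root."""
import os
import subprocess

NVMET = "/sys/kernel/config/nvmet"
SUBSYSTEM = "testnqn"
NAMESPACE = "10"
DEVICE_PATH = "/dev/nvme0n1"
TRADDR = "172.31.0.202"
TRSVCID = "4420"
PORT = "1"

def write(path: str, value: str) -> None:
    with open(path, "w") as attr:
        attr.write(value)

def setup_controller() -> None:
    subprocess.run(["modprobe", "nvmet-rdma"], check=True)

    subsystem_dir = os.path.join(NVMET, "subsystems", SUBSYSTEM)
    namespace_dir = os.path.join(subsystem_dir, "namespaces", NAMESPACE)
    port_dir = os.path.join(NVMET, "ports", PORT)
    os.makedirs(namespace_dir, exist_ok=True)    # configfs auto-populates the intermediate directories
    os.makedirs(port_dir, exist_ok=True)

    write(os.path.join(subsystem_dir, "attr_allow_any_host"), "1")
    write(os.path.join(namespace_dir, "device_path"), DEVICE_PATH)
    write(os.path.join(namespace_dir, "enable"), "1")

    write(os.path.join(port_dir, "addr_traddr"), TRADDR)
    write(os.path.join(port_dir, "addr_trtype"), "rdma")
    write(os.path.join(port_dir, "addr_trsvcid"), TRSVCID)
    write(os.path.join(port_dir, "addr_adrfam"), "ipv4")

    link = os.path.join(port_dir, "subsystems", SUBSYSTEM)
    if not os.path.islink(link):
        os.symlink(subsystem_dir, link)          # exposes the subsystem on the port

if __name__ == "__main__":
    setup_controller()
After the script completes, the same dmesg check from the procedure ("enabling port 1") confirms that the controller is listening.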
20.14. Displaying a URI for Connection to a Graphical Display | Running the virsh domdisplay command will output a URI that can then be used to connect to the graphical display of the guest virtual machine via VNC, SPICE, or RDP. The optional --type argument can be used to specify the graphical display type. If the argument --include-password is used, the SPICE channel password will be included in the URI. Example 20.36. How to display the URI for SPICE The following example displays the URI for SPICE, which is the graphical display that the virtual machine guest1 is using: For more information about connection URIs, see the libvirt upstream pages . | [
"virsh domdisplay --type spice guest1 spice://192.0.2.1:5900"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Domain_Commands-Displaying_a_URI_for_connection_to_a_graphical_display |
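Since virsh domdisplay only prints the URI, connecting still requires a graphical client. The following sketch fetches the URI and hands it to remote-viewer; the guest name is a placeholder, and the availability of remote-viewer (from the virt-viewer package) is an assumption about the client system.
open_guest_display.py (illustrative sketch)
#!/usr/bin/env python3
"""Sketch: fetch a guest's graphical display URI with virsh domdisplay and open
it with remote-viewer. The guest name is a placeholder."""
import subprocess

GUEST = "guest1"   # placeholder guest name

def open_display(guest: str) -> None:
    result = subprocess.run(
        ["virsh", "domdisplay", guest],
        capture_output=True, text=True, check=True,
    )
    uri = result.stdout.strip()
    print(f"Display URI for {guest}: {uri}")
    subprocess.run(["remote-viewer", uri], check=True)

if __name__ == "__main__":
    open_display(GUEST)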
Chapter 4. OpenShift Data Foundation installation overview | Chapter 4. OpenShift Data Foundation installation overview OpenShift Data Foundation consists of multiple components managed by multiple operators. 4.1. Installed Operators When you install OpenShift Data Foundation from the Operator Hub, the following four separate Deployments are created: odf-operator : Defines the odf-operator Pod ocs-operator : Defines the ocs-operator Pod which runs processes for ocs-operator and its metrics-exporter in the same container. rook-ceph-operator : Defines the rook-ceph-operator Pod. mcg-operator : Defines the mcg-operator Pod. These operators run independently and interact with each other by creating customer resources (CRs) watched by the other operators. The ocs-operator is primarily responsible for creating the CRs to configure Ceph storage and Multicloud Object Gateway. The mcg-operator sometimes creates Ceph volumes for use by its components. 4.2. OpenShift Container Storage initialization The OpenShift Data Foundation bundle also defines an external plugin to the OpenShift Container Platform Console, adding new screens and functionality not otherwise available in the Console. This plugin runs as a web server in the odf-console-plugin Pod, which is managed by a Deployment created by the OLM at the time of installation. The ocs-operator automatically creates an OCSInitialization CR after it gets created. Only one OCSInitialization CR exists at any point in time. It controls the ocs-operator behaviors that are not restricted to the scope of a single StorageCluster , but only performs them once. When you delete the OCSInitialization CR, the ocs-operator creates it again and this allows you to re-trigger its initialization operations. The OCSInitialization CR controls the following behaviors: SecurityContextConstraints (SCCs) After the OCSInitialization CR is created, the ocs-operator creates various SCCs for use by the component Pods. Ceph Toolbox Deployment You can use the OCSInitialization to deploy the Ceph Toolbox Pod for the advanced Ceph operations. Rook-Ceph Operator Configuration This configuration creates the rook-ceph-operator-config ConfigMap that governs the overall configuration for rook-ceph-operator behavior. 4.3. Storage cluster creation The OpenShift Data Foundation operators themselves provide no storage functionality, and the desired storage configuration must be defined. After you install the operators, create a new StorageCluster , using either the OpenShift Container Platform console wizard or the CLI and the ocs-operator reconciles this StorageCluster . OpenShift Data Foundation supports a single StorageCluster per installation. Any StorageCluster CRs created after the first one is ignored by ocs-operator reconciliation. OpenShift Data Foundation allows the following three StorageCluster configurations: Internal In the Internal mode, all the components run containerized within the OpenShift Container Platform cluster and uses dynamically provisioned persistent volumes (PVs) created against the StorageClass specified by the administrator in the installation wizard. Internal-attached This mode is similar to the Internal mode but the administrator is required to define the local storage devices directly attached to the cluster nodes that the Ceph uses for its backing storage. Also, the administrator need to create the CRs that the local storage operator reconciles to provide the StorageClass . The ocs-operator uses this StorageClass as the backing storage for Ceph. 
External In this mode, Ceph components do not run inside the OpenShift Container Platform cluster instead connectivity is provided to an external OpenShift Container Storage installation for which the applications can create PVs. The other components run within the cluster as required. MCG Standalone: This mode facilitates the installation of a Multicloud Object Gateway system without an accompanying CephCluster. After a StorageCluster CR is found, ocs-operator validates it and begins to create subsequent resources to define the storage components. 4.3.1. Internal mode storage cluster Both internal and internal-attached storage clusters have the same setup process as follows: StorageClasses Create the storage classes that cluster applications use to create Ceph volumes. SnapshotClasses Create the volume snapshot classes that the cluster applications use to create snapshots of Ceph volumes. Ceph RGW configuration Create various Ceph object CRs to enable and provide access to the Ceph RGW object storage endpoint. Ceph RBD Configuration Create the CephBlockPool CR to enable RBD storage. CephFS Configuration Create the CephFilesystem CR to enable CephFS storage. Rook-Ceph Configuration Create the rook-config-override ConfigMap that governs the overall behavior of the underlying Ceph cluster. CephCluster Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator . For more information, see Rook-Ceph operator . NoobaaSystem Create the NooBaa CR to trigger reconciliation from mcg-operator . For more information, see MCG operator . Job templates Create OpenShift Template CRs that define Jobs to run administrative operations for OpenShift Container Storage. Quickstarts Create the QuickStart CRs that display the quickstart guides in the Web Console. 4.3.1.1. Cluster Creation After the ocs-operator creates the CephCluster CR, the rook-operator creates the Ceph cluster according to the desired configuration. The rook-operator configures the following components: Ceph mon daemons Three Ceph mon daemons are started on different nodes in the cluster. They manage the core metadata for the Ceph cluster and they must form a majority quorum. The metadata for each mon is backed either by a PV if it is in a cloud environment or a path on the local host if it is in a local storage device environment. Ceph mgr daemon This daemon is started and it gathers metrics for the cluster and report them to Prometheus. Ceph OSDs These OSDs are created according to the configuration of the storageClassDeviceSets . Each OSD consumes a PV that stores the user data. By default, Ceph maintains three replicas of the application data across different OSDs for high durability and availability using the CRUSH algorithm. CSI provisioners These provisioners are started for RBD and CephFS . When volumes are requested for the storage classes of OpenShift Container Storage, the requests are directed to the Ceph-CSI driver to provision the volumes in Ceph. CSI volume plugins and CephFS The CSI volume plugins for RBD and CephFS are started on each node in the cluster. The volume plugin needs to be running wherever the Ceph volumes are required to be mounted by the applications. After the CephCluster CR is configured, Rook reconciles the remaining Ceph CRs to complete the setup: CephBlockPool The CephBlockPool CR provides the configuration for Rook operator to create Ceph pools for RWO volumes. 
CephFilesystem The CephFilesystem CR instructs the Rook operator to configure a shared file system with CephFS, typically for RWX volumes. The CephFS metadata server (MDS) is started to manage the shared volumes. CephObjectStore The CephObjectStore CR instructs the Rook operator to configure an object store with the RGW service CephObjectStoreUser CR The CephObjectStoreUser CR instructs the Rook operator to configure an object store user for NooBaa to consume, publishing access/private key as well as the CephObjectStore endpoint. The operator monitors the Ceph health to ensure that storage platform remains healthy. If a mon daemon goes down for too long a period (10 minutes), Rook starts a new mon in its place so that the full quorum can be fully restored. When the ocs-operator updates the CephCluster CR, Rook immediately responds to the requested changes to update the cluster configuration. 4.3.1.2. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and will create a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and create a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizonalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that uses S3. Service A Service for the NooBaa S3 interface is created for applications that uses S3. 4.3.2. External mode storage cluster For external storage clusters, ocs-operator follows a slightly different setup process. The ocs-operator looks for the existence of the rook-ceph-external-cluster-details ConfigMap , which must be created by someone else, either the administrator or the Console. For information about how to create the ConfigMap , see Creating an OpenShift Data Foundation Cluster for external mode . 
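Before expecting the external StorageCluster to reconcile, it can help to confirm that the connection ConfigMap mentioned above actually exists and to watch the external CephCluster reach a connected state. The following is a minimal sketch; the openshift-storage namespace is the usual default and is an assumption here, so substitute the namespace used by your deployment.
# Confirm the externally created connection ConfigMap exists (namespace is an assumption)
$ oc -n openshift-storage get configmap rook-ceph-external-cluster-details
# Watch the CephCluster CR until it reports a connected, healthy phase
$ oc -n openshift-storage get cephcluster -w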
The ocs-operator then creates some or all of the following resources, as specified in the ConfigMap : External Ceph Configuration A ConfigMap that specifies the endpoints of the external mons . External Ceph Credentials Secret A Secret that contains the credentials to connect to the external Ceph instance. External Ceph StorageClasses One or more StorageClasses to enable the creation of volumes for RBD, CephFS, and/or RGW. Enable CephFS CSI Driver If a CephFS StorageClass is specified, configure rook-ceph-operator to deploy the CephFS CSI Pods. Ceph RGW Configuration If an RGW StorageClass is specified, create various Ceph Object CRs to enable and provide access to the Ceph RGW object storage endpoint. After creating the resources specified in the ConfigMap , the StorageCluster creation process proceeds as follows: CephCluster Create the CephCluster CR to trigger Ceph reconciliation from rook-ceph-operator (see subsequent sections). SnapshotClasses Create the SnapshotClasses that applications use to create snapshots of Ceph volumes. NoobaaSystem Create the NooBaa CR to trigger reconciliation from noobaa-operator (see subsequent sections). QuickStarts : Create the Quickstart CRs that display the quickstart guides in the Console. 4.3.2.1. Cluster Creation The Rook operator performs the following operations when the CephCluster CR is created in external mode: The operator validates that a connection is available to the remote Ceph cluster. The connection requires mon endpoints and secrets to be imported into the local cluster. The CSI driver is configured with the remote connection to Ceph. The RBD and CephFS provisioners and volume plugins are started similarly to the CSI driver when configured in internal mode, the connection to Ceph happens to be external to the OpenShift cluster. Periodically watch for monitor address changes and update the Ceph-CSI configuration accordingly. 4.3.2.2. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and will create a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and create a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. 
Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizontalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that use S3. Service A Service for the NooBaa S3 interface is created for applications that use S3. 4.3.3. MCG Standalone StorageCluster In this mode, no CephCluster is created. Instead, a NooBaa system CR is created using default values to take advantage of pre-existing StorageClasses in the OpenShift Container Platform. 4.3.3.1. NooBaa System creation When a NooBaa system is created, the mcg-operator reconciles the following: Default BackingStore Depending on the platform that OpenShift Container Platform and OpenShift Data Foundation are deployed on, a default backing store resource is created so that buckets can use it for their placement policy. The different options are as follows: Amazon Web Services (AWS) deployment The mcg-operator uses the CloudCredentialsOperator (CCO) to mint credentials in order to create a new AWS::S3 bucket and creates a BackingStore on top of that bucket. Microsoft Azure deployment The mcg-operator uses the CCO to mint credentials in order to create a new Azure Blob and creates a BackingStore on top of that bucket. Google Cloud Platform (GCP) deployment The mcg-operator uses the CCO to mint credentials in order to create a new GCP bucket and creates a BackingStore on top of that bucket. On-prem deployment If RGW exists, the mcg-operator creates a new CephUser and a new bucket on top of RGW and creates a BackingStore on top of that bucket. None of the previously mentioned deployments are applicable The mcg-operator creates a pv-pool based on the default storage class and creates a BackingStore on top of that bucket. Default BucketClass A BucketClass with a placement policy to the default BackingStore is created. NooBaa pods The following NooBaa pods are created and started: Database (DB) This is a Postgres DB holding metadata, statistics, events, and so on. However, it does not hold the actual data being stored. Core This is the pod that handles configuration, background processes, metadata management, statistics, and so on. Endpoints These pods perform the actual I/O-related work such as deduplication and compression, communicating with different services to write and read data, and so on. The endpoints are integrated with the HorizontalPodAutoscaler and their number increases and decreases according to the CPU usage observed on the existing endpoint pods. Route A Route for the NooBaa S3 interface is created for applications that use S3. Service A Service for the NooBaa S3 interface is created for applications that use S3. 4.3.3.2. StorageSystem Creation As a part of the StorageCluster creation, odf-operator automatically creates a corresponding StorageSystem CR, which exposes the StorageCluster to OpenShift Data Foundation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_installation_overview |
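After a StorageCluster finishes reconciling in any of the modes described in this chapter, you can list the top-level CRs to confirm what the operators created. The following is a brief sketch, assuming the default openshift-storage namespace used by most deployments:
# Top-level resources owned by odf-operator, ocs-operator, and mcg-operator (namespace is an assumption)
$ oc -n openshift-storage get storagesystem,storagecluster,cephcluster,noobaa
# The storage classes and volume snapshot classes are cluster-scoped
$ oc get storageclass,volumesnapshotclass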
Chapter 8. Host Management and Monitoring Using RHEL Web Console | Chapter 8. Host Management and Monitoring Using RHEL Web Console RHEL web console is an interactive web interface that you can use to perform actions and monitor Red Hat Enterprise Linux hosts. You can enable a remote-execution feature to integrate Satellite with RHEL web console. When you install RHEL web console on a host that you manage with Satellite, you can view the RHEL web console dashboards of that host from within the Satellite web UI. You can also use the features that are integrated with RHEL web console, for example, Red Hat Image Builder. 8.1. Enabling RHEL Web Console on Satellite By default, RHEL web console integration is disabled in Satellite. If you want to access RHEL web console features for your hosts from within Satellite, you must first enable RHEL web console integration on Satellite Server. Procedure On Satellite Server, run satellite-installer with the --enable-foreman-plugin-remote-execution-cockpit option: 8.2. Managing and Monitoring Hosts Using RHEL Web Console You can access the RHEL web console web UI through the Satellite web UI and use the functionality to manage and monitor hosts in Satellite. Prerequisites RHEL web console is enabled in Satellite. RHEL web console is installed on the host that you want to view: For Red Hat Enterprise Linux 8, see Installing the web console in the Managing systems using the RHEL 8 web console guide. For Red Hat Enterprise Linux 7, see Installing the web console in the Managing systems using the RHEL 7 web console guide. Satellite or Capsule can authenticate to the host with SSH keys. For more information, Section 12.8, "Distributing SSH Keys for Remote Execution" . Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the host that you want to manage and monitor with RHEL web console. In the upper right of the host window, click Web Console . You can now access the full range of features available for host monitoring and management, for example, Red Hat Image Builder, through the RHEL web console. For more information about getting started with Red Hat web console, see the Managing systems using the RHEL 8 web console guide or the Managing systems using the RHEL 7 web console guide. For more information about using Red Hat Image Builder through RHEL web console, see Accessing Image Builder GUI in the RHEL 8 web console or Accessing Image Builder GUI in the RHEL 7 web console . 8.3. Disabling RHEL Web Console on Satellite Perform the following procedure if you want to disable RHEL web console on Satellite. Procedure Run this satellite-installer command: In the Satellite web UI, navigate to Administer > Settings and click the Remote execution tab. In the Cockpit URL row, erase the setting under Value and click Submit . This removes the Web Console button from the Satellite web UI. Uninstall the RHEL web console package from Satellite: | [
"satellite-installer --enable-foreman-plugin-remote-execution-cockpit",
"satellite-installer --no-enable-foreman-plugin-remote-execution-cockpit",
"remove rubygem-foreman_remote_execution-cockpit"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_hosts/host_management_and_monitoring_using_cockpit_managing-hosts |
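The prerequisite that RHEL web console is installed on each managed host can be met with the standard cockpit packages. The following is a minimal sketch for a RHEL host; the package name, socket unit, and firewalld service assume the defaults shipped with RHEL:
# On the managed host: install and enable the web console
$ yum install -y cockpit
$ systemctl enable --now cockpit.socket
# Open the default web console port (9090)
$ firewall-cmd --permanent --add-service=cockpit && firewall-cmd --reload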
About this guide | About this guide Warning This guide is automatically generated from comments embedded in the upstream OpenStack source code. Therefore, not all of the parameters listed in this guide are supported in a production environment. To locate information about actual supported parameters, see the relevant guide that describes your supported use-case. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/overcloud_parameters/about_this_guide |
Virtualization Tuning and Optimization Guide | Virtualization Tuning and Optimization Guide Red Hat Enterprise Linux 7 Using KVM performance features for host systems and virtualized guests on RHEL Jiri Herrmann Red Hat Customer Content Services [email protected] Yehuda Zimmerman Red Hat Customer Content Services [email protected] Dayle Parker Red Hat Customer Content Services Scott Radvan Red Hat Customer Content Services Red Hat Subject Matter Experts | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/index |
8.2.5. Storage of Backups | 8.2.5. Storage of Backups Once the backups are complete, what happens then? The obvious answer is that the backups must be stored. However, what is not so obvious is exactly what should be stored -- and where. To answer these questions, we must first consider under what circumstances the backups are to be used. There are three main situations: Small, ad-hoc restoration requests from users Massive restorations to recover from a disaster Archival storage unlikely to ever be used again Unfortunately, there are irreconcilable differences between numbers 1 and 2. When a user accidentally deletes a file, they would like it back immediately. This implies that the backup media is no more than a few steps away from the system to which the data is to be restored. In the case of a disaster that necessitates a complete restoration of one or more computers in your data center, if the disaster was physical in nature, whatever it was that destroyed your computers would also have destroyed the backups sitting a few steps away from the computers. This would be a very bad state of affairs. Archival storage is less controversial; since the chances that it will ever be used for any purpose are rather low, if the backup media was located miles away from the data center there would be no real problem. The approaches taken to resolve these differences vary according to the needs of the organization involved. One possible approach is to store several days worth of backups on-site; these backups are then taken to more secure off-site storage when newer daily backups are created. Another approach would be to maintain two different pools of media: A data center pool used strictly for ad-hoc restoration requests An off-site pool used for off-site storage and disaster recovery Of course, having two pools implies the need to run all backups twice or to make a copy of the backups. This can be done, but double backups can take too long, and copying requires multiple backup drives to process the copies (and probably a dedicated system to actually perform the copy). The challenge for a system administrator is to strike a balance that adequately meets everyone's needs, while ensuring that the backups are available for the worst of situations. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-disaster-backups-storage |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/configuring_authentication_for_red_hat_satellite_users/providing-feedback-on-red-hat-documentation_authentication |
Chapter 1. Preparing to install on a single node | Chapter 1. Preparing to install on a single node 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. About OpenShift on a single node You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special Ignition configuration file. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability. Important The use of OpenShiftSDN with single-node OpenShift is not supported. OVN-Kubernetes is the default network plugin for single-node OpenShift deployments. 1.3. Requirements for installing OpenShift on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the following requirements: Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation. Note For the ppc64le platform, the host should prepare the ISO, but does not need to create the USB boot drive. The ISO can be mounted to PowerVM directly. Note ISO is not required for IBM Z(R) installations. CPU Architecture: Installing OpenShift Container Platform on a single node supports x86_64 , arm64 , ppc64le , and s390x CPU architectures. Supported platforms: Installing OpenShift Container Platform on a single node is supported on bare metal and Certified third-party hypervisors . In most cases, you must specify the platform.none: {} parameter in the install-config.yaml configuration file. The following list shows the only exceptions and the corresponding parameter to specify in the install-config.yaml configuration file: Amazon Web Services (AWS), where you use platform=aws Google Cloud Platform (GCP), where you use platform=gcp Microsoft Azure, where you use platform=azure Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 1.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 8 vCPUs 16 GB of RAM 120 GB Note One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core: (threads per core x cores) x sockets = vCPUs Adding Operators during the installation process might increase the minimum resource requirements. The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Note BMC is not supported on IBM Z(R) and IBM Power(R). Networking: The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN): Table 1.2. 
Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by both clients external to the cluster and within the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by both clients external to the cluster and within the cluster. Important Without persistent IP addresses, communications between the apiserver and etcd might fail. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_a_single_node/preparing-to-install-sno |
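Before you start the installation, it is worth confirming from the administration host that the three required records resolve to the node. The following is a hedged sketch; the cluster name sno and base domain example.com are placeholders for your own values:
# api, api-int, and the *.apps wildcard should all return the node address
$ dig +short api.sno.example.com
$ dig +short api-int.sno.example.com
$ dig +short console.apps.sno.example.com   # any name under *.apps should resolve because the record is a wildcard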
6.2. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest | 6.2. I/O Scheduling with Red Hat Enterprise Linux as a Virtualization Guest You can use I/O scheduling on a Red Hat Enterprise Linux guest virtual machine, regardless of the hypervisor on which the guest is running. The following is a list of benefits and issues that should be considered: Red Hat Enterprise Linux guests often benefit greatly from using the noop scheduler. The scheduler merges small requests from the guest operating system into larger requests before sending the I/O to the hypervisor. This enables the hypervisor to process the I/O requests more efficiently, which can significantly improve the guest's I/O performance. Depending on the workload I/O and how storage devices are attached, schedulers like deadline can be more beneficial than noop . Red Hat recommends performance testing to verify which scheduler offers the best performance impact. Guests that use storage accessed by iSCSI, SR-IOV, or physical device passthrough should not use the noop scheduler. These methods do not allow the host to optimize I/O requests to the underlying physical device. Note In virtualized environments, it is sometimes not beneficial to schedule I/O on both the host and guest layers. If multiple guests use storage on a file system or block device managed by the host operating system, the host may be able to schedule I/O more efficiently because it is aware of requests from all guests. In addition, the host knows the physical layout of storage, which may not map linearly to the guests' virtual storage. All scheduler tuning should be tested under normal operating conditions, as synthetic benchmarks typically do not accurately compare performance of systems using shared resources in virtual environments. 6.2.1. Configuring the I/O Scheduler for Red Hat Enterprise Linux 7 The default scheduler used on a Red Hat Enterprise Linux 7 system is deadline . However, on a Red Hat Enterprise Linux 7 guest machine, it may be beneficial to change the scheduler to noop , by doing the following: In the /etc/default/grub file, change the elevator=deadline string on the GRUB_CMDLINE_LINUX line to elevator=noop . If there is no elevator= string, add elevator=noop at the end of the line. The following shows the /etc/default/grub file after a successful change. Rebuild the /boot/grub2/grub.cfg file. On a BIOS-based machine: On an UEFI-based machine: | [
"cat /etc/default/grub [...] GRUB_CMDLINE_LINUX=\"crashkernel=auto rd.lvm.lv=vg00/lvroot rhgb quiet elevator=noop\" [...]",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-Virtualization_Tuning_Optimization_Guide-IO-Scheduler-Guest |
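In addition to making the change persistent in GRUB, you can check which scheduler a guest block device currently uses and switch it at runtime for testing. A brief sketch; the device name vda is an assumption for a virtio disk, so adjust it to match your guest:
# The bracketed value is the scheduler currently in use
$ cat /sys/block/vda/queue/scheduler
# Switch to noop at runtime for testing; this does not persist across reboots
$ echo noop > /sys/block/vda/queue/scheduler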
Chapter 5. Kernel Parameters | Chapter 5. Kernel Parameters You can modify the kernel behaviour with kernel parameters. Parameter Description BridgeNfCallArpTables Configures sysctl net.bridge.bridge-nf-call-arptables key. The default value is 1 . BridgeNfCallIp6Tables Configures sysctl net.bridge.bridge-nf-call-ip6tables key. The default value is 1 . BridgeNfCallIpTables Configures sysctl net.bridge.bridge-nf-call-iptables key. The default value is 1 . ExtraKernelModules Hash of extra kernel modules to load. ExtraKernelPackages List of extra kernel related packages to install. ExtraSysctlSettings Hash of extra sysctl settings to apply. FsAioMaxNumber The kernel allocates aio memory on demand, and this number limits the number of parallel aio requests; the only drawback of a larger limit is that a malicious guest could issue parallel requests to cause the kernel to set aside memory. Set this number at least as large as 128 * (number of virtual disks on the host) Libvirt uses a default of 1M requests to allow 8k disks, with at most 64M of kernel memory if all disks hit an aio request at the same time. The default value is 0 . InotifyInstancesMax Configures sysctl fs.inotify.max_user_instances key. The default value is 1024 . InotifyIntancesMax Configures sysctl fs.inotify.max_user_instances key. The default value is 1024 . KernelDisableIPv6 Configures sysctl net.ipv6.{default/all}.disable_ipv6 keys. The default value is 0 . KernelIpForward Configures net.ipv4.ip_forward key. The default value is 1 . KernelIpNonLocalBind Configures net.ipv{4,6}.ip_nonlocal_bind key. The default value is 1 . KernelIpv4ConfAllRpFilter Configures the net.ipv4.conf.all.rp_filter key. The default value is 1 . KernelIpv6ConfAllForwarding Configures the net.ipv6.conf.all.forwarding key. The default value is 0 . KernelPidMax Configures sysctl kernel.pid_max key. The default value is 1048576 . NeighbourGcThreshold1 Configures sysctl net.ipv4.neigh.default.gc_thresh1 value. This is the minimum number of entries to keep in the ARP cache. The garbage collector will not run if there are fewer than this number of entries in the cache. The default value is 1024 . NeighbourGcThreshold2 Configures sysctl net.ipv4.neigh.default.gc_thresh2 value. This is the soft maximum number of entries to keep in the ARP cache. The garbage collector will allow the number of entries to exceed this for 5 seconds before collection will be performed. The default value is 2048 . NeighbourGcThreshold3 Configures sysctl net.ipv4.neigh.default.gc_thresh3 value. This is the hard maximum number of entries to keep in the ARP cache. The garbage collector will always run if there are more than this number of entries in the cache. The default value is 4096 . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/overcloud_parameters/ref_kernel-parameters_overcloud_parameters |
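These parameters are typically set in a heat environment file that you pass to the deployment command. The following is a minimal sketch: the file name kernel-params.yaml is chosen for this example, the values are illustrative only, and the nested value key under ExtraSysctlSettings reflects the hash format the sysctl service is generally expected to consume, so verify it against the documentation for your release before deploying.
# Write an environment file that overrides two of the parameters listed above
$ cat > kernel-params.yaml <<'EOF'
parameter_defaults:
  KernelPidMax: 2097152
  ExtraSysctlSettings:
    net.core.somaxconn:
      value: 4096
EOF
# Include the file in the overcloud deployment (other options omitted)
$ openstack overcloud deploy --templates -e kernel-params.yaml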
Chapter 4. Removing a RHOSO deployment from the RHOCP environment | Chapter 4. Removing a RHOSO deployment from the RHOCP environment You can remove a Red Hat OpenStack Services on OpenShift (RHOSO) deployment from the Red Hat OpenShift Container Platform (RHOCP) environment if you no longer require the RHOSO deployment. You can also remove any Operators that were installed on your RHOCP cluster for use only by the RHOSO deployment. Note To create a new RHOSO deployment with the same data plane nodes after RHOSO is removed from the RHOCP environment, you must reprovision the nodes. 4.1. Removing RHOSO resources from the RHOCP environment You can remove the resources created for your Red Hat OpenStack Services on OpenShift (RHOSO) deployment from the Red Hat OpenShift Container Platform (RHOCP) environment if you no longer require the RHOSO deployment. Prerequisites You are logged on to a workstation that has access to the Red Hat OpenShift Container Platform (RHOCP) cluster as a user with cluster-admin privileges. Procedure Delete all the OpenStackDataPlaneDeployment objects in the RHOSO namespace: Delete all the OpenStackDataPlaneNodeSet objects in the RHOSO namespace: Delete all the OpenStackDataPlaneService objects in the RHOSO namespace: Delete the OpenStackControlPlane object from the RHOSO namespace: Confirm that all the deployment pods are deleted: Optional: Delete all Persistent Volume Claims (PVCs) from the RHOSO namespace: Tip Create a backup of your PVCs before deletion if you need to reuse them. For information about how to create a PVC backup, see the documentation for the storage backend used in your RHOSO environment. Optional: Release the PV to make it available for other applications: Delete the Secret objects that contain the certificates issued for the control plane services and the data plane nodes when the RHOSO environment was deployed: If the above commands return the message "No resources found" then the certificate secrets were automatically deleted when you deleted the control plane and data plane resources. For information about configuring the cert-manager Operator before RHOSO deployment to automatically delete the certificate secrets, see Deleting a TLS secret automatically upon Certificate removal in the RHOCP Security and Compliance guide. Verify that the resources have been deleted from the namespace: Optional: Delete the namespace: Note You do not need to delete the namespace if you plan to create a new RHOSO deployment in the same namespace. If deleting the namespace is stuck in the terminating state: Check if any remaining objects have a finalizer: Repeat the following command for each remaining object to remove the object finalizers and unblock deletion of each object: 4.2. Removing RHOSO Operators from the RHOCP cluster You can remove any Operators that were installed on your Red Hat OpenShift Container Platform (RHOCP) cluster for use only by the Red Hat OpenStack Services on OpenShift (RHOSO) deployment. For more information about how to remove an Operator and uninstall all the resources associated with the Operator, see Deleting Operators from a cluster in the RHOCP Operators guide. | [
"oc delete OpenStackDataPlaneDeployment --all -n openstack",
"oc delete OpenStackDataPlaneNodeSet --all -n openstack",
"oc delete OpenStackDataPlaneService --all -n openstack",
"oc delete openstackcontrolplane -l core.openstack.org/openstackcontrolplane -n openstack",
"oc get pods -n openstack",
"oc delete PersistentVolumeClaim -all -n openstack",
"oc patch PersistentVolume <pv_name> -p '{\"spec\":{\"claimRef\": null}}'",
"oc delete secret -l service-cert -n openstack oc delete secret -l ca-cert -n openstack oc delete secret -l osdp-service -n openstack",
"oc get all No resources found.",
"oc delete namespace openstack",
"oc get USD(oc api-resources|grep openstack.org|cut -d\" \" -f1 |paste -sd \",\" -),all -o custom-columns=Kind:.kind,Name:.metadata.name,Finalizers:.metadata.finalizers -n <namespace>",
"oc patch -n <namespace> <object-name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/maintaining_the_red_hat_openstack_services_on_openshift_deployment/assembly_removing-rhoso-deployment-from-rhocp-environment |
14.3. LVM References | 14.3. LVM References Use these sources to learn more about LVM. Installed Documentation rpm -qd lvm2 - This command shows all the documentation available from the lvm package, including man pages. lvm help - This command shows all LVM commands available. Useful Websites http://sources.redhat.com/lvm2 - LVM2 webpage, which contains an overview, link to the mailing lists, and more. http://tldp.org/HOWTO/LVM-HOWTO/ - LVM HOWTO from the Linux Documentation Project. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-lvm-additional-resources |
7.91. iprutils | 7.91. iprutils 7.91.1. RHBA-2013:0378 - iprutils bug fix and enhancement update An updated iprutils package that fixes several bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The iprutils package provides utilities to manage and configure SCSI devices that are supported by the IBM Power RAID SCSI storage device driver. Note The iprutils package has been upgraded to upstream version 2.3.12, which provides a number of bug fixes and enhancements over the version and adds support for the suspend/resume utility for IBM BlueHawk. (BZ# 822648 , BZ# 860532 , BZ#829761) Bug Fixes BZ#826907 Previously, showing disk details caused the iprconfig utility, which is used to configure Hardware RAID devices, to terminate unexpectedly. Now, disk details are shown properly and iprconfig no longer crashes. BZ#830982 Previously, in some situations, iprconfig failed to change the IOA asymmetric access mode if the saved mode in the configuration file located in the "/etc/ipr/" directory was different than the current mode. With this update, iprconfig sets the mode correctly and a warning message is returned when this inconsistency is detected. BZ#869751 Previously, iprutils showed the wrong disk platform location within the system location string when the "iprconfig -c show-details sgx" command was used. Now, the platform location for the hard disk is combined with the location of "secured easy setup" (SES) and the physical location slot number which prevents this error from occurring. Users of iprutils are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/iprutils |
Chapter 3. acl | Chapter 3. acl This chapter describes the commands under the acl command. 3.1. acl delete Delete ACLs for a secret or container as identified by its href. Usage: Table 3.1. Positional arguments Value Summary URI The uri reference for the secret or container. Table 3.2. Command arguments Value Summary -h, --help Show this help message and exit 3.2. acl get Retrieve ACLs for a secret or container by providing its href. Usage: Table 3.3. Positional arguments Value Summary URI The uri reference for the secret or container. Table 3.4. Command arguments Value Summary -h, --help Show this help message and exit Table 3.5. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 3.6. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 3.7. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 3.8. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 3.3. acl submit Submit ACL on a secret or container as identified by its href. Usage: Table 3.9. Positional arguments Value Summary URI The uri reference for the secret or container. Table 3.10. Command arguments Value Summary -h, --help Show this help message and exit --user [USERS], -u [USERS] Keystone userid(s) for acl. --project-access Flag to enable project access behavior. --no-project-access Flag to disable project access behavior. --operation-type {read}, -o {read} Type of barbican operation acl is set for Table 3.11. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 3.12. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 3.13. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 3.14. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 3.4. acl user add Add ACL users to a secret or container as identified by its href. Usage: Table 3.15. Positional arguments Value Summary URI The uri reference for the secret or container. Table 3.16. 
Command arguments Value Summary -h, --help Show this help message and exit --user [USERS], -u [USERS] Keystone userid(s) for acl. --project-access Flag to enable project access behavior. --no-project-access Flag to disable project access behavior. --operation-type {read}, -o {read} Type of barbican operation acl is set for Table 3.17. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 3.18. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 3.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 3.20. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 3.5. acl user remove Remove ACL users from a secret or container as identified by its href. Usage: Table 3.21. Positional arguments Value Summary URI The uri reference for the secret or container. Table 3.22. Command arguments Value Summary -h, --help Show this help message and exit --user [USERS], -u [USERS] Keystone userid(s) for acl. --project-access Flag to enable project access behavior. --no-project-access Flag to disable project access behavior. --operation-type {read}, -o {read} Type of barbican operation acl is set for Table 3.23. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 3.24. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 3.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 3.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack acl delete [-h] URI",
"openstack acl get [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] URI",
"openstack acl submit [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--user [USERS]] [--project-access | --no-project-access] [--operation-type {read}] URI",
"openstack acl user add [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--user [USERS]] [--project-access | --no-project-access] [--operation-type {read}] URI",
"openstack acl user remove [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--user [USERS]] [--project-access | --no-project-access] [--operation-type {read}] URI"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/acl |
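A common sequence is to look up the href of a secret and then grant a single Keystone user read access while disabling project-wide access. A short sketch; the user ID, host, port, and secret UUID shown are placeholders:
# Find the href of the secret you want to share
$ openstack secret list
# Grant read access to one user and disable project access
$ openstack acl user add --user <user_id> --no-project-access https://<barbican_host>:9311/v1/secrets/<secret_uuid>
# Confirm the resulting ACL
$ openstack acl get https://<barbican_host>:9311/v1/secrets/<secret_uuid>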
Chapter 1. Service accounts | Chapter 1. Service accounts An account can be either a user account or a service account. A user account authenticates human users in your organization. A service account authenticates applications or services without human intervention. You create service accounts on Red Hat Hybrid Cloud Console for the following reasons: An application or service needs access to specific resources. The application or service needs to access resources without the need for human intervention. The application or service needs to access resources from multiple locations. You must use service accounts to connect to cloud service APIs on Red Hat Hybrid Cloud Console . Red Hat support for basic authentication ends on December 31, 2024 and will only permit token-based authentication after that date. Service accounts support token-based authentication. For more information about service account implementation, see Transition of Red Hat Hybrid Cloud Console APIs from basic authentication to token-based authentication via service accounts . Note APIs require an access grant token from Red Hat Single Sign-On. The token expires after 15 minutes (900 seconds). Repeat the process of obtaining an access token every 10 minutes (600 seconds) so that the token is rotated prior to expiration. RFC 6749, Section 4.1.4 Additional resources Update Your API Integration Transition of Red Hat Hybrid Cloud Console APIs from basic authentication to token-based authentication via service accounts | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/con-ciam-svc-acct-intro-creating-service-acct |
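The token exchange itself is a standard OAuth 2.0 client credentials request against Red Hat Single Sign-On. The following is a hedged sketch: the token endpoint and scope shown are assumptions based on commonly documented values for Hybrid Cloud Console service accounts, so confirm them against the current API documentation before relying on them.
# Exchange the service account credentials for a short-lived access token (endpoint is an assumption)
$ curl -s -X POST https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
    -d grant_type=client_credentials \
    -d client_id=<service_account_client_id> \
    -d client_secret=<service_account_secret> \
    -d scope=api.console   # scope value is an assumption; adjust for the API you are calling
# The token expires after 15 minutes, so repeat this request roughly every 10 minutes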
C.3. Application Restrictions | C.3. Application Restrictions There are aspects of virtualization that make it unsuitable for certain types of applications. Applications with high I/O throughput requirements should use KVM's paravirtualized drivers (virtio drivers) for fully-virtualized guests. Without the virtio drivers, certain applications may be unpredictable under heavy I/O loads. The following applications should be avoided due to high I/O requirements: kdump server netdump server You should carefully evaluate applications and tools that heavily utilize I/O or those that require real-time performance. Consider the virtio drivers or PCI device assignment for increased I/O performance. For more information on the virtio drivers for fully virtualized guests, see Chapter 5, KVM Paravirtualized (virtio) Drivers . For more information on PCI device assignment, see Chapter 16, Guest Virtual Machine Device Configuration . Applications suffer a small performance loss from running in virtualized environments. The performance benefits of virtualization through consolidating to newer and faster hardware should be evaluated against the potential application performance issues associated with using virtualization. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtualization_restrictions-application_restrictions |
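To confirm from inside a Red Hat Enterprise Linux guest that the paravirtualized drivers discussed here are actually in use, you can inspect the PCI bus and the loaded kernel modules. A brief sketch:
# Virtio devices appear on the guest's PCI bus ...
$ lspci | grep -i virtio
# ... and the corresponding modules should be loaded
$ lsmod | grep virtio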
Chapter 1. Upgrading by using the Operator | Chapter 1. Upgrading by using the Operator Upgrades through the Red Hat Advanced Cluster Security for Kubernetes (RHACS) Operator are performed automatically or manually, depending on the Update approval option you chose at installation. Follow these guidelines when upgrading: If the version for Central is earlier than 3.74, you must upgrade to 3.74 before upgrading to a 4.x version. For upgrading Central to version 3.74, see the upgrade documentation for version 3.74 . When upgrading Operator-based Central deployments from version 3.74, first ensure the Operator upgrade mode is set to Manual . Then, upgrade the Operator to version 4.0 following the procedure in the upgrade documentation for version 4.0 and ensure that Central is online. After the upgrade to version 4.0 is complete, Red Hat recommends upgrading Central to the latest version for full functionality. 1.1. Preparing to upgrade Before you upgrade the Red Hat Advanced Cluster Security for Kubernetes (RHACS) version, complete the following steps: If you are upgrading from version 3.74, verify that you are running the latest patch release version of the RHACS Operator 3.74. Backup your existing Central database. If the cluster you are upgrading contains the SecuredCluster custom resource (CR), change the collection method to CORE_BPF . For more information, see "Changing the collection method". 1.1.1. Changing the collection method If the cluster that you are upgrading contains the SecuredCluster CR, you must ensure that the per node collection setting is set to CORE_BPF before you upgrade. Procedure In the OpenShift Container Platform web console, go to the RHACS Operator page. In the top navigation menu, select Secured Cluster . Click the instance name, for example, stackrox-secured-cluster-services . Use one of the following methods to change the setting: In the Form view , under Per Node Settings Collector Settings Collection , select CORE_BPF . Click YAML to open the YAML editor and locate the spec.perNode.collector.collection attribute. If the value is KernelModule or EBPF , then change it to CORE_BPF . Click Save. Additional resources Updating installed Operators Backing up Red Hat Advanced Cluster Security for Kubernetes 1.2. Modifying Central custom resource The Central DB service requires persistent storage. If you have not configured a default storage class for the Central cluster that is an SSD or is high performance, you must update the Central custom resource to configure the storage class for the Central DB persistent volume claim (PVC). Note Skip this section if you have already configured a default storage class for Central. Procedure Update the central custom resource with the following configuration: 1 You must not change the value of IsEnabled to Enabled . 2 If this claim exists, your cluster uses the existing claim, otherwise it creates a new claim. 1.3. Modifying Central custom resource for external database Prerequisites You must have a database in your database instance that supports PostgreSQL 13 and a user with the following permissions: Connection rights to the database. Usage and Create on the schema. Select , Insert , Update , and Delete on all tables in the schema. Usage on all sequences in the schema. Procedure Create a password secret in the deployed namespace by using the OpenShift Container Platform web console or the terminal. On the OpenShift Container Platform web console, go to the Workloads Secrets page. 
Create a Key/Value secret with the key password and the value as the path of a plain text file containing the password for the superuser of the provisioned database. Or, run the following command in your terminal: USD oc create secret generic external-db-password \ 1 --from-file=password=<password.txt> 2 1 If you use Kubernetes, enter kubectl instead of oc . 2 Replace password.txt with the path of the file which has the plain text password. Go to the Red Hat Advanced Cluster Security for Kubernetes operator page in the OpenShift Container Platform web console. Select Central in the top navigation bar and select the instance you want to connect to the database. Go to the YAML editor view. For db.passwordSecret.name specify the referenced secret that you created in earlier steps. For example, external-db-password . For db.connectionString specify the connection string in keyword=value format, for example, host=<host> port=5432 database=stackrox user=stackrox sslmode=verify-ca For db.persistence delete the entire block. If necessary, you can specify a Certificate Authority for Central to trust the database certificate by adding a TLS block under the top-level spec, as shown in the following example: Update the central custom resource with the following configuration: spec: tls: additionalCAs: - name: db-ca content: | <certificate> central: db: isEnabled: Default 1 connectionString: "host=<host> port=5432 user=<user> sslmode=verify-ca" passwordSecret: name: external-db-password 1 You must not change the value of IsEnabled to Enabled . Click Save . Additional resources Provisioning a database in your PostgreSQL instance 1.4. Changing the subscription channel You can change the update channel for the RHACS Operator by using the OpenShift Container Platform web console or by using the command line. For upgrading to RHACS 4.0 from RHACS 3.74, you must change the update channel. Important You must change the subscription channel for all clusters where you installed the RHACS Operator, including Central and all secured clusters. Prerequisites You must verify that you are using the latest RHACS 3.74 Operator and there are no pending manual Operator upgrades. You must verify that you backed up your Central database. You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Changing the subscription channel by using the web console Use the following instructions for changing the subscription channel by using the web console: Procedure In the Administrator perspective of the OpenShift Container Platform web console, go to Operators Installed Operators . Click the RHACS Operator. Click the Subscription tab. Click the name of the update channel under Update Channel . Select stable , then click Save . For subscriptions with an Automatic approval strategy, the update begins automatically. Go back to the Operators Installed Operators page to monitor the progress of the update. When complete, the status changes to Succeeded and Up to date . For subscriptions with a Manual approval strategy, you can manually approve the update from the Subscription tab. 
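After approving the change, you can confirm from a terminal that the subscription now tracks the stable channel. A short sketch; the rhacs-operator namespace matches the examples in this chapter but might differ in your cluster. The command-line method for changing the channel follows.
# Verify the channel recorded on the Operator subscription (namespace is an assumption)
$ oc -n rhacs-operator get subscription rhacs-operator -o jsonpath='{.spec.channel}{"\n"}'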
Changing the subscription channel by using command line Use the following instructions for changing the subscription channel by using command line: Procedure Run the following command to change the subscription channel to stable : USD oc -n rhacs-operator \ 1 patch subscriptions.operators.coreos.com rhacs-operator \ --type=merge --patch='{ "spec": { "channel": "stable" }}' 1 If you use Kubernetes, enter kubectl instead of oc . During the update, the RHACS Operator provisions a new deployment called central-db and your data begins migrating. It takes around 30 minutes and happens only after you upgrade. 1.5. Rolling back an Operator upgrade To roll back an Operator upgrade, you must perform the steps described in one of the following sections. You can roll back an Operator upgrade by using the CLI or the OpenShift Container Platform web console. Note If you are rolling back from RHACS 4.0, you can only rollback to the latest patch release version of RHACS 3.74. 1.5.1. Rolling back an Operator upgrade by using the CLI You can roll back the Operator version by using CLI commands. Procedure Delete the OLM subscription by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete subscription rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete subscription rhacs-operator Delete the cluster service version (CSV) by running the following command: For OpenShift Container Platform, run the following command: USD oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator For Kubernetes, run the following command: USD kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator Determine the version you want to roll back to by choosing one of the following options: If the current Central instance is running, query the RHACS API to get the rollback version by running the following command: USD curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo If the current Central instance is not running, perform the following steps: Note This procedure can only be used for RHACS release 3.74 and earlier when the rocksdb database is installed. 
Ensure the Central deployment is scaled down by running the following command: For OpenShift Container Platform, run the following command: USD oc scale -n <central namespace> -replicas=0 deploy/central For Kubernetes, run the following command: USD kubectl scale -n <central namespace> -replicas=0 deploy/central Save the following pod spec as a YAML file: apiVersion: v1 kind: Pod metadata: name: get--db-version spec: containers: - name: get--db-version image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version> command: - sh args: - '-c' - "cat /var/lib/stackrox/./migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '" volumeMounts: - name: stackrox-db mountPath: /var/lib/stackrox volumes: - name: stackrox-db persistentVolumeClaim: claimName: stackrox-db Create a pod in your Central namespace by running the following command using the YAML file that you saved: For OpenShift Container Platform, run the following command: USD oc create -n <central namespace> -f pod.yaml For Kubernetes, run the following command: USD kubectl create -n <central namespace> -f pod.yaml After pod creation is complete, get the version by running the following command: For OpenShift Container Platform, run the following command: USD oc logs -n <central namespace> get--db-version For Kubernetes, run the following command: USD kubectl logs -n <central namespace> get--db-version Edit the central-config.yaml ConfigMap to set the maintenance.forceRollBackVersion:<version> parameter by running the following command: For OpenShift Container Platform, run the following command: USD oc get configmap -n <central namespace> central-config -o yaml | sed -e "s/forceRollbackVersion: none/forceRollbackVersion: <version>/" | oc -n <central namespace> apply -f - For Kubernetes, run the following command: USD kubectl get configmap -n <central namespace> central-config -o yaml | sed -e "s/forceRollbackVersion: none/forceRollbackVersion: <version>/" | kubectl -n <central namespace> apply -f - Set the image for the Central deployment using the version string shown in Step 3 as the image tag. For example, run the following command: For OpenShift Container Platform, run the following command: USD oc set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version> For Kubernetes, run the following command: USD kubectl set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version> Verification Ensure that the Central pod starts and has a ready status. If the pod crashes, check the logs to see if the backup was restored. A successful log message appears similar to the following example: Reinstall the Operator on the rolled back channel. For example, 3.74.2 is installed on the rhacs-3.74 channel. 1.5.2. Rolling back an Operator upgrade by using the web console You can roll back the Operator version by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Go to the Operators Installed Operators page. Click the RHACS Operator. On the Operator Details page, select Uninstall Operator from the Actions list. Following this action, the Operator stops running and no longer receives updates. 
Determine the version you want to roll back to by choosing one of the following options: If the current Central instance is running, you can query the RHACS API to get the rollback version by running the following command from a terminal window: USD curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo You can create a pod and extract the version by performing the following steps: Note This procedure can only be used for RHACS release 3.74 and earlier when the rocksdb database is installed. Go to Workloads Deployments central . Under Deployment details , click the down arrow next to the pod count to scale down the pod. Go to Workloads Pods Create Pod and paste the contents of the pod spec as shown in the following example into the editor: apiVersion: v1 kind: Pod metadata: name: get-previous-db-version spec: containers: - name: get-previous-db-version image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version> command: - sh args: - '-c' - "cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '" volumeMounts: - name: stackrox-db mountPath: /var/lib/stackrox volumes: - name: stackrox-db persistentVolumeClaim: claimName: stackrox-db Click Create . After the pod is created, click the Logs tab to get the version string. Update the rollback configuration by performing the following steps: Go to Workloads ConfigMaps central-config and select Edit ConfigMap from the Actions list. Find the forceRollbackVersion line in the value of the central-config.yaml key. Replace none with 3.73.3 , and then save the file. Update Central to the earlier version by performing the following steps: Go to Workloads Deployments central and select Edit Deployment from the Actions list. Update the image name, and then save the changes. Verification Ensure that the Central pod starts and has a ready status. If the pod crashes, check the logs to see if the backup was restored. A successful log message appears similar to the following example: Reinstall the Operator on the rolled back channel. For example, 3.74.2 is installed on the rhacs-3.74 channel. Additional resources Installing Central using the Operator method Operator Lifecycle Manager workflow Manually approving a pending Operator update 1.6. Troubleshooting Operator upgrade issues Follow these instructions to investigate and resolve upgrade-related issues for the RHACS Operator. 1.6.1. Central DB cannot be scheduled Follow the instructions here to troubleshoot a failing Central DB pod during an upgrade: Check the status of the central-db pod: USD oc -n <namespace> get pod -l app=central-db 1 1 If you use Kubernetes, enter kubectl instead of oc . If the status of the pod is Pending , use the describe command to get more details: USD oc -n <namespace> describe po/<central-db-pod-name> 1 1 If you use Kubernetes, enter kubectl instead of oc . You might see the FailedScheduling warning message: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 54s default-scheduler 0/7 nodes are available: 1 Insufficient memory, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 4 Insufficient cpu. preemption: 0/7 nodes are available: 3 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod. This warning message suggests that the scheduled node had insufficient memory to accommodate the pod's resource requirements.
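Before changing any resource settings, you can confirm how much allocatable memory and CPU the nodes actually offer. The following commands are an illustrative sketch: the grep window is arbitrary, and the top command assumes that cluster metrics are available. If you use Kubernetes, enter kubectl instead of oc and kubectl top nodes instead of oc adm top nodes:
oc describe node <node_name> | grep -A 7 Allocatable
oc adm top nodes
Compare the Allocatable values with the resource requests reported for the central-db pod in the describe output above.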
If you have a small environment, consider increasing resources on the nodes or adding a larger node that can support the database. Otherwise, consider decreasing the resource requirements for the central-db pod in the custom resource under central db resources . However, running central with fewer resources than the recommended minimum might lead to degraded performance for RHACS. 1.6.2. Central or Secured cluster fails to deploy When RHACS Operator has the following conditions, you must check the custom resource conditions to find the issue: If the Operator fails to deploy Central or Secured Cluster If the Operator fails to apply CR changes to actual resources For Central, run the following command to check the conditions: USD oc -n rhacs-operator describe centrals.platform.stackrox.io 1 1 If you use Kubernetes, enter kubectl instead of oc . For Secured clusters, run the following command to check the conditions: USD oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1 1 If you use Kubernetes, enter kubectl instead of oc . You can identify configuration errors from the conditions output: Example output Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps "central" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: "50": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed Additionally, you can view RHACS pod logs to find more information about the issue. Run the following command to view the logs: oc -n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1 1 If you use Kubernetes, enter kubectl instead of oc . | [
"spec: central: db: isEnabled: Default 1 persistence: persistentVolumeClaim: 2 claimName: central-db size: 100Gi storageClassName: <storage-class-name>",
"oc create secret generic external-db-password \\ 1 --from-file=password=<password.txt> 2",
"spec: tls: additionalCAs: - name: db-ca content: | <certificate> central: db: isEnabled: Default 1 connectionString: \"host=<host> port=5432 user=<user> sslmode=verify-ca\" passwordSecret: name: external-db-password",
"oc -n rhacs-operator \\ 1 patch subscriptions.operators.coreos.com rhacs-operator --type=merge --patch='{ \"spec\": { \"channel\": \"stable\" }}'",
"oc -n rhacs-operator delete subscription rhacs-operator",
"kubectl -n rhacs-operator delete subscription rhacs-operator",
"oc -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"kubectl -n rhacs-operator delete csv -l operators.coreos.com/rhacs-operator.rhacs-operator",
"curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo",
"oc scale -n <central namespace> -replicas=0 deploy/central",
"kubectl scale -n <central namespace> -replicas=0 deploy/central",
"apiVersion: v1 kind: Pod metadata: name: get-previous-db-version spec: containers: - name: get-previous-db-version image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version> command: - sh args: - '-c' - \"cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '\" volumeMounts: - name: stackrox-db mountPath: /var/lib/stackrox volumes: - name: stackrox-db persistentVolumeClaim: claimName: stackrox-db",
"oc create -n <central namespace> -f pod.yaml",
"kubectl create -n <central namespace> -f pod.yaml",
"oc logs -n <central namespace> get-previous-db-version",
"kubectl logs -n <central namespace> get-previous-db-version",
"oc get configmap -n <central namespace> central-config -o yaml | sed -e \"s/forceRollbackVersion: none/forceRollbackVersion: <version>/\" | oc -n <central namespace> apply -f -",
"kubectl get configmap -n <central namespace> central-config -o yaml | sed -e \"s/forceRollbackVersion: none/forceRollbackVersion: <version>/\" | kubectl -n <central namespace> apply -f -",
"oc set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version>",
"kubectl set image -n <central namespace> deploy/central central=registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<version>",
"Clone to Migrate \".previous\", \"\"",
"curl -k -s -u <user>:<password> https://<central hostname>/v1/centralhealth/upgradestatus | jq -r .upgradeStatus.forceRollbackTo",
"apiVersion: v1 kind: Pod metadata: name: get-previous-db-version spec: containers: - name: get-previous-db-version image: registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:<rollback version> command: - sh args: - '-c' - \"cat /var/lib/stackrox/.previous/migration_version.yaml | grep '^image:' | cut -f 2 -d : | tr -d ' '\" volumeMounts: - name: stackrox-db mountPath: /var/lib/stackrox volumes: - name: stackrox-db persistentVolumeClaim: claimName: stackrox-db",
"Clone to Migrate \".previous\", \"\"",
"oc -n <namespace> get pod -l app=central-db 1",
"oc -n <namespace> describe po/<central-db-pod-name> 1",
"Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 54s default-scheduler 0/7 nodes are available: 1 Insufficient memory, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 4 Insufficient cpu. preemption: 0/7 nodes are available: 3 Preemption is not helpful for scheduling, 4 No preemption victims found for incoming pod.",
"oc -n rhacs-operator describe centrals.platform.stackrox.io 1",
"oc -n rhacs-operator describe securedclusters.platform.stackrox.io 1",
"Conditions: Last Transition Time: 2023-04-19T10:49:57Z Status: False Type: Deployed Last Transition Time: 2023-04-19T10:49:57Z Status: True Type: Initialized Last Transition Time: 2023-04-19T10:59:10Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: ReconcileError Status: True Type: Irreconcilable Last Transition Time: 2023-04-19T10:49:57Z Message: No proxy configuration is desired Reason: NoProxyConfig Status: False Type: ProxyConfigFailed Last Transition Time: 2023-04-19T10:49:57Z Message: Deployment.apps \"central\" is invalid: spec.template.spec.containers[0].resources.requests: Invalid value: \"50\": must be less than or equal to cpu limit Reason: InstallError Status: True Type: ReleaseFailed",
"-n rhacs-operator logs deploy/rhacs-operator-controller-manager manager 1"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/upgrading/upgrade-operator |
Chapter 14. Configuring auditing to track events | Chapter 14. Configuring auditing to track events Red Hat build of Keycloak includes a suite of auditing capabilities. You can record every login and administrator action and review those actions in the Admin Console. Red Hat build of Keycloak also includes a Listener SPI that listens for events and can trigger actions. Examples of built-in listeners include log files and sending emails if an event occurs. 14.1. Auditing user events You can record and view every event that affects users. Red Hat build of Keycloak triggers login events for actions such as successful user login, a user entering an incorrect password, or a user account updating. By default, Red Hat build of Keycloak does not store or display events in the Admin Console. Only the error events are logged to the Admin Console and the server's log file. Procedure Use this procedure to start auditing user events. Click Realm settings in the menu. Click the Events tab. Click the User events settings tab. Toggle Save events to ON . User events settings Specify the length of time to store events in the Expiration field. Click Add saved types to see other events you can save. Add types Click Add . Click Clear user events when you want to delete all saved events. Procedure You can now view events. Click the Events tab in the menu. User events To filter events, click Search user event . Search user event 14.1.1. Event types Login events: Event Description Login A user logs in. Register A user registers. Logout A user logs out. Code to Token An application, or client, exchanges a code for a token. Refresh Token An application, or client, refreshes a token. Account events: Event Description Social Link A user account links to a social media provider. Remove Social Link The link from a social media account to a user account severs. Update Email An email address for an account changes. Update Profile A profile for an account changes. Send Password Reset Red Hat build of Keycloak sends a password reset email. Update Password The password for an account changes. Update TOTP The Time-based One-time Password (TOTP) settings for an account changes. Remove TOTP Red Hat build of Keycloak removes TOTP from an account. Send Verify Email Red Hat build of Keycloak sends an email verification email. Verify Email Red Hat build of Keycloak verifies the email address for an account. Each event has a corresponding error event. 14.1.2. Event listener Event listeners listen for events and perform actions based on that event. Red Hat build of Keycloak includes two built-in listeners, the Logging Event Listener and Email Event Listener. 14.1.2.1. The logging event listener When the Logging Event Listener is enabled, this listener writes to a log file when an error event occurs. An example log message from a Logging Event Listener: You can use the Logging Event Listener to protect against hacker bot attacks: Parse the log file for the LOGIN_ERROR event. Extract the IP Address of the failed login event. Send the IP address to an intrusion prevention software framework tool. The Logging Event Listener logs events to the org.keycloak.events log category. Red Hat build of Keycloak does not include debug log events in server logs, by default. To include debug log events in server logs: Change the log level for the org.keycloak.events category Change the log level used by the Logging Event listener. 
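To change the log level for the org.keycloak.events category, set a category-specific level when you start the server. The following command is a sketch; the root level INFO is an assumption and should match your existing logging configuration: bin/kc.[sh|bat] start --log-level=INFO,org.keycloak.events:debug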
To change the log level used by the Logging Event listener, add the following: bin/kc.[sh|bat] start --spi-events-listener-jboss-logging-success-level=info --spi-events-listener-jboss-logging-error-level=error The valid values for log levels are debug , info , warn , error , and fatal . 14.1.2.2. The Email Event Listener The Email Event Listener sends an email to the user's account when an event occurs and supports the following events: Login Error. Update Password. Update Time-based One-time Password (TOTP). Remove Time-based One-time Password (TOTP). Procedure To enable the Email Listener: Click Realm settings in the menu. Click the Events tab. Click the Event listeners field. Select email . Event listeners You can exclude events by using the --spi-events-listener-email-exclude-events argument. For example: kc.[sh|bat] --spi-events-listener-email-exclude-events=UPDATE_TOTP,REMOVE_TOTP You can set a maximum length of each Event detail in the database by using the --spi-events-store-jpa-max-detail-length argument. This setting is useful if a detail (for example, redirect_uri) is long. For example: kc.[sh|bat] --spi-events-store-jpa-max-detail-length=1000 Also you can set a maximum length of all Event's details by using the --spi-events-store-jpa-max-field-length argument. This setting is useful if you want to adhere to the underlying storage limitation. For example: kc.[sh|bat] --spi-events-store-jpa-max-field-length=2500 14.2. Auditing admin events You can record all actions that are performed by an administrator in the Admin Console. The Admin Console performs administrative actions by invoking the Red Hat build of Keycloak REST interface and Red Hat build of Keycloak audits these REST invocations. You can view the resulting events in the Admin Console. Procedure Use this procedure to start auditing admin actions. Click Realm settings in the menu. Click the Events tab. Click the Admin events settings tab. Toggle Save events to ON . Red Hat build of Keycloak displays the Include representation switch. Toggle Include representation to ON . The Include Representation switch includes JSON documents sent through the admin REST API so you can view the administrators actions. Admin events settings Click Save . To clear the database of stored actions, click Clear admin events . Procedure You can now view admin events. Click Events in the menu. Click the Admin events tab. Admin events When the Include Representation switch is ON, it can lead to storing a lot of information in the database. You can set a maximum length of the representation by using the --spi-events-store-jpa-max-field-length argument. This setting is useful if you want to adhere to the underlying storage limitation. For example: kc.[sh|bat] --spi-events-store-jpa-max-field-length=2500 | [
"11:36:09,965 WARN [org.keycloak.events] (default task-51) type=LOGIN_ERROR, realmId=master, clientId=myapp, userId=19aeb848-96fc-44f6-b0a3-59a17570d374, ipAddress=127.0.0.1, error=invalid_user_credentials, auth_method=openid-connect, auth_type=code, redirect_uri=http://localhost:8180/myapp, code_id=b669da14-cdbb-41d0-b055-0810a0334607, username=admin",
"bin/kc.[sh|bat] start --spi-events-listener-jboss-logging-success-level=info --spi-events-listener-jboss-logging-error-level=error",
"kc.[sh|bat] --spi-events-listener-email-exclude-events=UPDATE_TOTP,REMOVE_TOTP",
"kc.[sh|bat] --spi-events-store-jpa-max-detail-length=1000",
"kc.[sh|bat] --spi-events-store-jpa-max-field-length=2500",
"kc.[sh|bat] --spi-events-store-jpa-max-field-length=2500"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/configuring_auditing_to_track_events |
7.49. environment-modules | 7.49. environment-modules 7.49.1. RHBA-2015:0670 - environment-modules bug fix update Updated environment-modules packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The environment-modules packages provide for the dynamic modification of user environment using module files. Each module file contains the information needed to configure the shell for an application. Once the package is initialized, the environment can be modified on a per-module basis using the module command which interprets module files. Bug Fixes BZ# 979789 Previously, misleading information about available modules in nested module directories was displayed to the user. To fix this bug, the code detecting module versions has been amended, and correct information is now displayed. BZ# 1117307 Prior to this update, modules were not properly unloaded when a loading module file contained the "module unload" command. With this update, the logic in the code for version detection of modules has been modified, and modules that contain the "module unload" command are now unloaded correctly. Users of environment-modules are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-environment-modules |
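A brief, hypothetical modulefile illustrates the "module unload" usage covered by the second fix; the module names and path are invented for the example:
#%Module1.0
module unload gcc/4.4
prepend-path PATH /opt/gcc-4.8/bin
When such a file is loaded with "module load gcc/4.8", the conflicting gcc/4.4 module is now unloaded as expected.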
Chapter 15. Red Hat Quay as a proxy cache for upstream registries | Chapter 15. Red Hat Quay as a proxy cache for upstream registries With the growing popularity of container development, customers increasingly rely on container images from upstream registries like Docker or Google Cloud Platform to get services up and running. Today, registries have rate limitations and throttling on the number of times users can pull from these registries. With this feature, Red Hat Quay will act as a proxy cache to circumvent pull-rate limitations from upstream registries. Adding a cache feature also accelerates pull performance, because images are pulled from the cache rather than upstream dependencies. Cached images are only updated when the upstream image digest differs from the cached image, reducing rate limitations and potential throttling. With Red Hat Quay cache proxy, the following features are available: Specific organizations can be defined as a cache for upstream registries. Configuration of a Quay organization that acts as a cache for a specific upstream registry. This repository can be defined by using the Quay UI, and offers the following configurations: Upstream registry credentials for private repositories or increased rate limiting. Expiration timer to avoid surpassing cache organization size. Global on/off configurable via the configuration application. Caching of entire upstream registries or just a single namespace, for example, all of docker.io or just docker.io/library . Logging of all cache pulls. Cached images scannability by Clair. 15.1. Proxy cache architecture The following image shows the expected design flow and architecture of the proxy cache feature. When a user pulls an image, for example, postgres:14 , from an upstream repository on Red Hat Quay, the repository checks to see if an image is present. If the image does not exist, a fresh pull is initiated. After being pulled, the image layers are saved to cache and server to the user in parallel. The following image depicts an architectural overview of this scenario: If the image in the cache exists, users can rely on Quay's cache to stay up-to-date with the upstream source so that newer images from the cache are automatically pulled. This happens when tags of the original image have been overwritten in the upstream registry. The following image depicts an architectural overview of what happens when the upstream image and cached version of the image are different: If the upstream image and cached version are the same, no layers are pulled and the cached image is delivered to the user. In some cases, users initiate pulls when the upstream registry is down. If this happens with the configured staleness period, the image stored in cache is delivered. If the pull happens after the configured staleness period, the error is propagated to the user. The following image depicts an architectural overview when a pull happens after the configured staleness period: Quay administrators can leverage the configurable size limit of an organization to limit cache size so that backend storage consumption remains predictable. This is achieved by discarding images from the cache according to the frequency in which an image is used. The following image depicts an architectural overview of this scenario: 15.2. Proxy cache limitations Proxy caching with Red Hat Quay has the following limitations: Your proxy cache must have a size limit of greater than, or equal to, the image you want to cache. 
For example, if your proxy cache organization has a maximum size of 500 MB, and the image a user wants to pull is 700 MB, the image will be cached and will overflow beyond the configured limit. Cached images must have the same properties that images on a Quay repository must have. Currently, only layers requested by the client are cached. 15.3. Using Red Hat Quay to proxy a remote registry The following procedure describes how you can use Red Hat Quay to proxy a remote registry. This procedure is set up to proxy quay.io, which allows users to use podman to pull any public image from any namespace on quay.io. Prerequisites FEATURE_PROXY_CACHE in your config.yaml is set to true . Assigned the Member team role. For more information about team roles, see Users and organizations in Red Hat Quay . Procedure In your Quay organization on the UI, for example, cache-quayio , click Organization Settings on the left hand pane. Optional: Click Add Storage Quota to configure quota management for your organization. For more information about quota management, see Quota Management . Note In some cases, pulling images with Podman might return the following error when quota limit is reached during a pull: unable to pull image: Error parsing image configuration: Error fetching blob: invalid status code from registry 403 (Forbidden) . Error 403 is inaccurate, and occurs because Podman hides the correct API error: Quota has been exceeded on namespace . This known issue will be fixed in a future Podman update. In Remote Registry enter the name of the remote registry to be cached, for example, quay.io , and click Save . Note By adding a namespace to the Remote Registry , for example, quay.io/<namespace> , users in your organization will only be able to proxy from that namespace. Optional: Add a Remote Registry Username and Remote Registry Password . Note If you do not set a Remote Registry Username and Remote Registry Password , you cannot add one without removing the proxy cache and creating a new registry. Optional: Set a time in the Expiration field. Note The default tag Expiration field for cached images in a proxy organization is set to 86400 seconds. In the proxy organization, the tag expiration is refreshed to the value set in the UI's Expiration field every time the tag is pulled. This feature is different than Quay's default individual tag expiration feature. In a proxy organization, it is possible to override the individual tag feature. When this happens, the individual tag's expiration is reset according to the Expiration field of the proxy organization. Expired images will disappear after the allotted time, but are still stored in Quay. The time in which an image is completely deleted, or collected, depends on the Time Machine setting of your organization. The default time for garbage collection is 14 days unless otherwise specified. Click Save . On the CLI, pull a public image from the registry, for example, quay.io, acting as a proxy cache: Important If your organization is set up to pull from a single namespace in the remote registry, the remote registry namespace must be omitted from the URL. For example, podman pull <registry_url>/<organization_name>/<image_name> . 15.3.1. Leveraging storage quota limits in proxy organizations With Red Hat Quay 3.8, the proxy cache feature has been enhanced with an auto-pruning feature for tagged images. The auto-pruning of image tags is only available when a proxied namespace has quota limitations configured. 
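Both the proxy cache and the auto-pruning behavior described here depend on feature flags in the registry configuration. As a minimal illustration, and assuming all other settings keep their defaults, the relevant config.yaml keys are:
FEATURE_PROXY_CACHE: true
FEATURE_QUOTA_MANAGEMENT: true
These flags are restated in the prerequisites of the test procedure that follows.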
Currently, if an image size is greater than quota for an organization, the image is skipped from being uploaded until an administrator creates the necessary space. Now, when an image is pushed that exceeds the allotted space, the auto-pruning enhancement marks the least recently used tags for deletion. As a result, the new image tag is stored, while the least used image tag is marked for deletion. Important As part of the auto-pruning feature, the tags that are marked for deletion are eventually garbage collected by the garbage collector (gc) worker process. As a result, the quota size restriction is not fully enforced during this period. Currently, the namespace quota size computation does not take into account the size for manifest child. This is a known issue and will be fixed in a future version of Red Hat Quay. 15.3.1.1. Testing the storage quota limits feature in proxy organizations Use the following procedure to test the auto-pruning feature of an organization with proxy cache and storage quota limitations enabled. Prerequisites Your organization is configured to serve as a proxy organization. The following example proxies from quay.io. FEATURE_PROXY_CACHE is set to true in your config.yaml file. FEATURE_QUOTA_MANAGEMENT is set to true in your config.yaml file. Your organization is configured with a quota limit, for example, 150 MB . Procedure Pull an image to your repository from your proxy organization, for example: Depending on the space left in your repository, you might need to pull additional images from your proxy organization, for example: In the Red Hat Quay registry UI, click the name of your repository. Click Tags in the navigation pane and ensure that quay:3.7.9 and quay:3.6.2 are tagged. Pull the last image that will result in your repository exceeding the allotted quota, for example: Refresh the Tags page of your Red Hat Quay registry. The first image that you pushed, for example, quay:3.7.9 should have been auto-pruned. The Tags page should now show quay:3.6.2 and quay:3.5.1 . | [
"podman pull <registry_url>/<organization_name>/<quayio_namespace>/<image_name>",
"podman pull quay-server.example.com/proxytest/projectquay/quay:3.7.9",
"podman pull quay-server.example.com/proxytest/projectquay/quay:3.6.2",
"podman pull quay-server.example.com/proxytest/projectquay/quay:3.5.1"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/use_red_hat_quay/quay-as-cache-proxy |
Chapter 20. Preparing the system for IdM replica installation | Chapter 20. Preparing the system for IdM replica installation The following links list the requirements to install an Identity Management (IdM) replica. Before the installation, verify your system meets these requirements. Ensure the target system meets the general requirements for IdM server installation . Ensure the target system meets the additional, version requirements for IdM replica installation . Authorize the target system for enrollment into the IdM domain. For more information, see one of the following sections that best fits your needs: Authorizing the installation of a replica on an IdM client Authorizing the installation of a replica on a system that is not enrolled into IdM Additional resources Planning the replica topology 20.1. Replica version requirements Red Hat Enterprise Linux (RHEL) 8 replicas only work with Identity Management (IdM) servers running on RHEL 7.4 and later. Before introducing IdM replicas running on RHEL 8 into an existing deployment, upgrade all IdM servers to RHEL 7.4 or later, and change the domain level to 1. In addition, the replica must be running the same or later version of IdM. For example: You have an IdM server installed on Red Hat Enterprise Linux 8 and it uses IdM 4.x packages. You must install the replica also on Red Hat Enterprise Linux 8 or later and use IdM version 4.x or later. This ensures that configuration can be properly copied from the server to the replica. For details on how to display the IdM software version, see Methods for displaying IdM software version . 20.2. Methods for displaying IdM software version You can display the IdM version number with: The IdM WebUI ipa commands rpm commands Displaying version through the WebUI In the IdM WebUI, the software version can be displayed by choosing About from the username menu at the upper-right. Displaying version with ipa commands From the command line, use the ipa --version command. Displaying version with rpm commands If IdM services are not operating properly, you can use the rpm utility to determine the version number of the ipa-server package that is currently installed. 20.3. Authorizing the installation of a replica on an IdM client When installing a replica on an existing Identity Management (IdM) client by running the ipa-replica-install utility, choose Method 1 or Method 2 below to authorize the replica installation. Choose Method 1 if one of the following applies: You want a senior system administrator to perform the initial part of the procedure and a junior administrator to perform the rest. You want to automate your replica installation. Method 1: the ipaservers host group Log in to any IdM host as IdM admin: Add the client machine to the ipaservers host group: Note Membership in the ipaservers group grants the machine elevated privileges similar to the administrator's credentials. Therefore, in the step, the ipa-replica-install utility can be run on the host successfully by a junior system administrator. Method 2: a privileged user's credentials Choose one of the following methods to authorize the replica installation by providing a privileged user's credentials: Let Identity Management (IdM) prompt you for the credentials interactively after you start the ipa-replica-install utility. This is the default behavior. Log in to the client as a privileged user immediately before running the ipa-replica-install utility. 
The default privileged user is admin : Additional resources To start the installation procedure, see Installing an IdM replica . You can use an Ansible playbook to install IdM replicas. For more information, see Installing an Identity Management replica using an Ansible playbook . 20.4. Authorizing the installation of a replica on a system that is not enrolled into IdM When installing a replica on a system that is not enrolled in the Identity Management (IdM) domain, the ipa-replica-install utility first enrolls the system as a client and then installs the replica components. For this scenario, choose Method 1 or Method 2 below to authorize the replica installation. Choose Method 1 if one of the following applies: You want a senior system administrator to perform the initial part of the procedure and a junior administrator to perform the rest. You want to automate your replica installation. Method 1: a random password generated on an IdM server Enter the following commands on any server in the domain: Log in as the administrator. Add the external system as an IdM host. Use the --random option with the ipa host-add command to generate a random one-time password to be used for the subsequent replica installation. The generated password will become invalid after you use it to enroll the machine into the IdM domain. It will be replaced with a proper host keytab after the enrollment is finished. Add the system to the ipaservers host group. Note Membership in the ipaservers group grants the machine elevated privileges similar to the administrator's credentials. Therefore, in the step, the ipa-replica-install utility can be run on the host successfully by a junior system administrator that provides the generated random password. Method 2: a privileged user's credentials Using this method, you authorize the replica installation by providing a privileged user's credentials. The default privileged user is admin . No action is required prior to running the IdM replica installation utility. Add the principal name and password options ( --principal admin --admin-password password ) to the ipa-replica-install command directly during the installation. Additional resources To start the installation procedure, see Installing an IdM replica . You can use an Ansible playbook to install IdM replicas. For more information, see Installing an Identity Management replica using an Ansible playbook . | [
"ipa --version VERSION: 4.8.0 , API_VERSION: 2.233",
"rpm -q ipa-server ipa-server-4.8.0-11 .module+el8.1.0+4247+9f3fd721.x86_64",
"kinit admin",
"ipa hostgroup-add-member ipaservers --hosts client.idm.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.idm.example.com, client.idm.example.com ------------------------- Number of members added 1 -------------------------",
"kinit admin",
"kinit admin",
"ipa host-add replica.example.com --random -------------------------------------------------- Added host \"replica.example.com\" -------------------------------------------------- Host name: replica.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com",
"ipa hostgroup-add-member ipaservers --hosts replica.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.example.com, replica.example.com ------------------------- Number of members added 1 -------------------------"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/preparing-the-system-for-ipa-replica-installation_installing-identity-management |
Chapter 11. Nodes | Chapter 11. Nodes 11.1. Node maintenance Nodes can be placed into maintenance mode by using the oc adm utility or NodeMaintenance custom resources (CRs). Note The node-maintenance-operator (NMO) is no longer shipped with OpenShift Virtualization. It is deployed as a standalone Operator from the OperatorHub in the OpenShift Container Platform web console or by using the OpenShift CLI ( oc ). For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. Important Virtual machines (VMs) must have a persistent volume claim (PVC) with a shared ReadWriteMany (RWX) access mode to be live migrated. The Node Maintenance Operator watches for new or deleted NodeMaintenance CRs. When a new NodeMaintenance CR is detected, no new workloads are scheduled and the node is cordoned off from the rest of the cluster. All pods that can be evicted are evicted from the node. When a NodeMaintenance CR is deleted, the node that is referenced in the CR is made available for new workloads. Note Using a NodeMaintenance CR for node maintenance tasks achieves the same results as the oc adm cordon and oc adm drain commands using standard OpenShift Container Platform custom resource processing. 11.1.1. Eviction strategies Placing a node into maintenance marks the node as unschedulable and drains all the VMs and pods from it. You can configure eviction strategies for virtual machines (VMs) or for the cluster. VM eviction strategy The VM LiveMigrate eviction strategy ensures that a virtual machine instance (VMI) is not interrupted if the node is placed into maintenance or drained. VMIs with this eviction strategy will be live migrated to another node. You can configure eviction strategies for virtual machines (VMs) by using the OpenShift Container Platform web console or the command line . Important The default eviction strategy is LiveMigrate . A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually. You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible , which does not block an upgrade, or to None , for VMs that should not be migrated. Cluster eviction strategy You can configure an eviction strategy for the cluster to prioritize workload continuity or infrastructure upgrade. Table 11.1. Cluster eviction strategies Eviction strategy Description Interrupts workflow Blocks upgrades LiveMigrate 1 Prioritizes workload continuity over upgrades. No Yes 2 LiveMigrateIfPossible Prioritizes upgrades over workload continuity to ensure that the environment is updated. Yes No None 3 Shuts down VMs with no eviction strategy. Yes No Default eviction strategy for multi-node clusters. If a VM blocks an upgrade, you must shut down the VM manually. Default eviction strategy for single-node OpenShift. 11.1.1.1. Configuring a VM eviction strategy using the command line You can configure an eviction strategy for a virtual machine (VM) by using the command line. Important The default eviction strategy is LiveMigrate . A non-migratable VM with a LiveMigrate eviction strategy might prevent nodes from draining or block an infrastructure upgrade because the VM is not evicted from the node. This situation causes a migration to remain in a Pending or Scheduling state unless you shut down the VM manually. 
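To see whether any migrations are stuck, you can list the migration and VMI objects across namespaces. This is a sketch that assumes the default vmim and vmi short names provided by OpenShift Virtualization; if you use Kubernetes, enter kubectl instead of oc:
oc get vmim -A
oc get vmi -A -o wide
A migration that stays in the Pending or Scheduling phase for a non-migratable VM typically points to the eviction strategy problem described here.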
You must set the eviction strategy of non-migratable VMs to LiveMigrateIfPossible , which does not block an upgrade, or to None , for VMs that should not be migrated. Procedure Edit the VirtualMachine resource by running the following command: USD oc edit vm <vm_name> -n <namespace> Example eviction strategy apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1 # ... 1 Specify the eviction strategy. The default value is LiveMigrate . Restart the VM to apply the changes: USD virtctl restart <vm_name> -n <namespace> 11.1.1.2. Configuring a cluster eviction strategy by using the command line You can configure an eviction strategy for a cluster by using the command line. Procedure Edit the hyperconverged resource by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the cluster eviction strategy as shown in the following example: Example cluster eviction strategy apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate # ... 11.1.2. Run strategies A virtual machine (VM) configured with spec.running: true is immediately restarted. The spec.runStrategy key provides greater flexibility for determining how a VM behaves under certain conditions. Important The spec.runStrategy and spec.running keys are mutually exclusive. Only one of them can be used. A VM configuration with both keys is invalid. 11.1.2.1. Run strategies The spec.runStrategy key has four possible values: Always The virtual machine instance (VMI) is always present when a virtual machine (VM) is created on another node. A new VMI is created if the original stops for any reason. This is the same behavior as running: true . RerunOnFailure The VMI is re-created on another node if the instance fails. The instance is not re-created if the VM stops successfully, such as when it is shut down. Manual You control the VMI state manually with the start , stop , and restart virtctl client commands. The VM is not automatically restarted. Halted No VMI is present when a VM is created. This is the same behavior as running: false . Different combinations of the virtctl start , stop and restart commands affect the run strategy. The following table describes a VM's transition between states. The first column shows the VM's initial run strategy. The remaining columns show a virtctl command and the new run strategy after that command is run. Table 11.2. Run strategy before and after virtctl commands Initial run strategy Start Stop Restart Always - Halted Always RerunOnFailure - Halted RerunOnFailure Manual Manual Manual Manual Halted Always - - Note If a node in a cluster installed by using installer-provisioned infrastructure fails the machine health check and is unavailable, VMs with runStrategy: Always or runStrategy: RerunOnFailure are rescheduled on a new node. 11.1.2.2. Configuring a VM run strategy by using the command line You can configure a run strategy for a virtual machine (VM) by using the command line. Important The spec.runStrategy and spec.running keys are mutually exclusive. A VM configuration that contains values for both keys is invalid. Procedure Edit the VirtualMachine resource by running the following command: USD oc edit vm <vm_name> -n <namespace> Example run strategy apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always # ... 11.1.3. 
Maintaining bare metal nodes When you deploy OpenShift Container Platform on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks. When a bare metal node fails, for example, if a fatal kernel error happens or a NIC card hardware failure occurs, workloads on the failed node need to be restarted elsewhere else on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring workloads do not get interrupted. Detailed progress and node status details are provided during maintenance. 11.1.4. Additional resources About live migration 11.2. Managing node labeling for obsolete CPU models You can schedule a virtual machine (VM) on a node as long as the VM CPU model and policy are supported by the node. 11.2.1. About node labeling for obsolete CPU models The OpenShift Virtualization Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs. By default, the following CPU models are eliminated from the list of labels generated for the node: Example 11.1. Obsolete CPU models This predefined list is not visible in the HyperConverged CR. You cannot remove CPU models from this list, but you can add to the list by editing the spec.obsoleteCPUs.cpuModels field of the HyperConverged CR. 11.2.2. About node labeling for CPU features Through the process of iteration, the base CPU features in the minimum CPU model are eliminated from the list of labels generated for the node. For example: An environment might have two supported CPU models: Penryn and Haswell . If Penryn is specified as the CPU model for minCPU , each base CPU feature for Penryn is compared to the list of CPU features supported by Haswell . Example 11.2. CPU features supported by Penryn Example 11.3. CPU features supported by Haswell If both Penryn and Haswell support a specific CPU feature, a label is not created for that feature. Labels are generated for CPU features that are supported only by Haswell and not by Penryn . Example 11.4. Node labels created for CPU features after iteration 11.2.3. Configuring obsolete CPU models You can configure a list of obsolete CPU models by editing the HyperConverged custom resource (CR). Procedure Edit the HyperConverged custom resource, specifying the obsolete CPU models in the obsoleteCPUs array. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - "<obsolete_cpu_1>" - "<obsolete_cpu_2>" minCPUModel: "<minimum_cpu_model>" 2 1 Replace the example values in the cpuModels array with obsolete CPU models. Any value that you specify is added to a predefined list of obsolete CPU models. The predefined list is not visible in the CR. 2 Replace this value with the minimum CPU model that you want to use for basic CPU features. If you do not specify a value, Penryn is used by default. 11.3. Preventing node reconciliation Use skip-node annotation to prevent the node-labeller from reconciling a node. 11.3.1. Using skip-node annotation If you want the node-labeller to skip a node, annotate that node by using the oc CLI. 
Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Annotate the node that you want to skip by running the following command: USD oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1 1 Replace <node_name> with the name of the relevant node to skip. Reconciliation resumes on the cycle after the node annotation is removed or set to false. 11.3.2. Additional resources Managing node labeling for obsolete CPU models 11.4. Deleting a failed node to trigger virtual machine failover If a node fails and node health checks are not deployed on your cluster, virtual machines (VMs) with runStrategy: Always configured are not automatically relocated to healthy nodes. 11.4.1. Prerequisites A node where a virtual machine was running has the NotReady condition . The virtual machine that was running on the failed node has runStrategy set to Always . You have installed the OpenShift CLI ( oc ). 11.4.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 11.4.3. Verifying virtual machine failover After all resources are terminated on the unhealthy node, a new virtual machine instance (VMI) is automatically created on a healthy node for each relocated VM. To confirm that the VMI was created, view all VMIs by using the oc CLI. 11.4.3.1. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis -A | [
"oc edit vm <vm_name> -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: evictionStrategy: LiveMigrateIfPossible 1",
"virtctl restart <vm_name> -n <namespace>",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: evictionStrategy: LiveMigrate",
"oc edit vm <vm_name> -n <namespace>",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: runStrategy: Always",
"\"486\" Conroe athlon core2duo coreduo kvm32 kvm64 n270 pentium pentium2 pentium3 pentiumpro phenom qemu32 qemu64",
"apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc",
"aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave",
"aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: obsoleteCPUs: cpuModels: 1 - \"<obsolete_cpu_1>\" - \"<obsolete_cpu_2>\" minCPUModel: \"<minimum_cpu_model>\" 2",
"oc annotate node <node_name> node-labeller.kubevirt.io/skip-node=true 1",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get vmis -A"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/virtualization/nodes |
Chapter 6. Management of OSDs using the Ceph Orchestrator | Chapter 6. Management of OSDs using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrators to manage OSDs of a Red Hat Ceph Storage cluster. 6.1. Ceph OSDs When a Red Hat Ceph Storage cluster is up and running, you can add OSDs to the storage cluster at runtime. A Ceph OSD generally consists of one ceph-osd daemon for one storage drive and its associated journal within a node. If a node has multiple storage drives, then map one ceph-osd daemon for each drive. Red Hat recommends checking the capacity of a cluster regularly to see if it is reaching the upper end of its storage capacity. As a storage cluster reaches its near full ratio, add one or more OSDs to expand the storage cluster's capacity. When you want to reduce the size of a Red Hat Ceph Storage cluster or replace the hardware, you can also remove an OSD at runtime. If the node has multiple storage drives, you might also need to remove one of the ceph-osd daemon for that drive. Generally, it's a good idea to check the capacity of the storage cluster to see if you are reaching the upper end of its capacity. Ensure that when you remove an OSD that the storage cluster is not at its near full ratio. Important Do not let a storage cluster reach the full ratio before adding an OSD. OSD failures that occur after the storage cluster reaches the near full ratio can cause the storage cluster to exceed the full ratio. Ceph blocks write access to protect the data until you resolve the storage capacity issues. Do not remove OSDs without considering the impact on the full ratio first. 6.2. Ceph OSD node configuration Configure Ceph OSDs and their supporting hardware similarly as a storage strategy for the pool(s) that will use the OSDs. Ceph prefers uniform hardware across pools for a consistent performance profile. For best performance, consider a CRUSH hierarchy with drives of the same type or size. If you add drives of dissimilar size, adjust their weights accordingly. When you add the OSD to the CRUSH map, consider the weight for the new OSD. Hard drive capacity grows approximately 40% per year, so newer OSD nodes might have larger hard drives than older nodes in the storage cluster, that is, they might have a greater weight. Before doing a new installation, review the Requirements for Installing Red Hat Ceph Storage chapter in the Installation Guide . 6.3. Automatically tuning OSD memory The OSD daemons adjust the memory consumption based on the osd_memory_target configuration option. The option osd_memory_target sets OSD memory based upon the available RAM in the system. If Red Hat Ceph Storage is deployed on dedicated nodes that do not share memory with other services, cephadm automatically adjusts the per-OSD consumption based on the total amount of RAM and the number of deployed OSDs. Important By default, the osd_memory_target_autotune parameter is set to true in the Red Hat Ceph Storage cluster. Syntax Cephadm starts with a fraction mgr/cephadm/autotune_memory_target_ratio , which defaults to 0.7 of the total RAM in the system, subtract off any memory consumed by non-autotuned daemons such as non-OSDS and for OSDs for which osd_memory_target_autotune is false, and then divide by the remaining OSDs. 
The osd_memory_target parameter is calculated as follows: Syntax SPACE_ALLOCATED_FOR_OTHER_DAEMONS may optionally include the following daemon space allocations: Alertmanager: 1 GB Grafana: 1 GB Ceph Manager: 4 GB Ceph Monitor: 2 GB Node-exporter: 1 GB Prometheus: 1 GB For example, if a node has 24 OSDs and has 251 GB RAM space, then osd_memory_target is 7860684936 . The final targets are reflected in the configuration database with options. You can view the limits and the current memory consumed by each daemon from the ceph orch ps output under MEM LIMIT column. Note The default setting of osd_memory_target_autotune true is unsuitable for hyperconverged infrastructures where compute and Ceph storage services are colocated. In a hyperconverged infrastructure, the autotune_memory_target_ratio can be set to 0.2 to reduce the memory consumption of Ceph. Example You can manually set a specific memory target for an OSD in the storage cluster. Example You can manually set a specific memory target for an OSD host in the storage cluster. Syntax Example Note Enabling osd_memory_target_autotune overwrites existing manual OSD memory target settings. To prevent daemon memory from being tuned even when the osd_memory_target_autotune option or other similar options are enabled, set the _no_autotune_memory label on the host. Syntax You can exclude an OSD from memory autotuning by disabling the autotune option and setting a specific memory target. Example 6.4. Listing devices for Ceph OSD deployment You can check the list of available devices before deploying OSDs using the Ceph Orchestrator. The commands are used to print a list of devices discoverable by Cephadm. A storage device is considered available if all of the following conditions are met: The device must have no partitions. The device must not have any LVM state. The device must not be mounted. The device must not contain a file system. The device must not contain a Ceph BlueStore OSD. The device must be larger than 5 GB. Note Ceph will not provision an OSD on a device that is not available. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Using the --wide option provides all details relating to the device, including any reasons that the device might not be eligible for use as an OSD. This option does not support NVMe devices. Optional: To enable Health , Ident , and Fault fields in the output of ceph orch device ls , run the following commands: Note These fields are supported by libstoragemgmt library and currently supports SCSI, SAS, and SATA devices. As root user outside the Cephadm shell, check your hardware's compatibility with libstoragemgmt library to avoid unplanned interruption to services: Example In the output, you see the Health Status as Good with the respective SCSI VPD 0x83 ID. Note If you do not get this information, then enabling the fields might cause erratic behavior of devices. Log back into the Cephadm shell and enable libstoragemgmt support: Example Once this is enabled, ceph orch device ls gives the output of Health field as Good . Verification List the devices: Example 6.5. Zapping devices for Ceph OSD deployment You need to check the list of available devices before deploying OSDs. If there is no space available on the devices, you can clear the data on the devices by zapping them. Prerequisites A running Red Hat Ceph Storage cluster. 
Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Clear the data of a device: Syntax Example Verification Verify the space is available on the device: Example You will see that the field under Available is Yes . Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide for more information. 6.6. Deploying Ceph OSDs on all available devices You can deploy OSDs on all the available devices. Cephadm allows the Ceph Orchestrator to discover and deploy the OSDs on any available and unused storage device. To deploy OSDs on all available devices, run the command without the unmanaged parameter, and then re-run the command with the parameter to prevent the creation of future OSDs. Note The deployment of OSDs with --all-available-devices is generally used for smaller clusters. For larger clusters, use the OSD specification file. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Deploy OSDs on all available devices: Example The effect of ceph orch apply is persistent, which means that the Orchestrator automatically finds the device, adds it to the cluster, and creates new OSDs. This occurs under the following conditions: New disks or drives are added to the system. Existing disks or drives are zapped. An OSD is removed and the devices are zapped. You can disable automatic creation of OSDs on all the available devices by using the --unmanaged parameter. Example Setting the --unmanaged parameter to true disables the automatic creation of OSDs, and there is also no change if you apply a new OSD service. Note The command ceph orch daemon add creates new OSDs, but does not add an OSD service. Verification List the service: Example View the details of the node and devices: Example Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.7. Deploying Ceph OSDs on specific devices and hosts You can deploy all the Ceph OSDs on specific devices and hosts using the Ceph Orchestrator. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure Log into the Cephadm shell: Example List the available devices to deploy OSDs: Syntax Example Deploy OSDs on specific devices and hosts: Syntax Example To deploy OSDs on a raw physical device, without an LVM layer, use the --method raw option. Syntax Example Note If you have separate DB or WAL devices, the ratio of block to DB or WAL devices MUST be 1:1. Verification List the service: Example View the details of the node and devices: Example List the hosts, daemons, and processes: Syntax Example Additional Resources See the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.8. Advanced service specifications and filters for deploying OSDs A service specification of type OSD is a way to describe a cluster layout using the properties of disks. It gives the user an abstract way to tell Ceph which disks should turn into an OSD with the required configuration, without knowing the specifics of device names and paths. For each device and each host, define a yaml file or a json file.
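For orientation, the following is a minimal sketch of such a specification file. It is only an illustration and not part of the original procedure; the service_id value default_drive_group is a hypothetical name, and the osd_host label is assumed to be already assigned to the intended hosts:
service_type: osd
service_id: default_drive_group
placement:
  label: 'osd_host'
data_devices:
  all: true
The general settings and the device filters that such a file can contain are described next.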
General settings for OSD specifications service_type : 'osd': This is mandatory to create OSDs. service_id : Use the service name or identification you prefer. A set of OSDs is created using the specification file. This name is used to manage all the OSDs together and represents an Orchestrator service. placement : This is used to define the hosts on which the OSDs need to be deployed. You can use one of the following options: host_pattern : '*' - A host name pattern used to select hosts. label : 'osd_host' - A label used in the hosts where OSDs need to be deployed. hosts : 'host01', 'host02' - An explicit list of host names where OSDs need to be deployed. selection of devices : The devices where OSDs are created. This allows you to place the components of an OSD on different devices. You can create only BlueStore OSDs, which have three components: OSD data: contains all the OSD data WAL: BlueStore internal journal or write-ahead log DB: BlueStore internal metadata data_devices : Define the devices to deploy OSDs. In this case, OSDs are created in a collocated schema. You can use filters to select devices and folders. wal_devices : Define the devices used for WAL OSDs. You can use filters to select devices and folders. db_devices : Define the devices for DB OSDs. You can use the filters to select devices and folders. encrypted : An optional parameter to encrypt information on the OSD, which can be set to either True or False unmanaged : An optional parameter, set to False by default. You can set it to True if you do not want the Orchestrator to manage the OSD service. block_wal_size : User-defined value, in bytes. block_db_size : User-defined value, in bytes. osds_per_device : User-defined value for deploying more than one OSD per device. method : An optional parameter to specify if an OSD is created with an LVM layer or not. Set to raw if you want to create OSDs on raw physical devices that do not include an LVM layer. If you have separate DB or WAL devices, the ratio of block to DB or WAL devices MUST be 1:1. Filters for specifying devices Filters are used in conjunction with the data_devices , wal_devices and db_devices parameters. Name of the filter Description Syntax Example Model Target specific disks. You can get details of the model by running the lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT,SIZE,MODEL command or smartctl -i / DEVICE_PATH Model: DISK_MODEL_NAME Model: MC-55-44-XZ Vendor Target specific disks Vendor: DISK_VENDOR_NAME Vendor: Vendor Cs Size Specification Includes disks of an exact size size: EXACT size: '10G' Size Specification Includes disks whose size is within the range size: LOW:HIGH size: '10G:40G' Size Specification Includes disks less than or equal to the given size size: :HIGH size: ':10G' Size Specification Includes disks equal to or greater than the given size size: LOW: size: '40G:' Rotational Rotational attribute of the disk. 1 matches all disks that are rotational and 0 matches all the disks that are non-rotational. If rotational=0 , then the OSD is configured with SSD or NVMe devices. If rotational=1 , then the OSD is configured with HDDs. rotational: 0 or 1 rotational: 0 All Considers all the available disks all: true all: true Limiter When you have specified valid filters but want to limit the number of matching disks, you can use the 'limit' directive. It should be used only as a last resort. limit: NUMBER limit: 2 Note To create an OSD with non-collocated components in the same host, you have to specify the different types of devices used, and the devices must be on the same host.
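As an illustration of the preceding note, the following sketch combines the rotational filter with a size filter to create non-collocated OSDs that keep data on rotational drives and place the DB on larger non-rotational drives. The service_id and the 200G size threshold are hypothetical values chosen only for this example and must be adapted to the actual hardware:
service_type: osd
service_id: non_collocated_example
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0
  size: '200G:'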
Note The devices used for deploying OSDs must be supported by libstoragemgmt . Additional Resources See the Deploying Ceph OSDs using the advanced specifications section in the Red Hat Ceph Storage Operations Guide . For more information on libstoragemgmt , see the Listing devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide . 6.9. Deploying Ceph OSDs using advanced service specifications The service specification of type OSD is a way to describe a cluster layout using the properties of disks. It gives the user an abstract way to tell Ceph which disks should turn into an OSD with the required configuration, without knowing the specifics of device names and paths. You can deploy the OSD for each device and each host by defining a yaml file or a json file. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. All manager and monitor daemons are deployed. Procedure On the monitor node, create the osd_spec.yaml file: Example Edit the osd_spec.yaml file to include the following details: Syntax Simple scenarios: In these cases, all the nodes have the same set-up. Example Example Simple scenario: In this case, all the nodes have the same setup with OSD devices created in raw mode, without an LVM layer. Example Advanced scenario: This would create the desired layout by using all HDDs as data_devices with two SSDs assigned as dedicated DB or WAL devices. The remaining SSDs are data_devices that have NVMe vendor devices assigned as dedicated DB or WAL devices. Example Advanced scenario with non-uniform nodes: This applies different OSD specs to different hosts depending on the host_pattern key. Example Advanced scenario with dedicated WAL and DB devices: Example Advanced scenario with multiple OSDs per device: Example For pre-created volumes, edit the osd_spec.yaml file to include the following details: Syntax Example For OSDs by ID, edit the osd_spec.yaml file to include the following details: Note This configuration is applicable for Red Hat Ceph Storage 5.3z1 and later releases. For earlier releases, use pre-created lvm. Syntax Example For OSDs by path, edit the osd_spec.yaml file to include the following details: Note This configuration is applicable for Red Hat Ceph Storage 5.3z1 and later releases. For earlier releases, use pre-created lvm. Syntax Example Mount the YAML file under a directory in the container: Example Navigate to the directory: Example Before deploying OSDs, do a dry run: Note This step gives a preview of the deployment, without deploying the daemons. Example Deploy OSDs using the service specification: Syntax Example Verification List the service: Example View the details of the node and devices: Example Additional Resources See the Advanced service specifications and filters for deploying OSDs section in the Red Hat Ceph Storage Operations Guide . 6.10. Removing the OSD daemons using the Ceph Orchestrator You can remove the OSD from a cluster by using Cephadm. Removing an OSD from a cluster involves two steps: Evacuates all placement groups (PGs) from the cluster. Removes the PG-free OSDs from the cluster. The --zap option removes the volume groups, logical volumes, and the LVM metadata. Note After removing OSDs, if the drives the OSDs were deployed on once again become available, cephadm might automatically try to deploy more OSDs on these drives if they match an existing drivegroup specification.
If you deployed the OSDs you are removing with a spec and do not want any new OSDs deployed on the drives after removal, modify the drivegroup specification before removal. While deploying OSDs, if you have used the --all-available-devices option, set unmanaged: true to stop it from picking up new drives at all. For other deployments, modify the specification. See the Deploying Ceph OSDs using advanced service specifications for more details. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Ceph Monitor, Ceph Manager and Ceph OSD daemons are deployed on the storage cluster. Procedure Log into the Cephadm shell: Example Check the device and the node from which the OSD has to be removed: Example Remove the OSD: Syntax Example Note If you remove the OSD from the storage cluster without an option, such as --replace , the device is removed from the storage cluster completely. If you want to use the same device for deploying OSDs, you have to first zap the device before adding it to the storage cluster. Optional: To remove multiple OSDs from a specific node, run the following command: Syntax Example Check the status of the OSD removal: Example When no PGs are left on the OSD, it is decommissioned and removed from the cluster. Verification Verify the details of the devices and the nodes from which the Ceph OSDs are removed: Example Additional Resources See the Deploying Ceph OSDs on all available devices section in the Red Hat Ceph Storage Operations Guide for more information. See the Deploying Ceph OSDs on specific devices and hosts section in the Red Hat Ceph Storage Operations Guide for more information. See the Zapping devices for Ceph OSD deployment section in the Red Hat Ceph Storage Operations Guide for more information on clearing space on devices. 6.11. Replacing the OSDs using the Ceph Orchestrator When disks fail, you can replace the physical storage device and reuse the same OSD ID to avoid having to reconfigure the CRUSH map. You can replace the OSDs from the cluster using the --replace option. Note If you want to replace a single OSD, see Deploying Ceph OSDs on specific devices and hosts . If you want to deploy OSDs on all available devices, see Deploying Ceph OSDs on all available devices . This option preserves the OSD ID using the ceph orch rm command. The OSD is not permanently removed from the CRUSH hierarchy, but is assigned the destroyed flag. The destroyed flag is used to determine which OSD IDs can be reused in the next OSD deployment. Similar to the rm command, replacing an OSD from a cluster involves two steps: Evacuating all placement groups (PGs) from the cluster. Removing the PG-free OSD from the cluster. If you use an OSD specification for deployment, your newly added disk is assigned the OSD ID of its replaced counterpart. Note After removing OSDs, if the drives the OSDs were deployed on once again become available, cephadm might automatically try to deploy more OSDs on these drives if they match an existing drivegroup specification. If you deployed the OSDs you are removing with a spec and do not want any new OSDs deployed on the drives after removal, modify the drivegroup specification before removal. While deploying OSDs, if you have used the --all-available-devices option, set unmanaged: true to stop it from picking up new drives at all. For other deployments, modify the specification.
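For example, if the removed OSDs were originally created with the all-available-devices service, the command shown earlier in this chapter can be re-applied with the unmanaged flag so that cephadm does not immediately recreate OSDs on the freed drives:
ceph orch apply osd --all-available-devices --unmanaged=true
For spec-based deployments, setting unmanaged: true in the drivegroup specification and re-applying it has the same effect.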
See the Deploying Ceph OSDs using advanced service specifications for more details. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager, and OSD daemons are deployed on the storage cluster. A new OSD that replaces the removed OSD must be created on the same host from which the OSD was removed. Procedure Log into the Cephadm shell: Example Ensure that you dump and save a mapping of your OSD configurations for future reference: Example Check the device and the node from which the OSD has to be replaced: Example Replace the OSD: Important If the storage cluster has health_warn or other errors associated with it, check and try to fix any errors before replacing the OSD to avoid data loss. Syntax The --force option can be used when there are ongoing operations on the storage cluster. Example Check the status of the OSD replacement: Example Pause the Orchestrator so that it does not apply any existing OSD specification: Example Zap the OSD devices that have been removed: Example Resume the Orchestrator from pause mode: Example Check the status of the OSD replacement: Example Verification Verify the details of the devices and the nodes from which the Ceph OSDs are replaced: Example You can see an OSD with the same ID as the one you replaced running on the same host. Verify that the db_device for the newly deployed OSDs is the replaced db_device : Example Additional Resources See the Deploying Ceph OSDs on all available devices section in the Red Hat Ceph Storage Operations Guide for more information. See the Deploying Ceph OSDs on specific devices and hosts section in the Red Hat Ceph Storage Operations Guide for more information. 6.12. Replacing the OSDs with pre-created LVM After purging the OSD with the ceph-volume lvm zap command, if the directory is not present, then you can replace the OSDs with the OSD service specification file and the pre-created LVM. Prerequisites A running Red Hat Ceph Storage cluster. Failed OSD Procedure Log into the Cephadm shell: Example Remove the OSD: Syntax Example Verify the OSD is destroyed: Example Zap and remove the OSD using the ceph-volume command: Syntax Example Check the OSD topology: Example Recreate the OSD with a specification file corresponding to that specific OSD topology: Example Apply the updated specification file: Example Verify the OSD is back: Example 6.13. Replacing the OSDs in a non-colocated scenario When an OSD fails in a non-colocated scenario, you can replace the WAL/DB devices. The procedure is the same for DB and WAL devices. You need to edit the paths under db_devices for DB devices and the paths under wal_devices for WAL devices. Prerequisites A running Red Hat Ceph Storage cluster. Daemons are non-colocated. Failed OSD Procedure Identify the devices in the cluster: Example Log into the Cephadm shell: Example Identify the OSDs and their DB device: Example In the osds.yaml file, set the unmanaged parameter to true , otherwise cephadm redeploys the OSDs: Example Apply the updated specification file: Example Check the status: Example Remove the OSDs. Ensure that you use the --zap option to remove the backend services and the --replace option to retain the OSD IDs: Example Check the status: Example Edit the osds.yaml specification file to change the unmanaged parameter to false and replace the path to the DB device if it has changed after the device got physically replaced: Example In the above example, /dev/sdh is replaced with /dev/sde.
Important If you use the same host specification file to replace the faulty DB device on a single OSD node, modify the host_pattern option to specify only the OSD node; otherwise, the deployment fails and you cannot find the new DB device on other hosts. Reapply the specification file with the --dry-run option to ensure that the OSDs will be deployed with the new DB device: Example Apply the specification file: Example Check that the OSDs are redeployed: Example Verification From the OSD host where the OSDs are redeployed, verify whether they are on the new DB device: Example 6.14. Stopping the removal of the OSDs using the Ceph Orchestrator You can stop the removal of only the OSDs that are queued for removal. This resets the initial state of the OSD and takes it off the removal queue. If the OSD is in the process of removal, then you cannot stop the process. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager and OSD daemons are deployed on the cluster. Remove OSD process initiated. Procedure Log into the Cephadm shell: Example Check the device and the node from which the OSD was initiated to be removed: Example Stop the removal of the queued OSD: Syntax Example Check the status of the OSD removal: Example Verification Verify the details of the devices and the nodes from which the Ceph OSDs were queued for removal: Example Additional Resources See the Removing the OSD daemons using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 6.15. Activating the OSDs using the Ceph Orchestrator You can activate the OSDs in the cluster in cases where the operating system of the host was reinstalled. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Monitor, Manager and OSD daemons are deployed on the storage cluster. Procedure Log into the Cephadm shell: Example After the operating system of the host is reinstalled, activate the OSDs: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 6.16. Observing the data migration When you add an OSD to or remove an OSD from the CRUSH map, Ceph begins rebalancing the data by migrating placement groups to the new or existing OSD(s). You can observe the data migration by using the ceph -w command. Prerequisites A running Red Hat Ceph Storage cluster. Recently added or removed an OSD. Procedure To observe the data migration: Example Watch as the placement group states change from active+clean to active, some degraded objects , and finally active+clean when migration completes. To exit the utility, press Ctrl + C . 6.17. Recalculating the placement groups Placement groups (PGs) define the spread of any pool data across the available OSDs. A placement group is built upon the redundancy algorithm that is to be used. For a 3-way replication, the redundancy is defined to use three different OSDs. For erasure-coded pools, the number of OSDs to use is defined by the number of chunks. When defining a pool, the number of placement groups defines the granularity with which the data is spread across all available OSDs. The higher the number, the better the equalization of the capacity load. However, since handling the placement groups is also important in case of reconstruction of data, it is important to choose the number carefully upfront. To support the calculation, a tool, the PG calculator, is available. During the lifetime of a storage cluster, a pool may grow above the initially anticipated limits.
With a growing number of drives, a recalculation is recommended. The number of placement groups per OSD should be around 100. When adding more OSDs to the storage cluster, the number of PGs per OSD decreases over time. Starting with 120 drives in the storage cluster and setting the pg_num of the pool to 4000 results in 100 PGs per OSD, given a replication factor of three. Over time, when growing to ten times the number of OSDs, the number of PGs per OSD will go down to ten only. Because a small number of PGs per OSD tends to result in unevenly distributed capacity, consider adjusting the number of PGs per pool. Adjusting the number of placement groups can be done online. Recalculating is not only a recalculation of the PG numbers, but also involves data relocation, which is a lengthy process. However, data availability is maintained at all times. Very high numbers of PGs per OSD should be avoided, because reconstruction of all PGs on a failed OSD starts at once. A high number of IOPS is required to perform reconstruction in a timely manner, which might not be available. This would lead to deep I/O queues and high latency, rendering the storage cluster unusable, or result in long healing times. Additional Resources See the PG calculator for calculating the values for a given use case. See the Erasure Code Pools chapter in the Red Hat Ceph Storage Strategies Guide for more information. | [
"ceph config set osd osd_memory_target_autotune true",
"osd_memory_target = TOTAL_RAM_OF_THE_OSD * (1048576) * (autotune_memory_target_ratio) / NUMBER_OF_OSDS_IN_THE_OSD_NODE - ( SPACE_ALLOCATED_FOR_OTHER_DAEMONS )",
"ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2",
"ceph config set osd.123 osd_memory_target 7860684936",
"ceph config set osd/host: HOSTNAME osd_memory_target TARGET_BYTES",
"ceph config set osd/host:host01 osd_memory_target 1000000000",
"ceph orch host label add HOSTNAME _no_autotune_memory",
"ceph config set osd.123 osd_memory_target_autotune false ceph config set osd.123 osd_memory_target 16G",
"cephadm shell",
"ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"cephadm shell lsmcli ldl",
"cephadm shell ceph config set mgr mgr/cephadm/device_enhanced_scan true",
"ceph orch device ls",
"cephadm shell",
"ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch device zap HOSTNAME FILE_PATH --force",
"ceph orch device zap host02 /dev/sdb --force",
"ceph orch device ls",
"cephadm shell",
"ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch apply osd --all-available-devices",
"ceph orch apply osd --all-available-devices --unmanaged=true",
"ceph orch ls",
"ceph osd tree",
"cephadm shell",
"ceph orch device ls [--hostname= HOSTNAME_1 HOSTNAME_2 ] [--wide] [--refresh]",
"ceph orch device ls --wide --refresh",
"ceph orch daemon add osd HOSTNAME : DEVICE_PATH",
"ceph orch daemon add osd host02:/dev/sdb",
"ceph orch daemon add osd --method raw HOSTNAME : DEVICE_PATH",
"ceph orch daemon add osd --method raw host02:/dev/sdb",
"ceph orch ls osd",
"ceph osd tree",
"ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --service_name=osd",
"touch osd_spec.yaml",
"service_type: osd service_id: SERVICE_ID placement: host_pattern: '*' # optional data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH osds_per_device: NUMBER_OF_DEVICES # optional db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH encrypted: true",
"service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: all: true paths: - /dev/sdb encrypted: true",
"service_type: osd service_id: osd_spec_default placement: host_pattern: '*' data_devices: size: '80G' db_devices: size: '40G:' paths: - /dev/sdc",
"service_type: osd service_id: all-available-devices encrypted: \"true\" method: raw placement: host_pattern: \"*\" data_devices: all: \"true\"",
"service_type: osd service_id: osd_spec_hdd placement: host_pattern: '*' data_devices: rotational: 0 db_devices: model: Model-name limit: 2 --- service_type: osd service_id: osd_spec_ssd placement: host_pattern: '*' data_devices: model: Model-name db_devices: vendor: Vendor-name",
"service_type: osd service_id: osd_spec_node_one_to_five placement: host_pattern: 'node[1-5]' data_devices: rotational: 1 db_devices: rotational: 0 --- service_type: osd service_id: osd_spec_six_to_ten placement: host_pattern: 'node[6-10]' data_devices: model: Model-name db_devices: model: Model-name",
"service_type: osd service_id: osd_using_paths placement: hosts: - host01 - host02 data_devices: paths: - /dev/sdb db_devices: paths: - /dev/sdc wal_devices: paths: - /dev/sdd",
"service_type: osd service_id: multiple_osds placement: hosts: - host01 - host02 osds_per_device: 4 data_devices: paths: - /dev/sdb",
"service_type: osd service_id: SERVICE_ID placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH",
"service_type: osd service_id: osd_spec placement: hosts: - machine1 data_devices: paths: - /dev/vg_hdd/lv_hdd db_devices: paths: - /dev/vg_nvme/lv_nvme",
"service_type: osd service_id: OSD_BY_ID_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH",
"service_type: osd service_id: osd_by_id_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 db_devices: paths: - /dev/disk/by-id/nvme-nvme.1b36-31323334-51454d55204e564d65204374726c-00000001",
"service_type: osd service_id: OSD_BY_PATH_HOSTNAME placement: hosts: - HOSTNAME data_devices: # optional model: DISK_MODEL_NAME # optional paths: - / DEVICE_PATH db_devices: # optional size: # optional all: true # optional paths: - / DEVICE_PATH",
"service_type: osd service_id: osd_by_path_host01 placement: hosts: - host01 data_devices: paths: - /dev/disk/by-path/pci-0000:0d:00.0-scsi-0:0:0:4 db_devices: paths: - /dev/disk/by-path/pci-0000:00:02.0-nvme-1",
"cephadm shell --mount osd_spec.yaml:/var/lib/ceph/osd/osd_spec.yaml",
"cd /var/lib/ceph/osd/",
"ceph orch apply -i osd_spec.yaml --dry-run",
"ceph orch apply -i FILE_NAME .yml",
"ceph orch apply -i osd_spec.yaml",
"ceph orch ls osd",
"ceph osd tree",
"cephadm shell",
"ceph osd tree",
"ceph orch osd rm OSD_ID [--replace] [--force] --zap",
"ceph orch osd rm 0 --zap",
"ceph orch osd rm OSD_ID OSD_ID --zap",
"ceph orch osd rm 2 5 --zap",
"ceph orch osd rm status OSD HOST STATE PGS REPLACE FORCE ZAP DRAIN STARTED AT 9 host01 done, waiting for purge 0 False False True 2023-06-06 17:50:50.525690 10 host03 done, waiting for purge 0 False False True 2023-06-06 17:49:38.731533 11 host02 done, waiting for purge 0 False False True 2023-06-06 17:48:36.641105",
"ceph osd tree",
"cephadm shell",
"ceph osd metadata -f plain | grep device_paths \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdi=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sde=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:1,sdf=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdg=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdh=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdd=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:2,sdk=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:2\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdl=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdj=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", \"device_paths\": \"sdc=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:0:0:3,sdm=/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:3\", [.. output omitted ..]",
"ceph osd tree",
"ceph orch osd rm OSD_ID --replace [--force]",
"ceph orch osd rm 0 --replace",
"ceph orch osd rm status",
"ceph orch pause ceph orch status Backend: cephadm Available: Yes Paused: Yes",
"ceph orch device zap node.example.com /dev/sdi --force zap successful for /dev/sdi on node.example.com ceph orch device zap node.example.com /dev/sdf --force zap successful for /dev/sdf on node.example.com",
"ceph orch resume",
"ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.77112 root default -3 0.77112 host node 0 hdd 0.09639 osd.0 up 1.00000 1.00000 1 hdd 0.09639 osd.1 up 1.00000 1.00000 2 hdd 0.09639 osd.2 up 1.00000 1.00000 3 hdd 0.09639 osd.3 up 1.00000 1.00000 4 hdd 0.09639 osd.4 up 1.00000 1.00000 5 hdd 0.09639 osd.5 up 1.00000 1.00000 6 hdd 0.09639 osd.6 up 1.00000 1.00000 7 hdd 0.09639 osd.7 up 1.00000 1.00000 [.. output omitted ..]",
"ceph osd tree",
"ceph osd metadata 0 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\", ceph osd metadata 1 | grep bluefs_db_devices \"bluefs_db_devices\": \"nvme0n1\",",
"cephadm shell",
"ceph orch osd rm OSD_ID [--replace]",
"ceph orch osd rm 8 --replace Scheduled OSD(s) for removal",
"ceph osd tree ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF -1 0.32297 root default -9 0.05177 host host10 3 hdd 0.01520 osd.3 up 1.00000 1.00000 13 hdd 0.02489 osd.13 up 1.00000 1.00000 17 hdd 0.01169 osd.17 up 1.00000 1.00000 -13 0.05177 host host11 2 hdd 0.01520 osd.2 up 1.00000 1.00000 15 hdd 0.02489 osd.15 up 1.00000 1.00000 19 hdd 0.01169 osd.19 up 1.00000 1.00000 -7 0.05835 host host12 20 hdd 0.01459 osd.20 up 1.00000 1.00000 21 hdd 0.01459 osd.21 up 1.00000 1.00000 22 hdd 0.01459 osd.22 up 1.00000 1.00000 23 hdd 0.01459 osd.23 up 1.00000 1.00000 -5 0.03827 host host04 1 hdd 0.01169 osd.1 up 1.00000 1.00000 6 hdd 0.01129 osd.6 up 1.00000 1.00000 7 hdd 0.00749 osd.7 up 1.00000 1.00000 9 hdd 0.00780 osd.9 up 1.00000 1.00000 -3 0.03816 host host05 0 hdd 0.01169 osd.0 up 1.00000 1.00000 8 hdd 0.01129 osd.8 destroyed 0 1.00000 12 hdd 0.00749 osd.12 up 1.00000 1.00000 16 hdd 0.00769 osd.16 up 1.00000 1.00000 -15 0.04237 host host06 5 hdd 0.01239 osd.5 up 1.00000 1.00000 10 hdd 0.01540 osd.10 up 1.00000 1.00000 11 hdd 0.01459 osd.11 up 1.00000 1.00000 -11 0.04227 host host07 4 hdd 0.01239 osd.4 up 1.00000 1.00000 14 hdd 0.01529 osd.14 up 1.00000 1.00000 18 hdd 0.01459 osd.18 up 1.00000 1.00000",
"ceph-volume lvm zap --osd-id OSD_ID",
"ceph-volume lvm zap --osd-id 8 Zapping: /dev/vg1/data-lv2 Closing encrypted path /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/sbin/cryptsetup remove /dev/mapper/l4D6ql-Prji-IzH4-dfhF-xzuf-5ETl-jNRcXC Running command: /usr/bin/dd if=/dev/zero of=/dev/vg1/data-lv2 bs=1M count=10 conv=fsync stderr: 10+0 records in 10+0 records out stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.034742 s, 302 MB/s Zapping successful for OSD: 8",
"ceph-volume lvm list",
"cat osd.yml service_type: osd service_id: osd_service placement: hosts: - host03 data_devices: paths: - /dev/vg1/data-lv2 db_devices: paths: - /dev/vg1/db-lv1",
"ceph orch apply -i osd.yml Scheduled osd.osd_service update",
"ceph -s ceph osd tree",
"lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 20G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 19G 0 part ├─rhel-root 253:0 0 17G 0 lvm / └─rhel-swap 253:1 0 2G 0 lvm [SWAP] sdb 8:16 0 10G 0 disk └─ceph--5726d3e9--4fdb--4eda--b56a--3e0df88d663f-osd--block--3ceb89ec--87ef--46b4--99c6--2a56bac09ff0 253:2 0 10G 0 lvm sdc 8:32 0 10G 0 disk └─ceph--d7c9ab50--f5c0--4be0--a8fd--e0313115f65c-osd--block--37c370df--1263--487f--a476--08e28bdbcd3c 253:4 0 10G 0 lvm sdd 8:48 0 10G 0 disk ├─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--31b20150--4cbc--4c2c--9c8f--6f624f3bfd89 253:7 0 2.5G 0 lvm └─ceph--1774f992--44f9--4e78--be7b--b403057cf5c3-osd--db--1bee5101--dbab--4155--a02c--e5a747d38a56 253:9 0 2.5G 0 lvm sde 8:64 0 10G 0 disk sdf 8:80 0 10G 0 disk └─ceph--412ee99b--4303--4199--930a--0d976e1599a2-osd--block--3a99af02--7c73--4236--9879--1fad1fe6203d 253:6 0 10G 0 lvm sdg 8:96 0 10G 0 disk └─ceph--316ca066--aeb6--46e1--8c57--f12f279467b4-osd--block--58475365--51e7--42f2--9681--e0c921947ae6 253:8 0 10G 0 lvm sdh 8:112 0 10G 0 disk ├─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--0dfe6eca--ba58--438a--9510--d96e6814d853 253:3 0 5G 0 lvm └─ceph--d7064874--66cb--4a77--a7c2--8aa0b0125c3c-osd--db--26b70c30--8817--45de--8843--4c0932ad2429 253:5 0 5G 0 lvm sr0",
"cephadm shell",
"ceph-volume lvm list /dev/sdh ====== osd.2 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 block device /dev/ceph-5726d3e9-4fdb-4eda-b56a-3e0df88d663f/osd-block-3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 block uuid GkWLoo-f0jd-Apj2-Zmwj-ce0h-OY6J-UuW8aD cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-0dfe6eca-ba58-438a-9510-d96e6814d853 db uuid 6gSPoc-L39h-afN3-rDl6-kozT-AX9S-XR20xM encrypted 0 osd fsid 3ceb89ec-87ef-46b4-99c6-2a56bac09ff0 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh ====== osd.5 ======= [db] /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 block device /dev/ceph-d7c9ab50-f5c0-4be0-a8fd-e0313115f65c/osd-block-37c370df-1263-487f-a476-08e28bdbcd3c block uuid Eay3I7-fcz5-AWvp-kRcI-mJaH-n03V-Zr0wmJ cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-d7064874-66cb-4a77-a7c2-8aa0b0125c3c/osd-db-26b70c30-8817-45de-8843-4c0932ad2429 db uuid mwSohP-u72r-DHcT-BPka-piwA-lSwx-w24N0M encrypted 0 osd fsid 37c370df-1263-487f-a476-08e28bdbcd3c osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sdh",
"cat osds.yml service_type: osd service_id: non-colocated unmanaged: true placement: host_pattern: 'ceph*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sdh",
"ceph orch apply -i osds.yml Scheduled osd.non-colocated update",
"ceph orch ls NAME PORTS RUNNING REFRESHED AGE PLACEMENT alertmanager ?:9093,9094 1/1 9m ago 4d count:1 crash 3/4 4d ago 4d * grafana ?:3000 1/1 9m ago 4d count:1 mgr 1/2 4d ago 4d count:2 mon 3/5 4d ago 4d count:5 node-exporter ?:9100 3/4 4d ago 4d * osd.non-colocated 8 4d ago 5s <unmanaged> prometheus ?:9095 1/1 9m ago 4d count:1",
"ceph orch osd rm 2 5 --zap --replace Scheduled OSD(s) for removal",
"ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.1 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 996 KiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.0 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 destroyed osd.5",
"cat osds.yml service_type: osd service_id: non-colocated unmanaged: false placement: host_pattern: 'ceph01*' data_devices: paths: - /dev/sdb - /dev/sdc - /dev/sdf - /dev/sdg db_devices: paths: - /dev/sdd - /dev/sde",
"ceph orch apply -i osds.yml --dry-run WARNING! Dry-Runs are snapshots of a certain point in time and are bound to the current inventory setup. If any of these conditions change, the preview will be invalid. Please make sure to have a minimal timeframe between planning and applying the specs. #################### SERVICESPEC PREVIEWS #################### +---------+------+--------+-------------+ |SERVICE |NAME |ADD_TO |REMOVE_FROM | +---------+------+--------+-------------+ +---------+------+--------+-------------+ ################ OSDSPEC PREVIEWS ################ +---------+-------+-------+----------+----------+-----+ |SERVICE |NAME |HOST |DATA |DB |WAL | +---------+-------+-------+----------+----------+-----+ |osd |non-colocated |host02 |/dev/sdb |/dev/sde |- | |osd |non-colocated |host02 |/dev/sdc |/dev/sde |- | +---------+-------+-------+----------+----------+-----+",
"ceph orch apply -i osds.yml Scheduled osd.non-colocated update",
"ceph osd df tree | egrep -i \"ID|host02|osd.2|osd.5\" ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME -5 0.04877 - 55 GiB 15 GiB 4.5 MiB 0 B 60 MiB 40 GiB 27.27 1.17 - host host02 2 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.2 5 hdd 0.01219 1.00000 15 GiB 5.0 GiB 1.1 MiB 0 B 15 MiB 10 GiB 33.33 1.43 0 up osd.5",
"ceph-volume lvm list /dev/sde ====== osd.2 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 block device /dev/ceph-a4afcb78-c804-4daf-b78f-3c7ad1ed0379/osd-block-564b3d2f-0f85-4289-899a-9f98a2641979 block uuid ITPVPa-CCQ5-BbFa-FZCn-FeYt-c5N4-ssdU41 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-1998a02e-5e67-42a9-b057-e02c22bbf461 db uuid HF1bYb-fTK7-0dcB-CHzW-xvNn-dCym-KKdU5e encrypted 0 osd fsid 564b3d2f-0f85-4289-899a-9f98a2641979 osd id 2 osdspec affinity non-colocated type db vdo 0 devices /dev/sde ====== osd.5 ======= [db] /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd block device /dev/ceph-b37c8310-77f9-4163-964b-f17b4c29c537/osd-block-b42a4f1f-8e19-4416-a874-6ff5d305d97f block uuid 0LuPoz-ao7S-UL2t-BDIs-C9pl-ct8J-xh5ep4 cephx lockbox secret cluster fsid fa0bd9dc-e4c4-11ed-8db4-001a4a00046e cluster name ceph crush device class db device /dev/ceph-15ce813a-8a4c-46d9-ad99-7e0845baf15e/osd-db-6c154191-846d-4e63-8c57-fc4b99e182bd db uuid SvmXms-iWkj-MTG7-VnJj-r5Mo-Moiw-MsbqVD encrypted 0 osd fsid b42a4f1f-8e19-4416-a874-6ff5d305d97f osd id 5 osdspec affinity non-colocated type db vdo 0 devices /dev/sde",
"cephadm shell",
"ceph osd tree",
"ceph orch osd rm stop OSD_ID",
"ceph orch osd rm stop 0",
"ceph orch osd rm status",
"ceph osd tree",
"cephadm shell",
"ceph cephadm osd activate HOSTNAME",
"ceph cephadm osd activate host03",
"ceph orch ls",
"ceph orch ps --service_name= SERVICE_NAME",
"ceph orch ps --service_name=osd",
"ceph -w"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/operations_guide/management-of-osds-using-the-ceph-orchestrator |
Chapter 35. Deployment and Tools | Chapter 35. Deployment and Tools systemd component, BZ# 1178848 The systemd service cannot set cgroup properties on cgroup trees that are mounted as read-only. Consequently, the following error message can occasionally appear in the logs: You can ignore this problem, as it should not have any significant effect on your system. systemd component, BZ# 978955 When attempting to start, stop, or restart a service or unit using the systemctl [start|stop|restart] NAME command, no message is displayed to inform the user whether the action has been successful. subscription-manager component, BZ# 1166333 The Assamese (as-IN), Punjabi (pa-IN), and Korean (ko-KR) translations of subscription-manager 's user interface are incomplete. As a consequence, users of subscription-manager running in one of these locales may see labels in English rather than the configured language. systemtap component, BZ#1184374 Certain functions in the kernel are not probed as expected. To work around this issue, try to probe by a statement or by a related function. systemtap component, BZ#1183038 Certain parameters or functions are not accessible within function probes. As a consequence, the $parameter accesses can be rejected. To work around this issue, activate the systemtap prologue-searching heuristics. | [
"Failed to reset devices.list on /machine.slice: Invalid argument"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/known-issues-deployment_and_tools |
Chapter 5. address | Chapter 5. address This chapter describes the commands under the address command. 5.1. address group create Create a new Address Group Usage: Table 5.1. Positional arguments Value Summary <name> New address group name Table 5.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> New address group description --address <ip-address> Ip address or cidr (repeat option to set multiple addresses) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 5.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 5.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.2. address group delete Delete address group(s) Usage: Table 5.7. Positional arguments Value Summary <address-group> Address group(s) to delete (name or id) Table 5.8. Command arguments Value Summary -h, --help Show this help message and exit 5.3. address group list List address groups Usage: Table 5.9. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List only address groups of given name in output --project <project> List address groups according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 5.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 5.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 5.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.4. address group set Set address group properties Usage: Table 5.14. 
Positional arguments Value Summary <address-group> Address group to modify (name or id) Table 5.15. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set address group name --description <description> Set address group description --address <ip-address> Ip address or cidr (repeat option to set multiple addresses) 5.5. address group show Display address group details Usage: Table 5.16. Positional arguments Value Summary <address-group> Address group to display (name or id) Table 5.17. Command arguments Value Summary -h, --help Show this help message and exit Table 5.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 5.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.6. address group unset Unset address group properties Usage: Table 5.22. Positional arguments Value Summary <address-group> Address group to modify (name or id) Table 5.23. Command arguments Value Summary -h, --help Show this help message and exit --address <ip-address> Ip address or cidr (repeat option to unset multiple addresses) 5.7. address scope create Create a new Address Scope Usage: Table 5.24. Positional arguments Value Summary <name> New address scope name Table 5.25. Command arguments Value Summary -h, --help Show this help message and exit --ip-version {4,6} Ip version (default is 4) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share Share the address scope between projects --no-share Do not share the address scope between projects (default) Table 5.26. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 5.27. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.28. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.29. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.8. address scope delete Delete address scope(s) Usage: Table 5.30. Positional arguments Value Summary <address-scope> Address scope(s) to delete (name or id) Table 5.31. Command arguments Value Summary -h, --help Show this help message and exit 5.9. 
address scope list List address scopes Usage: Table 5.32. Command arguments Value Summary -h, --help Show this help message and exit --name <name> List only address scopes of given name in output --ip-version <ip-version> List address scopes of given ip version networks (4 or 6) --project <project> List address scopes according to their project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --share List address scopes shared between projects --no-share List address scopes not shared between projects Table 5.33. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 5.34. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 5.35. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.36. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 5.10. address scope set Set address scope properties Usage: Table 5.37. Positional arguments Value Summary <address-scope> Address scope to modify (name or id) Table 5.38. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Set address scope name --share Share the address scope between projects --no-share Do not share the address scope between projects 5.11. address scope show Display address scope details Usage: Table 5.39. Positional arguments Value Summary <address-scope> Address scope to display (name or id) Table 5.40. Command arguments Value Summary -h, --help Show this help message and exit Table 5.41. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 5.42. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 5.43. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 5.44. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack address group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--address <ip-address>] [--project <project>] [--project-domain <project-domain>] <name>",
"openstack address group delete [-h] <address-group> [<address-group> ...]",
"openstack address group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--project <project>] [--project-domain <project-domain>]",
"openstack address group set [-h] [--name <name>] [--description <description>] [--address <ip-address>] <address-group>",
"openstack address group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <address-group>",
"openstack address group unset [-h] [--address <ip-address>] <address-group>",
"openstack address scope create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--ip-version {4,6}] [--project <project>] [--project-domain <project-domain>] [--share | --no-share] <name>",
"openstack address scope delete [-h] <address-scope> [<address-scope> ...]",
"openstack address scope list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--ip-version <ip-version>] [--project <project>] [--project-domain <project-domain>] [--share | --no-share]",
"openstack address scope set [-h] [--name <name>] [--share | --no-share] <address-scope>",
"openstack address scope show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <address-scope>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/address |
25.3. Setting up the Challenge-Handshake Authentication Protocol | 25.3. Setting up the Challenge-Handshake Authentication Protocol After configuring an ACL and creating an iSCSI initiator, set up the Challenge-Handshake Authentication Protocol (CHAP). For more information on configuring an ACL and creating an iSCSI initiator, see Section 25.1.6, "Configuring ACLs" and Section 25.2, "Creating an iSCSI Initiator" . The CHAP allows the user to protect the target with a password. The initiator must be aware of this password to be able to connect to the target. Procedure 25.8. Setting up the CHAP for target Set attribute authentication: Set userid and password: Procedure 25.9. Setting up the CHAP for initiator Edit the iscsid.conf file: Enable the CHAP authentication in the iscsid.conf file: By default, the node.session.auth.authmethod option is set to None . Add target user name and password in the iscsid.conf file: Restart the iscsid service: For more information, see the targetcli and iscsiadm man pages. | [
"/iscsi/iqn.20...mple:444/tpg1> set attribute authentication=1 Parameter authentication is now '1'.",
"/iscsi/iqn.20...mple:444/tpg1> set auth userid= redhat Parameter userid is now 'redhat'. /iscsi/iqn.20...mple:444/tpg1> set auth password= redhat_passwd Parameter password is now 'redhat_passwd'.",
"vi /etc/iscsi/iscsid.conf node.session.auth.authmethod = CHAP",
"node.session.auth.username = redhat node.session.auth.password = redhat_passwd",
"systemctl restart iscsid.service"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/osm-setting-up-the-challenge-handshake-authentication-protocol |
Chapter 4. Creating ModSecurity rules | Chapter 4. Creating ModSecurity rules ModSecurity primarily functions based on custom user-defined rules. These rules determine the types of security checks that ModSecurity performs. 4.1. ModSecurity rules in the Apache request cycle You can apply rules to any of the five ModSecurity processing phases of the Apache request cycle: Request headers Apply a ModSecurity rule to this phase by specifying a REQUEST_HEADERS variable in the rule syntax. Request body Apply a ModSecurity rule to this phase by specifying a REQUEST_BODY variable in the rule syntax. Response headers Apply a ModSecurity rule to this phase by specifying a RESPONSE_HEADERS variable in the rule syntax. Response body Apply a ModSecurity rule to this phase by specifying a RESPONSE_BODY variable in the rule syntax. Logging Apply a ModSecurity rule to this phase by specifying a LOGGING variable in the rule syntax. Additional resources ModSecurity Reference Manual: Processing Phases 4.2. Structure of ModSecurity rules A ModSecurity rule typically consists of four main parts: A configuration directive One or more variables One or more operators One or more actions 4.3. ModSecurity configuration directives A ModSecurity rule starts with a configuration directive. The configuration directives for ModSecurity are similar to the Apache HTTP Server directives. You can use most ModSecurity directives within the various Apache scope directives. However, you may only use some ModSecurity directives once in the main configuration file. You must store these rules and the core rule files outside of the httpd.conf file. You can use Apache Include directives to call these rules. This facilitates the upgrade and migration of the rules. 4.4. Example of a simple ModSecurity rule You can define the following simple ModSecurity rule, for example, to check if the URI portion of a request equals a specific lowercase value: SecRule REQUEST_URI "@streq /index.php" "id:1,phase:1,t:lowercase,deny" The preceding ModSecurity rule consists of the following components: SecRule A configuration directive that creates a rule to analyze the specified variables by using the specified operator Note Most ModSecurity rules use this configuration directive. REQUEST_URI A variable that holds the full request URL including the query string data "@streq /index.php" An operator where @streq checks for string values that are equal to /index.php "id:1,phase:1,t:lowercase,deny" Actions or transformations that the rule performs Note The rule performs the lowercase action first before the rule implements the preceding operator instruction. Based on the preceding example, during phase 1 of the Apache request cycle, the rule obtains the URI portion of the HTTP request and transforms the value to lowercase. The rule then checks if the transformed value equals /index.php . If the value does equal /index.php , ModSecurity denies the request and does not process any further rules. 4.5. 
4.5. Example of a complex ModSecurity rule You can define the following complex ModSecurity rule, for example, to check whether a request contains JavaScript history manipulation methods: SecRule REQUEST_URI|REQUEST_BODY|REQUEST_HEADERS_NAMES|REQUEST_HEADERS "history.pushstate|history.replacestate" "phase:4,deny,log,msg:'history-based attacks detected'" The preceding ModSecurity rule consists of the following components: SecRule A configuration directive that creates a rule to analyze the specified variables by using the specified operator Note Most ModSecurity rules use this configuration directive. REQUEST_URI|REQUEST_BODY|REQUEST_HEADERS_NAMES|REQUEST_HEADERS A pipe-separated list of variables that define different parts of the request that the rule checks "history.pushstate|history.replacestate" A pipe-separated pair of patterns that check for the JavaScript history.pushState() and history.replaceState() methods "phase:4,deny,log,msg:'history-based attacks detected'" Actions or transformations that the rule performs if the specified operator values are found Based on the preceding example, during phase 4 of the Apache request cycle, the rule checks different parts of the request for the history.pushState() and history.replaceState() methods. If the rule finds these methods in the request URL string, request body, request header names, or request headers, the rule performs the following actions: deny Stops the rule processing and intercepts the transaction log Logs a successful match of the rule to the Apache error log file and the ModSecurity audit log msg Outputs a message defined as history-based attacks detected with the log 4.6. Additional resources ModSecurity Reference Manual: Actions ModSecurity Reference Manual: Configuration Directives ModSecurity Reference Manual: Operators ModSecurity Reference Manual: Transformation functions ModSecurity Reference Manual: Variables | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_modsecurity_guide/rules_making |
Chapter 4. Known Issues | Chapter 4. Known Issues 4.1. Reboot requirement detection The reboot requirement detection is not reliable on RHEL 8.5 and later on platform ppc64le. A failed reboot requirement detection can lead to unnecessary reboots at the end of a playbook which calls the preconfiguration roles. Role parameters are available for avoiding reboots, and playbooks can be extended to unconditionally reboot a system. See bug 2166444 for more information. 4.2. RHEL 9 packages for SAP HANA On RHEL 9, some packages are not getting installed and some are installed which are not required for SAP HANA. SAP note 3108316 has been published with some updates which were not known at the time this version of the RHEL System Roles for SAP had been finalized. The additional package that has to be installed for SAP HANA is compat-openssl11 . See this comment for instructions on how to quickly define a list of packages to be installed. 4.3. Extended check (Assert) function Be careful when using the extended check (=assert) function of the preconfigure roles. The preconfigure roles can run in an assert mode, in which case they do not modify managed nodes but report the compliance of a node with the applicable SAP notes. When using the same control node also for modifying the system configuration, by running the preconfigure roles in normal mode, extra caution needs to be applied to ensure that a "normal" playbook is not accidentally used for checking the system configuration. It is strongly recommended to only run the roles on production systems after testing them on test and QA systems first. 4.4. DNS name resolution Role sap_general_preconfigure fails if the DNS domain is not set on the managed node. In case there is no DNS domain set on the managed node, which is typically the case on cloud systems, the role sap_general_preconfigure will fail in task Verify that the DNS domain is set . To avoid this, set the role variable sap_domain in the vars section of your playbook, in an inventory file for the managed node or run the ansible-playbook command with parameter -e "sap_domain=example.com" (replace example.com by your own DNS domain name). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/red_hat_enterprise_linux_system_roles_for_sap/known_issues |
Chapter 16. Guest Virtual Machine Disk Access with Offline Tools | Chapter 16. Guest Virtual Machine Disk Access with Offline Tools 16.1. Introduction Red Hat Enterprise Linux 6 comes with tools to access, edit and create host physical machine disks or other disk images. There are several uses for these tools, including: Viewing or downloading files located on a host physical machine disk. Editing or uploading files onto a host physical machine disk. Reading or writing host physical machine configuration. Reading or writing the Windows Registry in Windows host physical machines. Preparing new disk images containing files, directories, file systems, partitions, logical volumes and other options. Rescuing and repairing host physical machines that fail to boot or those that need boot configuration changes. Monitoring disk usage of host physical machines. Auditing compliance of host physical machines, for example to organizational security standards. Deploying host physical machines by cloning and modifying templates. Reading CD and DVD ISO and floppy disk images. Warning You must never use these tools to write to a host physical machine or disk image which is attached to a running virtual machine, not even to open such a disk image in write mode. Doing so will result in disk corruption of the guest virtual machine. The tools try to prevent you from doing this, however do not catch all cases. If there is any suspicion that a guest virtual machine might be running, it is strongly recommended that the tools not be used, or at least always use the tools in read-only mode. Note Some virtualization commands in Red Hat Enterprise Linux 6 allow you to specify a remote libvirt connection. For example: However, libguestfs in Red Hat Enterprise Linux 6 cannot access remote guests, and commands using remote URLs like this do not work as expected. This affects the following Red Hat Enterprise Linux 6 commands: guestfish guestmount virt-alignment-scan virt-cat virt-copy-in virt-copy-out virt-df virt-edit virt-filesystems virt-inspector virt-inspector2 virt-list-filesystems virt-list-partitions virt-ls virt-rescue virt-sysprep virt-tar virt-tar-in virt-tar-out virt-win-reg | [
"virt-df -c qemu://remote/system -d Guest"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/chap-virtualization_administration_guide-guest_disks_libguestfs |
14.3. Problems After Installation | 14.3. Problems After Installation 14.3.1. Trouble With the Graphical Boot Sequence After you finish the installation and reboot your system for the first time, it is possible that the system stops responding during the graphical boot sequence, requiring a reset. In this case, the boot loader is displayed successfully, but selecting any entry and attempting to boot the system results in a halt. This usually means a problem with the graphical boot sequence; to solve this issue, you must disable graphical boot. To do this, temporarily alter the setting at boot time before changing it permanently. Procedure 14.3. Disabling Graphical Boot Temporarily Start your computer and wait until the boot loader menu appears. If you set your boot loader timeout period to 0, hold down the Esc key to access it. When the boot loader menu appears, use your cursor keys to highlight the entry you want to boot and press the e key to edit this entry's options. In the list of options, find the kernel line - that is, the line beginning with the keyword linux . On this line, locate the rhgb option and delete it. The option might not be immediately visible; use the cursor keys to scroll up and down. Press F10 or Ctrl + X to boot your system with the edited options. If the system started successfully, you can log in normally. Then you will need to disable graphical boot permanently - otherwise you will have to perform the procedure every time the system boots. To permanently change boot options, do the following. Procedure 14.4. Disabling Graphical Boot Permanently Log in to the root account using the su - command: Use the grubby tool to find the default GRUB2 kernel: Use the grubby tool to remove the rhgb boot option from the default kernel, identified in the last step, in your GRUB2 configuration. For example: After you finish this procedure, you can reboot your computer. Red Hat Enterprise Linux will not use the graphical boot sequence any more. If you want to enable graphical boot in the future, follow the same procedure, replacing the --remove-args="rhgb" parameter with the --args="rhgb" parameter. This will restore the rhgb boot option to the default kernel in your GRUB2 configuration. See the Red Hat Enterprise Linux 7 System Administrator's Guide for more information about working with the GRUB2 boot loader. 14.3.2. Booting into a Graphical Environment If you have installed the X Window System but are not seeing a graphical desktop environment once you log into your system, you can start it manually using the startx command. Note, however, that this is just a one-time fix and does not change the login process for future logins. To set up your system so that you can log in at a graphical login screen, you must change the default systemd target to graphical.target . When you are finished, reboot the computer. You will be presented with a graphical login prompt after the system restarts. Procedure 14.5. Setting Graphical Login as Default Open a shell prompt. If you are in your user account, become root by typing the su - command. Change the default target to graphical.target . To do this, execute the following command: Graphical login is now enabled by default - you will be presented with a graphical login prompt after the reboot. If you want to reverse this change and keep using the text-based login prompt, execute the following command as root : For more information about targets in systemd , see the Red Hat Enterprise Linux 7 System Administrator's Guide . 14.3.3.
No Graphical User Interface Present If you are having trouble getting X (the X Window System ) to start, it is possible that it has not been installed. Some of the preset base environments you can select during the installation, such as Minimal install or Web Server , do not include a graphical interface - it has to be installed manually. If you want X , you can install the necessary packages afterwards. See the Knowledgebase article at https://access.redhat.com/site/solutions/5238 for information on installing a graphical desktop environment. 14.3.4. X Server Crashing After User Logs In If you are having trouble with the X server crashing when a user logs in, one or more of your file systems can be full or nearly full. To verify that this is the problem you are experiencing, execute the following command: The output will help you diagnose which partition is full - in most cases, the problem will be on the /home partition. The following is a sample output of the df command: In the above example, you can see that the /home partition is full, which causes the crash. You can make some room on the partition by removing unneeded files. After you free up some disk space, start X using the startx command. For additional information about df and an explanation of the options available (such as the -h option used in this example), see the df(1) man page. 14.3.5. Is Your System Displaying Signal 11 Errors? A signal 11 error, commonly known as a segmentation fault , means that a program accessed a memory location that was not assigned to it. A signal 11 error can occur due to a bug in one of the software programs that is installed, or faulty hardware. If you receive a fatal signal 11 error during the installation, first make sure you are using the most recent installation images, and let Anaconda verify them to make sure they are not corrupted. Bad installation media (such as an improperly burned or scratched optical disk) are a common cause of signal 11 errors. Verifying the integrity of the installation media is recommended before every installation. For information about obtaining the most recent installation media, see Chapter 2, Downloading Red Hat Enterprise Linux . To perform a media check before the installation starts, append the rd.live.check boot option at the boot menu. See Section 23.2.2, "Verifying Boot Media" for details. Other possible causes are beyond this document's scope. Consult your hardware manufacturer's documentation for more information. 14.3.6. Unable to IPL from Network Storage Space (*NWSSTG) If you are experiencing difficulties when trying to IPL from Network Storage Space (*NWSSTG), in most cases the reason is a missing PReP partition. In this case, you must reinstall the system and make sure to create this partition during the partitioning phase or in the Kickstart file. 14.3.7. The GRUB2 next_entry variable can behave unexpectedly in a virtualized environment IBM Power System users booting their virtual environment with SLOF firmware must manually unset the next_entry grub environment variable after a system reboot. The SLOF firmware does not support block writes at boot time by design thus the bootloader is unable to clear this variable at boot time. | [
"su -",
"grubby --default-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.ppc64",
"grubby --remove-args=\"rhgb\" --update-kernel /boot/vmlinuz-3.10.0-229.4.2.el7.ppc64",
"systemctl set-default graphical.target",
"systemctl set-default multi-user.target",
"df -h",
"Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg_rhel-root 20G 6.0G 13G 32% / devtmpfs 1.8G 0 1.8G 0% /dev tmpfs 1.8G 2.7M 1.8G 1% /dev/shm tmpfs 1.8G 1012K 1.8G 1% /run tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup tmpfs 1.8G 2.6M 1.8G 1% /tmp /dev/sda1 976M 150M 760M 17% /boot /dev/dm-4 90G 90G 0 100% /home"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-trouble-after-ppc |
Chapter 17. Enabling OAuth 2.0 token-based access | Chapter 17. Enabling OAuth 2.0 token-based access Streams for Apache Kafka supports OAuth 2.0 for securing Kafka clusters by integrating with an OAUth 2.0 authorization server. Kafka brokers and clients both need to be configured to use OAuth 2.0. OAuth 2.0 enables standardized token-based authentication and authorization between applications, using a central authorization server to issue tokens that grant limited access to resources. You can define specific scopes for fine-grained access control. Scopes correspond to different levels of access to Kafka topics or operations within the cluster. OAuth 2.0 also supports single sign-on and integration with identity providers. 17.1. Configuring an OAuth 2.0 authorization server Before you can use OAuth 2.0 token-based access, you must configure an authorization server for integration with Streams for Apache Kafka. The steps are dependent on the chosen authorization server. Consult the product documentation for the authorization server for information on how to set up OAuth 2.0 access. Prepare the authorization server to work with Streams for Apache Kafka by defining OAUth 2.0 clients for Kafka and each Kafka client component of your application. In relation to the authorization server, the Kafka cluster and Kafka clients are both regarded as OAuth 2.0 clients. In general, configure OAuth 2.0 clients in the authorization server with the following client credentials enabled: Client ID (for example, kafka for the Kafka cluster) Client ID and secret as the authentication mechanism Note You only need to use a client ID and secret when using a non-public introspection endpoint of the authorization server. The credentials are not typically required when using public authorization server endpoints, as with fast local JWT token validation. 17.2. Using OAuth 2.0 token-based authentication Streams for Apache Kafka supports the use of OAuth 2.0 for token-based authentication. An OAuth 2.0 authorization server handles the granting of access and inquiries about access. Kafka clients authenticate to Kafka brokers. Brokers and clients communicate with the authorization server, as necessary, to obtain or validate access tokens. For a deployment of Streams for Apache Kafka, OAuth 2.0 integration provides the following support: Server-side OAuth 2.0 authentication for Kafka brokers Client-side OAuth 2.0 authentication for Kafka MirrorMaker, Kafka Connect, and the Kafka Bridge 17.2.1. Configuring OAuth 2.0 authentication on listeners To secure Kafka brokers with OAuth 2.0 authentication, configure a listener in the Kafka resource to use OAUth 2.0 authentication and a client authentication mechanism, and add further configuration depending on the authentication mechanism and type of token validation used in the authentication. Configuring listeners to use oauth authentication Specify a listener in the Kafka resource with an oauth authentication type. You can configure internal and external listeners. We recommend using OAuth 2.0 authentication together with TLS encryption ( tls: true ). Without encryption, the connection is vulnerable to network eavesdropping and unauthorized access through token theft. Example listener configuration with OAuth 2.0 authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #... 
Enabling SASL authentication mechanisms Use one or both of the following SASL mechanisms for clients to exchange credentials and establish authenticated sessions with Kafka. OAUTHBEARER Using the OAUTHBEARER authentication mechanism, credentials exchange uses a bearer token provided by an OAuth callback handler. Token provision can be configured to use the following methods: Client ID and secret (using the OAuth 2.0 client credentials mechanism ) Client ID and client assertion Long-lived access token or Service account token Long-lived refresh token obtained manually OAUTHBEARER is recommended as it provides a higher level of security than PLAIN , though it can only be used by Kafka clients that support the OAUTHBEARER mechanism at the protocol level. Client credentials are never shared with Kafka. PLAIN PLAIN is a simple authentication mechanism used by all Kafka client tools. Consider using PLAIN only with Kafka clients that do not support OAUTHBEARER . Using the PLAIN authentication mechanism, credentials exchange can be configured to use the following methods: Client ID and secret (using the OAuth 2.0 client credentials mechanism ) Long-lived access token Regardless of the method used, the client must provide username and password properties to Kafka. Credentials are handled centrally behind a compliant authorization server, similar to how OAUTHBEARER authentication is used. The username extraction process depends on the authorization server configuration. OAUTHBEARER is automatically enabled in the oauth listener configuration for the Kafka broker. To use the PLAIN mechanism, you must set the enablePlain property to true . In the following example, the PLAIN mechanism is enabled, and the OAUTHBEARER mechanism is disabled on a listener using the enableOauthBearer property. Example listener configuration for the PLAIN mechanism apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth enablePlain: true enableOauthBearer: false #... When you have defined the type of authentication as OAuth 2.0, you add configuration based on the type of validation, either as fast local JWT validation or token validation using an introspection endpoint. Configuring fast local JWT token validation Fast local JWT token validation involves checking a JWT token signature locally to ensure that the token meets the following criteria: Contains a typ (type) or token_type header claim value of Bearer to indicate it is an access token Is currently valid and not expired Has an issuer that matches a validIssuerURI You specify a validIssuerURI attribute when you configure the listener, so that any tokens not issued by the authorization server are rejected. The authorization server does not need to be contacted during fast local JWT token validation. You activate fast local JWT token validation by specifying a jwksEndpointUri attribute, the endpoint exposed by the OAuth 2.0 authorization server. The endpoint contains the public keys used to validate signed JWT tokens, which are sent as credentials by Kafka clients. All communication with the authorization server should be performed using TLS encryption. You can configure a certificate truststore as an OpenShift Secret in your Streams for Apache Kafka project namespace, and use the tlsTrustedCertificates property to point to the OpenShift secret containing the truststore file. 
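For illustration only, such a truststore Secret could be created from the authorization server CA certificate in PEM format before it is referenced by the listener; the secret name, file path, and namespace below are placeholders and must match the secretName and pattern values you use in tlsTrustedCertificates :

oc create secret generic oauth-server-cert \
  --from-file=ca.crt=/tmp/oauth-server-ca.crt \
  -n <streams_project_namespace>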
You might want to configure a userNameClaim to properly extract a username from the JWT token. If required, you can use a JsonPath expression like "['user.info'].['user.id']" to retrieve the username from nested JSON attributes within a token. If you want to use Kafka ACL authorization, identify the user by their username during authentication. (The sub claim in JWT tokens is typically a unique ID, not a username.) Example configuration for fast local JWT token validation #... - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: https://<auth_server_address>/<issuer-context> 2 jwksEndpointUri: https://<auth_server_address>/<path_to_jwks_endpoint> 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert pattern: "*.crt" disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10 1 Listener type set to oauth . 2 URI of the token issuer used for authentication. 3 URI of the JWKS certificate endpoint used for local JWT validation. 4 The token claim (or key) that contains the actual username used to identify the user. Its value depends on the authorization server. If necessary, a JsonPath expression like "['user.info'].['user.id']" can be used to retrieve the username from nested JSON attributes within a token. 5 (Optional) Activates the Kafka re-authentication mechanism that enforces session expiry to the same length of time as the access token. If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. 6 (Optional) Certificates stored in X.509 format within the specified secrets for TLS connection to the authorization server. 7 (Optional) Disable TLS hostname verification. Default is false . 8 The duration the JWKS certificates are considered valid before they expire. Default is 360 seconds. If you specify a longer time, consider the risk of allowing access to revoked certificates. 9 The period between refreshes of JWKS certificates. The interval must be at least 60 seconds shorter than the expiry interval. Default is 300 seconds. 10 The minimum pause in seconds between consecutive attempts to refresh JWKS public keys. When an unknown signing key is encountered, the JWKS keys refresh is scheduled outside the regular periodic schedule with at least the specified pause since the last refresh attempt. The refreshing of keys follows the rule of exponential backoff, retrying on unsuccessful refreshes with ever increasing pause, until it reaches jwksRefreshSeconds . The default value is 1. Configuring fast local JWT token validation with OpenShift service accounts To configure the listener for OpenShift service accounts, the Kubernetes API server must be used as the authorization server. Example configuration for fast local JWT token validation using Kubernetes API server as authorization server #... 
- name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://kubernetes.default.svc.cluster.local 1 jwksEndpointUri: https://kubernetes.default.svc.cluster.local/openid/v1/jwks 2 serverBearerTokenLocation: /var/run/secrets/kubernetes.io/serviceaccount/token 3 checkAccessTokenType: false 4 includeAcceptHeader: false 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert pattern: "*.crt" maxSecondsWithoutReauthentication: 3600 customClaimCheck: "@.['kubernetes.io'] && @.['kubernetes.io'].['namespace'] in ['myproject']" 7 1 URI of the token issuer used for authentication. Must use FQDN, including the .cluster.local extension, which may vary based on the OpenShift cluster configuration. 2 URI of the JWKS certificate endpoint used for local JWT validation. Must use FQDN, including the .cluster.local extension, which may vary based on the OpenShift cluster configuration. 3 Location to the access token used by the Kafka broker to authenticate to the Kubernetes API server in order to access the jwksEndpointUri . 4 Skip the access token type check, as the claim for this is not present in service account tokens. 5 Skip sending Accept header in HTTP requests to the JWKS endpoint, as the Kubernetes API server does not support it. 6 Trusted certificates to connect to authorization server. This should point to a manually created Secret that contains the Kubernetes API server public certificate, which is mounted to the running pods under /var/run/secrets/kubernetes.io/serviceaccount/ca.crt . You can use the following command to create the Secret: oc get cm kube-root-ca.crt -o jsonpath="{['data']['ca\.crt']}" > /tmp/ca.crt oc create secret generic oauth-server-cert --from-file=ca.crt=/tmp/ca.crt 7 (Optional) Additional constraints that JWT token has to fulfill in order to be accepted, expressed as JsonPath filter query. In this example the service account has to belong to myproject namespace in order to be allowed to authenticate. The above configuration uses the sub claim from the service account JWT token as the user ID. For example, the default service account for pods deployed in the myproject namespace has the username: system:serviceaccount:myproject:default . When configuring ACLs the general form of how to refer to the ServiceAccount user should in that case be: User:system:serviceaccount:<Namespace>:<ServiceAccount-name> Configuring token validation using an introspection endpoint Token validation using an OAuth 2.0 introspection endpoint treats a received access token as opaque. The Kafka broker sends an access token to the introspection endpoint, which responds with the token information necessary for validation. Importantly, it returns up-to-date information if the specific access token is valid, and also information about when the token expires. To configure OAuth 2.0 introspection-based validation, you specify an introspectionEndpointUri attribute rather than the jwksEndpointUri attribute specified for fast local JWT token validation. Depending on the authorization server, you typically have to specify a clientId and clientSecret , because the introspection endpoint is usually protected. 
Example token validation configuration using an introspection endpoint - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://<auth_server_address>/<issuer-context> introspectionEndpointUri: https://<auth_server_address>/<path_to_introspection_endpoint> 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: - secretName: oauth-server-cert pattern: "*.crt" 1 URI of the token introspection endpoint. 2 Client ID to identify the client. 3 Client Secret and client ID is used for authentication. 4 The token claim (or key) that contains the actual username used to identify the user. Its value depends on the authorization server. If necessary, a JsonPath expression like "['user.info'].['user.id']" can be used to retrieve the username from nested JSON attributes within a token. 5 (Optional) Activates the Kafka re-authentication mechanism that enforces session expiry to the same length of time as the access token. If the specified value is less than the time left for the access token to expire, then the client will have to re-authenticate before the actual token expiry. By default, the session does not expire when the access token expires, and the client does not attempt re-authentication. Authenticating brokers to the authorization server protected endpoints Usually, the certificates endpoint of the authorization server ( jwksEndpointUri ) is publicly accessible, while the introspection endpoint ( introspectionEndpointUri ) is protected. However, this may vary depending on the authorization server configuration. The Kafka broker can authenticate to the authorization server's protected endpoints in one of two ways using HTTP authentication schemes: HTTP Basic authentication uses a client ID and secret. HTTP Bearer authentication uses a bearer token. To configure HTTP Basic authentication, set the following properties: clientId clientSecret For HTTP Bearer authentication, set the following property: serverBearerTokenLocation to specify the file path on disk containing the bearer token. Including additional configuration options Specify additional settings depending on the authentication requirements and the authorization server you are using. Some of these properties apply only to certain authentication mechanisms or when used in combination with other properties. For example, when using OAUth over PLAIN , access tokens are passed as password property values with or without an USDaccessToken: prefix. If you configure a token endpoint ( tokenEndpointUri ) in the listener configuration, you need the prefix. If you don't configure a token endpoint in the listener configuration, you don't need the prefix. The Kafka broker interprets the password as a raw access token. If the password is set as the access token, the username must be set to the same principal name that the Kafka broker obtains from the access token. You can specify username extraction options in your listener using the userNameClaim , usernamePrefix , fallbackUserNameClaim , fallbackUsernamePrefix , and userInfoEndpointUri properties. The username extraction process also depends on your authorization server; in particular, how it maps client IDs to account names. Note The PLAIN mechanism does not support password grant authentication. Use either client credentials (client ID + secret) or an access token for authentication. 
Example optional configuration settings # ... authentication: type: oauth # ... checkIssuer: false 1 checkAudience: true 2 usernamePrefix: user- 3 fallbackUserNameClaim: client_id 4 fallbackUserNamePrefix: client-account- 5 serverBearerTokenLocation: path/to/access/token 6 validTokenType: bearer 7 userInfoEndpointUri: https://<auth_server_address>/<path_to_userinfo_endpoint> 8 enableOauthBearer: false 9 enablePlain: true 10 tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> 11 customClaimCheck: "@.custom == 'custom-value'" 12 clientAudience: audience 13 clientScope: scope 14 connectTimeoutSeconds: 60 15 readTimeoutSeconds: 60 16 httpRetries: 2 17 httpRetryPauseMs: 300 18 groupsClaim: "USD.groups" 19 groupsClaimDelimiter: "," 20 includeAcceptHeader: false 21 1 If your authorization server does not provide an iss claim, it is not possible to perform an issuer check. In this situation, set checkIssuer to false and do not specify a validIssuerUri . Default is true . 2 If your authorization server provides an aud (audience) claim, and you want to enforce an audience check, set checkAudience to true . Audience checks identify the intended recipients of tokens. As a result, the Kafka broker will reject tokens that do not have its clientId in their aud claim. Default is false . 3 The prefix used when constructing the user ID. This only takes effect if userNameClaim is configured. 4 An authorization server may not provide a single attribute to identify both regular users and clients. When a client authenticates in its own name, the server might provide a client ID . When a user authenticates using a username and password to obtain a refresh token or an access token, the server might provide a username attribute in addition to a client ID. Use this fallback option to specify the username claim (attribute) to use if a primary user ID attribute is not available. If necessary, a JsonPath expression like "['client.info'].['client.id']" can be used to retrieve the fallback username to retrieve the username from nested JSON attributes within a token. 5 In situations where fallbackUserNameClaim is applicable, it may also be necessary to prevent name collisions between the values of the username claim, and those of the fallback username claim. Consider a situation where a client called producer exists, but also a regular user called producer exists. In order to differentiate between the two, you can use this property to add a prefix to the user ID of the client. 6 The location of the access token used by the Kafka broker to authenticate to the Kubernetes API server for accessing protected endpoints. The authorization server must support OAUTHBEARER authentication. This is an alternative to specifying clientId and clientSecret , which uses PLAIN authentication. 7 (Only applicable when using introspectionEndpointUri ) Depending on the authorization server you are using, the introspection endpoint may or may not return the token type attribute, or it may contain different values. You can specify a valid token type value that the response from the introspection endpoint has to contain. 8 (Only applicable when using introspectionEndpointUri ) The authorization server may be configured or implemented in such a way to not provide any identifiable information in an introspection endpoint response. In order to obtain the user ID, you can configure the URI of the userinfo endpoint as a fallback. 
The userNameClaim , fallbackUserNameClaim , and fallbackUserNamePrefix settings are applied to the response of userinfo endpoint. 9 Set this to false to disable the OAUTHBEARER mechanism on the listener. At least one of PLAIN or OAUTHBEARER has to be enabled. Default is true . 10 Set to true to enable PLAIN authentication on the listener, which is supported for clients on all platforms. 11 Additional configuration for the PLAIN mechanism. If specified, clients can authenticate over PLAIN by passing an access token as the password using an USDaccessToken: prefix. For production, always use https:// urls. 12 Additional custom rules can be imposed on the JWT access token during validation by setting this to a JsonPath filter query. If the access token does not contain the necessary data, it is rejected. When using the introspectionEndpointUri , the custom check is applied to the introspection endpoint response JSON. 13 An audience parameter passed to the token endpoint. An audience is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 14 A scope parameter passed to the token endpoint. A scope is used when obtaining an access token for inter-broker authentication. It is also used in the name of a client for OAuth 2.0 over PLAIN client authentication using a clientId and secret . This only affects the ability to obtain the token, and the content of the token, depending on the authorization server. It does not affect token validation rules by the listener. 15 The connect timeout in seconds when connecting to the authorization server. The default value is 60. 16 The read timeout in seconds when connecting to the authorization server. The default value is 60. 17 The maximum number of times to retry a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed. To use this option effectively, consider reducing the timeout times for the connectTimeoutSeconds and readTimeoutSeconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. 18 The time to wait before attempting another retry of a failed HTTP request to the authorization server. By default, this time is set to zero, meaning that no pause is applied. This is because many issues that cause failed requests are per-request network glitches or proxy issues that can be resolved quickly. However, if your authorization server is under stress or experiencing high traffic, you may want to set this option to a value of 100 ms or more to reduce the load on the server and increase the likelihood of successful retries. 19 A JsonPath query that is used to extract groups information from either the JWT token or the introspection endpoint response. This option is not set by default. By configuring this option, a custom authorizer can make authorization decisions based on user groups. 20 A delimiter used to parse groups information when it is returned as a single delimited string. The default value is ',' (comma). 21 Some authorization servers have issues with client sending Accept: application/json header. 
By setting includeAcceptHeader: false the header will not be sent. Default is true . 17.2.2. Configuring OAuth 2.0 on client applications To configure OAuth 2.0 on client applications, you must specify the following: SASL (Simple Authentication and Security Layer) security protocols SASL mechanisms A JAAS (Java Authentication and Authorization Service) module Authentication properties to access the authorization server Configuring SASL protocols Specify SASL protocols in the client configuration: SASL_SSL for authentication over TLS encrypted connections SASL_PLAINTEXT for authentication over unencrypted connections Use SASL_SSL for production and SASL_PLAINTEXT for local development only. When using SASL_SSL , additional ssl.truststore configuration is needed. The truststore configuration is required for secure connection ( https:// ) to the OAuth 2.0 authorization server. To verify the OAuth 2.0 authorization server, add the CA certificate for the authorization server to the truststore in your client configuration. You can configure a truststore in PEM or PKCS #12 format. Configuring SASL authentication mechanisms Specify SASL mechanisms in the client configuration: OAUTHBEARER for credentials exchange using a bearer token PLAIN to pass client credentials (clientId + secret) or an access token Configuring a JAAS module Specify a JAAS module that implements the SASL authentication mechanism as a sasl.jaas.config property value: org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule implements the OAUTHBEARER mechanism org.apache.kafka.common.security.plain.PlainLoginModule implements the PLAIN mechanism Note For the OAUTHBEARER mechanism, Streams for Apache Kafka provides a callback handler for clients that use Kafka Client Java libraries to enable credentials exchange. For clients in other languages, custom code may be required to obtain the access token. For the PLAIN mechanism, Streams for Apache Kafka provides server-side callbacks to enable credentials exchange. To be able to use the OAUTHBEARER mechanism, you must also add the custom io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler class as the callback handler. JaasClientOauthLoginCallbackHandler handles OAuth callbacks to the authorization server for access tokens during client login. This enables automatic token renewal, ensuring continuous authentication without user intervention. Additionally, it handles login credentials for clients using the OAuth 2.0 password grant method. Configuring authentication properties Configure the client to use credentials or access tokens for OAuth 2.0 authentication. Using client credentials Using client credentials involves configuring the client with the necessary credentials (client ID and secret, or client ID and client assertion) to obtain a valid access token from an authorization server. This is the simplest mechanism. Using access tokens Using access tokens, the client is configured with a valid long-lived access token or refresh token obtained from an authorization server. Using access tokens adds more complexity because there is an additional dependency on authorization server tools. If you are using long-lived access tokens, you may need to configure the client in the authorization server to increase the maximum lifetime of the token. The only information ever sent to Kafka is the access token. The credentials used to obtain the token are never sent to Kafka. 
When a client obtains an access token, no further communication with the authorization server is needed. SASL authentication properties support the following authentication methods: OAuth 2.0 client credentials Access token or Service account token Refresh token OAuth 2.0 password grant (deprecated) Add the authentication properties as JAAS configuration ( sasl.jaas.config and sasl.login.callback.handler.class ). If the client application is not configured with an access token directly, the client exchanges one of the following sets of credentials for an access token during Kafka session initiation: Client ID and secret Client ID and client assertion Client ID, refresh token, and (optionally) a secret Username and password, with client ID and (optionally) a secret Note You can also specify authentication properties as environment variables, or as Java system properties. For Java system properties, you can set them using setProperty and pass them on the command line using the -D option. Example client credentials configuration using the client secret security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ 4 oauth.client.id="<client_id>" \ 5 oauth.client.secret="<client_secret>" \ 6 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ 7 oauth.ssl.truststore.password="USDSTOREPASS" \ 8 oauth.ssl.truststore.type="PKCS12" \ 9 oauth.scope="<scope>" \ 10 oauth.audience="<audience>" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 SASL_SSL security protocol for TLS-encrypted connections. Use SASL_PLAINTEXT over unencrypted connections for local development only. 2 The SASL mechanism specified as OAUTHBEARER or PLAIN . 3 The truststore configuration for secure access to the Kafka cluster. 4 URI of the authorization server token endpoint. 5 Client ID, which is the name used when creating the client in the authorization server. 6 Client secret created when creating the client in the authorization server. 7 The location contains the public key certificate ( truststore.p12 ) for the authorization server. 8 The password for accessing the truststore. 9 The truststore type. 10 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope. 11 (Optional) The audience for requesting the token from the token endpoint. An authorization server may require a client to specify the audience. 
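If you prefer to assemble the configuration in code rather than in a properties file, the same settings can be passed to the client programmatically. The following Java sketch is not a definitive implementation: the bootstrap address, truststore password, topic name, and OAuth credentials are placeholders, and the JAAS string simply mirrors the client secret example above. The kafka-oauth-client dependency described later in this chapter must be on the classpath for the callback handler class to resolve.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OAuthProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder bootstrap address; use your cluster's OAuth-enabled listener
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9094");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Security settings equivalent to the properties-file example above
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "OAUTHBEARER");
        props.put("ssl.truststore.location", "/tmp/truststore.p12");
        props.put("ssl.truststore.password", "<truststore_password>");
        props.put("ssl.truststore.type", "PKCS12");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required "
            + "oauth.token.endpoint.uri=\"https://<auth_server_address>/<path_to_token_endpoint>\" "
            + "oauth.client.id=\"<client_id>\" "
            + "oauth.client.secret=\"<client_secret>\" ;");
        props.put("sasl.login.callback.handler.class",
            "io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler");

        // Send a single record to confirm that the client can authenticate and produce
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            producer.flush();
        }
    }
}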
Example client credentials configuration using the client assertion security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.client.id="<client_id>" \ oauth.client.assertion.location="<path_to_client_assertion_token_file>" \ 1 oauth.client.assertion.type="urn:ietf:params:oauth:client-assertion-type:jwt-bearer" \ 2 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.scope="<scope>" \ oauth.audience="<audience>" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Path to the client assertion file used for authenticating the client. This file is a private key file as an alternative to the client secret. Alternatively, use the oauth.client.assertion option to specify the client assertion value in clear text. 2 (Optional) Sometimes you may need to specify the client assertion type. In not specified, the default value is urn:ietf:params:oauth:client-assertion-type:jwt-bearer . Example password grants configuration security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.client.id="<client_id>" \ 1 oauth.client.secret="<client_secret>" \ 2 oauth.password.grant.username="<username>" \ 3 oauth.password.grant.password="<password>" \ 4 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.scope="<scope>" \ oauth.audience="<audience>" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Username for password grant authentication. OAuth password grant configuration (username and password) uses the OAuth 2.0 password grant method. To use password grants, create a user account for a client on your authorization server with limited permissions. The account should act like a service account. Use in environments where user accounts are required for authentication, but consider using a refresh token first. 4 Password for password grant authentication. Note SASL PLAIN does not support passing a username and password (password grants) using the OAuth 2.0 password grant method. Example access token configuration security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.access.token="<access_token>" ; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Long-lived access token for Kafka clients. Alternatively, oauth.access.token.location can be used to specify the file that contains the access token. 
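How a long-lived access token is obtained is specific to the authorization server and its tooling. Purely as an illustration, assuming a standard OAuth 2.0 token endpoint and the client credentials grant, a token could be requested as follows; the endpoint and credentials are placeholders, and the access_token field of the JSON response then provides the value for oauth.access.token or the contents of the file referenced by oauth.access.token.location :

curl -s -X POST https://<auth_server_address>/<path_to_token_endpoint> \
  -d "grant_type=client_credentials" \
  -d "client_id=<client_id>" \
  -d "client_secret=<client_secret>"

Keep in mind that a token obtained this way expires according to the authorization server policy; for long-lived use, the client may need to be configured in the authorization server with an increased maximum token lifetime, as noted above.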
Example OpenShift service account token configuration security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.access.token.location="/var/run/secrets/kubernetes.io/serviceaccount/token"; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Location to the service account token on the filesystem (assuming that the client is deployed as an OpenShift pod) Example refresh token configuration security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.token.endpoint.uri="<token_endpoint_url>" \ oauth.client.id="<client_id>" \ 1 oauth.client.secret="<client_secret>" \ 2 oauth.refresh.token="<refresh_token>" \ 3 oauth.ssl.truststore.location="/tmp/oauth-truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler 1 Client ID, which is the name used when creating the client in the authorization server. 2 (Optional) Client secret created when creating the client in the authorization server. 3 Long-lived refresh token for Kafka clients. SASL extensions for custom OAUTHBEARER implementations If your Kafka broker uses a custom OAUTHBEARER implementation, you may need to pass additional SASL extension options. These extensions can include attributes or information required as client context by the authorization server. The options are passed as key-value pairs and are sent to the Kafka broker when a new session is started. Pass SASL extension values using oauth.sasl.extension. as a key prefix. Example configuration to pass SASL extension values oauth.sasl.extension.key1="value1" oauth.sasl.extension.key2="value2" 17.2.3. OAuth 2.0 client authentication flows OAuth 2.0 authentication flows depend on the underlying Kafka client and Kafka broker configuration. The flows must also be supported by the authorization server used. The Kafka broker listener configuration determines how clients authenticate using an access token. The client can pass a client ID and secret to request an access token. If a listener is configured to use PLAIN authentication, the client can authenticate with a client ID and secret or username and access token. These values are passed as the username and password properties of the PLAIN mechanism. Listener configuration supports the following token validation options: You can use fast local token validation based on JWT signature checking and local token introspection, without contacting an authorization server. The authorization server provides a JWKS endpoint with public certificates that are used to validate signatures on the tokens. You can use a call to a token introspection endpoint provided by an authorization server. Each time a new Kafka broker connection is established, the broker passes the access token received from the client to the authorization server. The Kafka broker checks the response to confirm whether the token is valid. Note An authorization server might only allow the use of opaque access tokens, which means that local token validation is not possible. 
Kafka client credentials can also be configured for the following types of authentication: Direct local access using a previously generated long-lived access token Contact with the authorization server for a new access token to be issued (using a client ID and credentials, or a refresh token, or a username and a password) 17.2.3.1. Example client authentication flows using the SASL OAUTHBEARER mechanism You can use the following communication flows for Kafka authentication using the SASL OAUTHBEARER mechanism. Client using client ID and credentials, with broker delegating validation to authorization server The Kafka client requests an access token from the authorization server using a client ID and credentials, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server using its own client ID and secret. A Kafka client session is established if the token is valid. Client using client ID and credentials, with broker performing fast local token validation The Kafka client authenticates with the authorization server from the token endpoint, using a client ID and credentials, and optionally a refresh token. Alternatively, the client may authenticate using a username and a password. The authorization server generates a new access token. The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the access token. The Kafka broker validates the access token locally using a JWT token signature check, and local token introspection. Client using long-lived access token, with broker delegating validation to authorization server The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token by calling a token introspection endpoint on the authorization server, using its own client ID and secret. A Kafka client session is established if the token is valid. Client using long-lived access token, with broker performing fast local validation The Kafka client authenticates with the Kafka broker using the SASL OAUTHBEARER mechanism to pass the long-lived access token. The Kafka broker validates the access token locally using a JWT token signature check and local token introspection. Warning Fast local JWT token signature validation is suitable only for short-lived tokens as there is no check with the authorization server if a token has been revoked. Token expiration is written into the token, but revocation can happen at any time, so cannot be accounted for without contacting the authorization server. Any issued token would be considered valid until it expires. 17.2.3.2. Example client authentication flows using the SASL PLAIN mechanism You can use the following communication flows for Kafka authentication using the OAuth PLAIN mechanism. Client using a client ID and secret, with the broker obtaining the access token for the client The Kafka client passes a clientId as a username and a secret as a password. The Kafka broker uses a token endpoint to pass the clientId and secret to the authorization server. The authorization server returns a fresh access token or an error if the client credentials are not valid. 
The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if the token validation is successful. If local token introspection is used, a request is not made to the authorization server. The Kafka broker validates the access token locally using a JWT token signature check. Client using a long-lived access token without a client ID and secret The Kafka client passes a username and password. The password provides the value of an access token that was obtained manually and configured before running the client. The password is passed with or without an USDaccessToken: string prefix depending on whether or not the Kafka broker listener is configured with a token endpoint for authentication. If the token endpoint is configured, the password should be prefixed by USDaccessToken: to let the broker know that the password parameter contains an access token rather than a client secret. The Kafka broker interprets the username as the account username. If the token endpoint is not configured on the Kafka broker listener (enforcing a no-client-credentials mode ), the password should provide the access token without the prefix. The Kafka broker interprets the username as the account username. In this mode, the client doesn't use a client ID and secret, and the password parameter is always interpreted as a raw access token. The Kafka broker validates the token in one of the following ways: If a token introspection endpoint is specified, the Kafka broker validates the access token by calling the endpoint on the authorization server. A session is established if token validation is successful. If local token introspection is used, there is no request made to the authorization server. Kafka broker validates the access token locally using a JWT token signature check. 17.2.4. Re-authenticating sessions Configure oauth listeners to use Kafka session re-authentication for OAuth 2.0 sessions between Kafka clients and Kafka. This mechanism enforces the expiry of an authenticated session between the client and the broker after a defined period of time. When a session expires, the client immediately starts a new session by reusing the existing connection rather than dropping it. Session re-authentication is disabled by default. To enable it, you set a time value for maxSecondsWithoutReauthentication in the oauth listener configuration. The same property is used to configure session re-authentication for OAUTHBEARER and PLAIN authentication. For an example configuration, see Section 17.2.1, "Configuring OAuth 2.0 authentication on listeners" . Session re-authentication must be supported by the Kafka client libraries used by the client. Session re-authentication can be used with fast local JWT or introspection endpoint token validation. Client re-authentication When the broker's authenticated session expires, the client must re-authenticate to the existing session by sending a new, valid access token to the broker, without dropping the connection. If token validation is successful, a new client session is started using the existing connection. If the client fails to re-authenticate, the broker will close the connection if further attempts are made to send or receive messages. Java clients that use Kafka client library 2.2 or later automatically re-authenticate if the re-authentication mechanism is enabled on the broker. 
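As a minimal sketch of how the mechanism is enabled on the broker side, the maxSecondsWithoutReauthentication property is set directly on the oauth listener; the listener name, port, endpoint addresses, and the one-hour value below are illustrative:

listeners:
  - name: external3
    port: 9094
    type: loadbalancer
    tls: true
    authentication:
      type: oauth
      validIssuerUri: https://<auth_server_address>/<issuer-context>
      jwksEndpointUri: https://<auth_server_address>/<path_to_jwks_endpoint>
      maxSecondsWithoutReauthentication: 3600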
Session re-authentication also applies to refresh tokens, if used. When the session expires, the client refreshes the access token by using its refresh token. The client then uses the new access token to re-authenticate to the existing session. Session expiry When session re-authentication is configured, session expiry works differently for OAUTHBEARER and PLAIN authentication. For OAUTHBEARER and PLAIN , using the client ID and secret method: The broker's authenticated session will expire at the configured maxSecondsWithoutReauthentication . The session will expire earlier if the access token expires before the configured time. For PLAIN using the long-lived access token method: The broker's authenticated session will expire at the configured maxSecondsWithoutReauthentication . Re-authentication will fail if the access token expires before the configured time. Although session re-authentication is attempted, PLAIN has no mechanism for refreshing tokens. If maxSecondsWithoutReauthentication is not configured, OAUTHBEARER and PLAIN clients can remain connected to brokers indefinitely, without needing to re-authenticate. Authenticated sessions do not end with access token expiry. However, this can be considered when configuring authorization, for example, by using keycloak authorization or installing a custom authorizer. 17.2.5. Example: Enabling OAuth 2.0 authentication This example shows how to configure client access to a Kafka cluster using OAuth 2.0 authentication. The procedures describe the configuration required to set up OAuth 2.0 authentication on Kafka listeners, Kafka Java clients, and Kafka components. 17.2.5.1. Setting up OAuth 2.0 authentication on listeners Configure Kafka listeners so that they use OAuth 2.0 authentication with an authorization server. We advise using OAuth 2.0 over an encrypted interface through a listener with tls: true . Plain listeners are not recommended. If the authorization server uses certificates signed by a trusted CA and matching the OAuth 2.0 server hostname, the TLS connection works using the default settings. Otherwise, you may need to configure the truststore with the proper certificates or disable certificate hostname validation. For more information on the configuration of OAuth 2.0 authentication for Kafka broker listeners, see the KafkaListenerAuthenticationOAuth schema reference . Prerequisites Streams for Apache Kafka and Kafka are running An OAuth 2.0 authorization server is deployed Procedure Specify a listener in the Kafka resource with an oauth authentication type. Example listener configuration with OAuth 2.0 authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #... Configure the OAuth listener depending on the authorization server and validation type: Configuring fast local JWT token validation Configuring token validation using an introspection endpoint Including additional configuration options Apply the changes to the Kafka configuration. Check the update in the logs or by watching the pod state transitions: oc logs -f ${POD_NAME} -c ${CONTAINER_NAME} oc get pod -w The rolling update configures the brokers to use OAuth 2.0 authentication. What to do next Configure your Kafka clients to use OAuth 2.0 17.2.5.2.
Setting up OAuth 2.0 on Kafka Java clients Configure Kafka producer and consumer APIs to use OAuth 2.0 for interaction with Kafka brokers. Add a callback plugin to your client pom.xml file, then configure your client for OAuth 2.0. How you configure the authentication properties depends on the authentication method you are using to access the OAuth 2.0 authorization server. In this procedure, the properties are specified in a properties file, then loaded into the client configuration. Prerequisites Streams for Apache Kafka and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Add the client library with OAuth 2.0 support to the pom.xml file for the Kafka client: <dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00012</version> </dependency> Configure the client depending on the OAuth 2.0 authentication method: Example client credentials configuration using the client secret Example password grants configuration Example access token configuration Example refresh token configuration For example, specify the properties for the authentication method in a client.properties file. Input the client properties for OAUTH 2.0 authentication into the Java client code. Example showing input of client properties Properties props = new Properties(); try (FileReader reader = new FileReader("client.properties", StandardCharsets.UTF_8)) { props.load(reader); } Verify that the Kafka client can access the Kafka brokers. 17.2.5.3. Setting up OAuth 2.0 on Kafka components This procedure describes how to set up Kafka components to use OAuth 2.0 authentication using an authorization server. You can configure OAuth 2.0 authentication for the following components: Kafka Connect Kafka MirrorMaker Kafka Bridge In this scenario, the Kafka component and the authorization server are running in the same cluster. Before you start For more information on the configuration of OAuth 2.0 authentication for Kafka components, see the KafkaClientAuthenticationOAuth schema reference . The schema reference includes examples of configuration options. Prerequisites Streams for Apache Kafka and Kafka are running An OAuth 2.0 authorization server is deployed and configured for OAuth access to Kafka brokers Kafka brokers are configured for OAuth 2.0 Procedure Create a client secret and mount it to the component as an environment variable. For example, here we are creating a client Secret for the Kafka Bridge: apiVersion: kafka.strimzi.io/v1beta2 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1 1 The clientSecret key must be in base64 format. Create or edit the resource for the Kafka component so that OAuth 2.0 authentication is configured for the authentication property. For OAuth 2.0 authentication, you can use the following options: Client ID and secret Client ID and client assertion Client ID and refresh token Access token Username and password TLS For example, here OAuth 2.0 is assigned to the Kafka Bridge client using a client ID and secret, and TLS: Example OAuth 2.0 authentication configuration using the client secret apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... 
authentication: type: oauth 1 tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert pattern: "*.crt" 1 Authentication type set to oauth . 2 URI of the token endpoint for authentication. 3 Certificates stored in X.509 format within the specified secrets for TLS connection to the authorization server. In this example, OAuth 2.0 is assigned to the Kafka Bridge client using a client ID and the location of a client assertion file, with TLS to connect to the authorization server: Example OAuth 2.0 authentication configuration using client assertion apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... authentication: type: oauth tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> clientId: kafka-bridge clientAssertionLocation: /var/run/secrets/sso/assertion 1 tlsTrustedCertificates: - secretName: oauth-server-cert pattern: "*.crt" 1 Filesystem path to the client assertion file used for authenticating the client. This file is typically added to the deployed pod by an external operator service. Alternatively, use clientAssertion to refer to a secret containing the client assertion value. Here, OAuth 2.0 is assigned to the Kafka Bridge client using a service account token: Example OAuth 2.0 authentication configuration using the service account token apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... authentication: type: oauth accessTokenLocation: /var/run/secrets/kubernetes.io/serviceaccount/token 1 1 Path to the service account token file location. Depending on how you apply OAuth 2.0 authentication, and the type of authorization server, there are additional configuration options you can use: Additional configuration options # ... spec: # ... authentication: # ... disableTlsHostnameVerification: true 1 accessTokenIsJwt: false 2 scope: any 3 audience: kafka 4 connectTimeoutSeconds: 60 5 readTimeoutSeconds: 60 6 httpRetries: 2 7 httpRetryPauseMs: 300 8 includeAcceptHeader: false 9 1 (Optional) Disable TLS hostname verification. Default is false . 2 If you are using opaque tokens, you can apply accessTokenIsJwt: false so that access tokens are not treated as JWT tokens. 3 (Optional) The scope for requesting the token from the token endpoint. An authorization server may require a client to specify the scope. In this case it is any . 4 (Optional) The audience for requesting the token from the token endpoint. An authorization server may require a client to specify the audience. In this case it is kafka . 5 (Optional) The connect timeout in seconds when connecting to the authorization server. The default value is 60. 6 (Optional) The read timeout in seconds when connecting to the authorization server. The default value is 60. 7 (Optional) The maximum number of times to retry a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed. To use this option effectively, consider reducing the timeout times for the connectTimeoutSeconds and readTimeoutSeconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make the Kafka broker unresponsive. 8 (Optional) The time to wait before attempting another retry of a failed HTTP request to the authorization server. 
By default, this time is set to zero, meaning that no pause is applied. This is because many issues that cause failed requests are per-request network glitches or proxy issues that can be resolved quickly. However, if your authorization server is under stress or experiencing high traffic, you may want to set this option to a value of 100 ms or more to reduce the load on the server and increase the likelihood of successful retries. 9 (Optional) Some authorization servers have issues with client sending Accept: application/json header. By setting includeAcceptHeader: false the header will not be sent. Default is true . Apply the changes to the resource configuration of the component. Check the update in the logs or by watching the pod state transitions: oc logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} oc get pod -w The rolling updates configure the component for interaction with Kafka brokers using OAuth 2.0 authentication. 17.3. Using OAuth 2.0 token-based authorization Streams for Apache Kafka supports the use of OAuth 2.0 token-based authorization through Authorization Services , which lets you manage security policies and permissions centrally. Security policies and permissions defined in Red Hat build of Keycloak grant access to Kafka resources. Users and clients are matched against policies that permit access to perform specific actions on Kafka brokers. Kafka allows all users full access to brokers by default, but also provides the AclAuthorizer and StandardAuthorizer plugins to configure authorization based on Access Control Lists (ACLs). The ACL rules managed by these plugins are used to grant or deny access to resources based on username , and these rules are stored within the Kafka cluster itself. However, OAuth 2.0 token-based authorization with Red Hat build of Keycloak offers far greater flexibility on how you wish to implement access control to Kafka brokers. In addition, you can configure your Kafka brokers to use OAuth 2.0 authorization and ACLs. 17.3.1. Example: Enabling OAuth 2.0 authorization This example procedure shows how to configure Kafka to use OAuth 2.0 authorization using Red Hat build of Keycloak Authorization Services. To enable OAuth 2.0 authorization using Red Hat build of Keycloak, configure the Kafka resource to use keycloak authorization and specify the properties required to access the authorization server and Red Hat build of Keycloak Authorization Services. Red Hat build of Keycloak server Authorization Services REST endpoints extend token-based authentication with Red Hat build of Keycloak by applying defined security policies on a particular user, and providing a list of permissions granted on different resources for that user. Policies use roles and groups to match permissions to users. OAuth 2.0 authorization enforces permissions locally based on the received list of grants for the user from Red Hat build of Keycloak Authorization Services. A Red Hat build of Keycloak authorizer ( KeycloakAuthorizer ) is provided with Streams for Apache Kafka. The authorizer fetches a list of granted permissions from the authorization server as needed, and enforces authorization locally on Kafka, making rapid authorization decisions for each client request. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of Red Hat build of Keycloak groups , roles , clients , and users to configure access in Red Hat build of Keycloak. 
Typically, groups are used to match users based on organizational departments or geographical locations. And roles are used to match users based on their function. With Red Hat build of Keycloak, you can store users and groups in LDAP, whereas clients and roles cannot be stored this way. Storage and access to user data may be a factor in how you choose to configure authorization policies. Note Super users always have unconstrained access to Kafka regardless of the authorization implemented. Prerequisites Streams for Apache Kafka must be configured to use OAuth 2.0 with Red Hat build of Keycloak for token-based authentication . You use the same Red Hat build of Keycloak server endpoint when you set up authorization. OAuth 2.0 authentication must be configured with the maxSecondsWithoutReauthentication option to enable re-authentication. Procedure Access the Red Hat build of Keycloak Admin Console or use the Red Hat build of Keycloak Admin CLI to enable Authorization Services for the OAuth 2.0 client for Kafka you created when setting up OAuth 2.0 authentication. Use Authorization Services to define resources, authorization scopes, policies, and permissions for the client. Bind the permissions to users and clients by assigning them roles and groups. Configure the kafka resource to use keycloak authorization, and to be able to access the authorization server and Authorization Services. Example OAuth 2.0 authorization configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=user-1 - user-2 - CN=user-3 tlsTrustedCertificates: 7 - secretName: oauth-server-cert pattern: "*.crt" grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 grantsMaxIdleSeconds: 300 10 grantsGcPeriodSeconds: 300 11 grantsAlwaysLatest: false 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 enableMetrics: false 16 includeAcceptHeader: false 17 #... 1 Type keycloak enables Red Hat build of Keycloak authorization. 2 URI of the Red Hat build of Keycloak token endpoint. For production, always use https:// urls. When you configure token-based oauth authentication, you specify a jwksEndpointUri as the URI for local JWT validation. The hostname for the tokenEndpointUri URI must be the same. 3 The client ID of the OAuth 2.0 client definition in Red Hat build of Keycloak that has Authorization Services enabled. Typically, kafka is used as the ID. 4 (Optional) Delegate authorization to Kafka AclAuthorizer and StandardAuthorizer if access is denied by Red Hat build of Keycloak Authorization Services policies. Default is false . 5 (Optional) Disable TLS hostname verification. Default is false . 6 (Optional) Designated super users. 7 (Optional) Certificates stored in X.509 format within the specified secrets for TLS connection to the authorization server. 8 (Optional) The time between two consecutive grants refresh runs. That is the maximum time for active sessions to detect any permissions changes for the user on Red Hat build of Keycloak. The default value is 60. 9 (Optional) The number of threads to use to refresh (in parallel) the grants for the active sessions. The default value is 5. 10 (Optional) The time, in seconds, after which an idle grant in the cache can be evicted. The default value is 300. 
11 (Optional) The time, in seconds, between consecutive runs of a job that cleans stale grants from the cache. The default value is 300. 12 (Optional) Controls whether the latest grants are fetched for a new session. When enabled, grants are retrieved from Red Hat build of Keycloak and cached for the user. The default value is false . 13 (Optional) The connect timeout in seconds when connecting to the Red Hat build of Keycloak token endpoint. The default value is 60. 14 (Optional) The read timeout in seconds when connecting to the Red Hat build of Keycloak token endpoint. The default value is 60. 15 (Optional) The maximum number of times to retry (without pausing) a failed HTTP request to the authorization server. The default value is 0 , meaning that no retries are performed. To use this option effectively, consider reducing the timeout times for the connectTimeoutSeconds and readTimeoutSeconds options. However, note that retries may prevent the current worker thread from being available to other requests, and if too many requests stall, it could make Kafka unresponsive. 16 (Optional) Enable or disable OAuth metrics. The default value is false . 17 (Optional) Some authorization servers have issues with client sending Accept: application/json header. By setting includeAcceptHeader: false the header will not be sent. Default is true . Apply the changes to the Kafka configuration. Check the update in the logs or by watching the pod state transitions: oc logs -f USD{POD_NAME} -c kafka oc get pod -w The rolling update configures the brokers to use OAuth 2.0 authorization. Verify the configured permissions by accessing Kafka brokers as clients or users with specific roles, ensuring they have the necessary access and do not have unauthorized access. 17.4. Setting up permissions in Red Hat build of Keycloak When using Red Hat build of Keycloak as the OAuth 2.0 authorization server, Kafka permissions are granted to user accounts or service accounts using authorization permissions. To grant permissions to access Kafka, create an OAuth client specification in Red Hat build of Keycloak that maps the authorization models of Red Hat build of Keycloak Authorization Services and Kafka. 17.4.1. Kafka and Red Hat build of Keycloak authorization models Kafka and Red Hat build of Keycloak use different authorization models. Kafka authorization model Kafka's authorization model uses resource types and operations to describe ACLs for a user. When a Kafka client performs an action on a broker, the broker uses the configured KeycloakAuthorizer to check the client's permissions, based on the action and resource type. Each resource type has a set of available permissions for operations. For example, the Topic resource type has Create and Write permissions among others. Refer to the Kafka authorization model in the Kafka documentation for the full list of resources and permissions. Red Hat build of Keycloak authorization model Red Hat build of Keycloak's authorization services model has four concepts for defining and granting permissions: Resources Scopes Policies Permissions For information on these concepts, see the guide to Authorization Services . 17.4.2. Mapping authorization models The Kafka authorization model is used as a basis for defining the Red Hat build of Keycloak roles and resources that control access to Kafka. To grant Kafka permissions to user accounts or service accounts, you first create an OAuth client specification in Red Hat build of Keycloak for the Kafka cluster. 
You then specify Red Hat build of Keycloak Authorization Services rules on the client. Typically, the client ID of the OAuth client that represents the Kafka cluster is kafka . The example configuration files provided with Streams for Apache Kafka use kafka as the OAuth client id. Note If you have multiple Kafka clusters, you can use a single OAuth client ( kafka ) for all of them. This gives you a single, unified space in which to define and manage authorization rules. However, you can also use different OAuth client ids (for example, my-cluster-kafka or cluster-dev-kafka ) and define authorization rules for each cluster within each client configuration. The kafka client definition must have the Authorization Enabled option enabled in the Red Hat build of Keycloak Admin Console. For information on enabling authorization services, see the guide to Authorization Services . All permissions exist within the scope of the kafka client. If you have different Kafka clusters configured with different OAuth client IDs, they each need a separate set of permissions even though they're part of the same Red Hat build of Keycloak realm. When the Kafka client uses OAUTHBEARER authentication, the Red Hat build of Keycloak authorizer ( KeycloakAuthorizer ) uses the access token of the current session to retrieve a list of grants from the Red Hat build of Keycloak server. To grant permissions, the authorizer evaluates the grants list (received and cached) from Red Hat build of Keycloak Authorization Services based on the access token owner's policies and permissions. Uploading authorization scopes for Kafka permissions An initial Red Hat build of Keycloak configuration usually involves uploading authorization scopes to create a list of all possible actions that can be performed on each Kafka resource type. This step is performed once only, before defining any permissions. You can add authorization scopes manually instead of uploading them. Authorization scopes should contain the following Kafka permissions regardless of the resource type: Create Write Read Delete Describe Alter DescribeConfigs AlterConfigs ClusterAction IdempotentWrite If you're certain you won't need a permission (for example, IdempotentWrite ), you can omit it from the list of authorization scopes. However, that permission won't be available to target on Kafka resources. Note The All permission is not supported. Resource patterns for permissions checks Resource patterns are used for pattern matching against the targeted resources when performing permission checks. The general pattern format is <resource_type>:<pattern_name> . The resource types mirror the Kafka authorization model. The pattern allows for two matching options: Exact matching (when the pattern does not end with * ) Prefix matching (when the pattern ends with * ) Example patterns for resources Additionally, the general pattern format can be prefixed by kafka-cluster:<cluster_name> followed by a comma, where <cluster_name> refers to the metadata.name in the Kafka custom resource. Example patterns for resources with cluster prefix When the kafka-cluster prefix is missing, it is assumed to be kafka-cluster:* . When defining a resource, you can associate it with a list of possible authorization scopes which are relevant to the resource. Set whatever actions make sense for the targeted resource type. Though you may add any authorization scope to any resource, only the scopes supported by the resource type are considered for access control. 
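To make the pattern format concrete, the following illustrative patterns use hypothetical topic, group, and cluster names:

Topic:my-topic (exact match on a single topic)
Topic:orders-* (prefix match on topics whose names start with orders-)
Group:orders-* (prefix match on consumer groups whose names start with orders-)
Cluster:* (any cluster resource)
kafka-cluster:my-cluster,Topic:* (any topic, but only on the Kafka cluster named my-cluster)
kafka-cluster:*,Group:b_* (consumer groups starting with b_ on any cluster)

Each resource defined this way is then given the authorization scopes that make sense for its type, for example Describe and Write for a topic resource.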
Policies for applying access permission Policies are used to target permissions to one or more user accounts or service accounts. Targeting can refer to: Specific user or service accounts Realm roles or client roles User groups A policy is given a unique name and can be reused to target multiple permissions to multiple resources. Permissions to grant access Use fine-grained permissions to pull together the policies, resources, and authorization scopes that grant access to users. The name of each permission should clearly define which permissions it grants to which users. For example, Dev Team B can read from topics starting with x . 17.4.3. Permissions for common Kafka operations The following examples demonstrate the user permissions required for performing common operations on Kafka. Create a topic To create a topic, the Create permission is required for the specific topic, or for Cluster:kafka-cluster . bin/kafka-topics.sh --create --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties List topics If a user has the Describe permission on a specified topic, the topic is listed. bin/kafka-topics.sh --list \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display topic details To display a topic's details, Describe and DescribeConfigs permissions are required on the topic. bin/kafka-topics.sh --describe --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Produce messages to a topic To produce messages to a topic, Describe and Write permissions are required on the topic. If the topic hasn't been created yet, and topic auto-creation is enabled, the permissions to create a topic are required. bin/kafka-console-producer.sh --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties Consume messages from a topic To consume messages from a topic, Describe and Read permissions are required on the topic. Consuming from the topic normally relies on storing the consumer offsets in a consumer group, which requires additional Describe and Read permissions on the consumer group. Two resources are needed for matching. For example: bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties Produce messages to a topic using an idempotent producer As well as the permissions for producing to a topic, an additional IdempotentWrite permission is required on the Cluster:kafka-cluster resource. Two resources are needed for matching. For example: bin/kafka-console-producer.sh --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1 List consumer groups When listing consumer groups, only the groups on which the user has the Describe permissions are returned. Alternatively, if the user has the Describe permission on the Cluster:kafka-cluster , all the consumer groups are returned. bin/kafka-consumer-groups.sh --list \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display consumer group details To display a consumer group's details, the Describe permission is required on the group and the topics associated with the group. 
bin/kafka-consumer-groups.sh --describe --group my-group-1 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Change topic configuration To change a topic's configuration, the Describe and Alter permissions are required on the topic. bin/kafka-topics.sh --alter --topic my-topic --partitions 2 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Display Kafka broker configuration In order to use kafka-configs.sh to get a broker's configuration, the DescribeConfigs permission is required on the Cluster:kafka-cluster . bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Change Kafka broker configuration To change a Kafka broker's configuration, DescribeConfigs and AlterConfigs permissions are required on Cluster:kafka-cluster . bin/kafka-configs.sh --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Delete a topic To delete a topic, the Describe and Delete permissions are required on the topic. bin/kafka-topics.sh --delete --topic my-topic \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties Select a lead partition To run leader election for topic partitions, the Alter permission is required on the Cluster:kafka-cluster . bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties Reassign partitions To generate a partition reassignment file, Describe permissions are required on the topics involved. bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list "0,1" --generate \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json To execute the partition reassignment, Describe and Alter permissions are required on Cluster:kafka-cluster . Also, Describe permissions are required on the topics involved. bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --execute \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties To verify partition reassignment, Describe and AlterConfigs permissions are required on Cluster:kafka-cluster , and on each of the topics involved. bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify \ --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties 17.4.4. Example: Setting up Red Hat build of Keycloak Authorization Services If you are using OAuth 2.0 with Red Hat build of Keycloak for token-based authentication, you can also use Red Hat build of Keycloak to configure authorization rules to constrain client access to Kafka brokers. This example explains how to use Red Hat build of Keycloak Authorization Services with keycloak authorization. Set up Red Hat build of Keycloak Authorization Services to enforce access restrictions on Kafka clients. Red Hat build of Keycloak Authorization Services use authorization scopes, policies and permissions to define and apply access control to resources. Red Hat build of Keycloak Authorization Services REST endpoints provide a list of granted permissions on resources for authenticated users.
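For orientation, the grants returned by the Authorization Services REST endpoints are a JSON list that pairs resource names with the scopes granted to the authenticated user. A response might look roughly like the following; the identifiers, resource names, and scopes shown are hypothetical, and the exact fields can vary with the Red Hat build of Keycloak version:

[
  { "rsid": "ca6f195f-dbdc-48b7-a953-8e441d17f7fa", "rsname": "Topic:x_*", "scopes": [ "Describe", "Write" ] },
  { "rsid": "2f682749-3d64-4621-bf7e-3177ba863a72", "rsname": "Group:a_*", "scopes": [ "Describe", "Read" ] }
]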
The list of grants (permissions) is fetched from the Red Hat build of Keycloak server as the first action after an authenticated session is established by the Kafka client. The list is refreshed in the background so that changes to the grants are detected. Grants are cached and enforced locally on the Kafka broker for each user session to provide fast authorization decisions. Streams for Apache Kafka provides example configuration files . These include the following example files for setting up Red Hat build of Keycloak: kafka-ephemeral-oauth-single-keycloak-authz.yaml An example Kafka custom resource configured for OAuth 2.0 token-based authorization using Red Hat build of Keycloak. You can use the custom resource to deploy a Kafka cluster that uses keycloak authorization and token-based oauth authentication. kafka-authz-realm.json An example Red Hat build of Keycloak realm configured with sample groups, users, roles and clients. You can import the realm into a Red Hat build of Keycloak instance to set up fine-grained permissions to access Kafka. If you want to try the example with Red Hat build of Keycloak, use these files to perform the tasks outlined in this section in the order shown. Setting up permissions in Red Hat build of Keycloak Deploying a Kafka cluster with Red Hat build of Keycloak authorization Preparing TLS connectivity for a CLI Kafka client session Checking authorized access to Kafka using a CLI Kafka client session Authentication When you configure token-based oauth authentication, you specify a jwksEndpointUri as the URI for local JWT validation. When you configure keycloak authorization, you specify a tokenEndpointUri as the URI of the Red Hat build of Keycloak token endpoint. The hostname for both URIs must be the same. Targeted permissions with group or role policies In Red Hat build of Keycloak, confidential clients with service accounts enabled can authenticate to the server in their own name using a client ID and a secret. This is convenient for microservices that typically act in their own name, and not as agents of a particular user (like a web site). Service accounts can have roles assigned like regular users. They cannot, however, have groups assigned. As a consequence, if you want to target permissions to microservices using service accounts, you cannot use group policies, and should instead use role policies. Conversely, if you want to limit certain permissions only to regular user accounts where authentication with a username and password is required, you can achieve that as a side effect of using the group policies rather than the role policies. This is what is used in this example for permissions that start with ClusterManager . Performing cluster management is usually done interactively using CLI tools. It makes sense to require the user to log in before using the resulting access token to authenticate to the Kafka broker. In this case, the access token represents the specific user, rather than the client application. 17.4.4.1. Setting up permissions in Red Hat build of Keycloak Set up Red Hat build of Keycloak, then connect to its Admin Console and add the preconfigured realm. Use the example kafka-authz-realm.json file to import the realm. You can check the authorization rules defined for the realm in the Admin Console. The rules grant access to the resources on the Kafka cluster configured to use the example Red Hat build of Keycloak realm. Prerequisites A running OpenShift cluster. 
The Streams for Apache Kafka examples/security/keycloak-authorization/kafka-authz-realm.json file that contains the preconfigured realm. Procedure Install the Red Hat build of Keycloak server using the Red Hat build of Keycloak Operator as described in Getting Started in the Red Hat build of Keycloak documentation. Wait until the Red Hat build of Keycloak instance is running. Get the external hostname to be able to access the Admin Console. NS=sso oc get ingress keycloak -n USDNS In this example, we assume the Red Hat build of Keycloak server is running in the sso namespace. Get the password for the admin user. oc get -n USDNS pod keycloak-0 -o yaml | less The password is stored as a secret, so get the configuration YAML file for the Red Hat build of Keycloak instance to identify the name of the secret ( secretKeyRef.name ). Use the name of the secret to obtain the clear text password. SECRET_NAME=credential-keycloak oc get -n USDNS secret USDSECRET_NAME -o yaml | grep PASSWORD | awk '{print USD2}' | base64 -D In this example, we assume the name of the secret is credential-keycloak . Log in to the Admin Console with the username admin and the password you obtained. Use https:// HOSTNAME to access the Kubernetes Ingress . You can now upload the example realm to Red Hat build of Keycloak using the Admin Console. Click Add Realm to import the example realm. Add the examples/security/keycloak-authorization/kafka-authz-realm.json file, and then click Create . You now have kafka-authz as your current realm in the Admin Console. The default view displays the Master realm. In the Red Hat build of Keycloak Admin Console, go to Clients > kafka > Authorization > Settings and check that Decision Strategy is set to Affirmative . An affirmative policy means that at least one policy must be satisfied for a client to access the Kafka cluster. In the Red Hat build of Keycloak Admin Console, go to Groups , Users , Roles and Clients to view the realm configuration. Groups Groups organize users and set permissions. Groups can be linked to an LDAP identity provider and used to compartmentalize users, such as by region or department. Users can be added to groups through a custom LDAP interface to manage permissions for Kafka resources. For more information on using KeyCloak's LDAP provider, see the guide to Keycloak Server Administration . Users Users define individual users. In this example, alice (in the ClusterManager group) and bob (in ClusterManager-my-cluster ) are created. Users can also be stored in an LDAP identity provider. Roles Roles assign specific permissions to users or clients. Roles function like tags to give users certain rights. While roles cannot be stored in LDAP, you can add Red Hat build of Keycloak roles to groups to combine both roles and permissions. Clients Clients define configurations for Kafka interactions. The kafka client handles OAuth 2.0 token validation for brokers and contains authorization policies (which require authorization to be enabled). The kafka-cli client is used by command line tools to obtain access or refresh tokens. team-a-client and team-b-client represent services with partial access to specific Kafka topics. In the Red Hat build of Keycloak Admin Console, go to Authorization > Permissions to see the granted permissions that use the resources and policies defined for the realm. For example, the kafka client has the following permissions: Dev Team A The Dev Team A realm role can write to topics that start with x_ on any cluster. 
This combines a resource called Topic:x_* , Describe and Write scopes, and the Dev Team A policy. The Dev Team A policy matches all users that have a realm role called Dev Team A . Dev Team B The Dev Team B realm role can read from topics that start with x_ on any cluster. This combines Topic:x_* , Group:x_* resources, Describe and Read scopes, and the Dev Team B policy. The Dev Team B policy matches all users that have a realm role called Dev Team B . Matching users and clients have the ability to read from topics, and update the consumed offsets for topics and consumer groups that have names starting with x_ . 17.4.4.2. Deploying a Kafka cluster with Red Hat build of Keycloak authorization Deploy a Kafka cluster configured to connect to the Red Hat build of Keycloak server. Use the example kafka-ephemeral-oauth-single-keycloak-authz.yaml file to deploy the Kafka cluster as a Kafka custom resource. The example deploys a single-node Kafka cluster with keycloak authorization and oauth authentication. Prerequisites The Red Hat build of Keycloak authorization server is deployed to your OpenShift cluster and loaded with the example realm. The Cluster Operator is deployed to your OpenShift cluster. The Streams for Apache Kafka examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml custom resource. Procedure Use the hostname of the Red Hat build of Keycloak instance you deployed to prepare a truststore certificate for Kafka brokers to communicate with the Red Hat build of Keycloak server. SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem The certificate is required as Kubernetes Ingress is used to make a secure (HTTPS) connection. Usually there is not one single certificate, but a certificate chain. You only have to provide the top-most issuer CA, which is listed last in the /tmp/sso.pem file. You can extract it manually or using the following commands: Example command to extract the top CA certificate in a certificate chain split -p "-----BEGIN CERTIFICATE-----" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt Note A trusted CA certificate is normally obtained from a trusted source, and not by using the openssl command. Deploy the certificate to OpenShift as a secret. oc create secret generic oauth-server-cert --from-file=/tmp/sso-ca.crt -n USDNS Set the hostname as an environment variable SSO_HOST= SSO-HOSTNAME Create and deploy the example Kafka cluster. cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\USD{SSO_HOST}'"#USDSSO_HOST#" | oc create -n USDNS -f - 17.4.4.3. Preparing TLS connectivity for a CLI Kafka client session Create a new pod for an interactive CLI session. Set up a truststore with a Red Hat build of Keycloak certificate for TLS connectivity. The truststore is to connect to Red Hat build of Keycloak and the Kafka broker. Prerequisites The Red Hat build of Keycloak authorization server is deployed to your OpenShift cluster and loaded with the example realm. In the Red Hat build of Keycloak Admin Console, check the roles assigned to the clients are displayed in Clients > Service Account Roles . The Kafka cluster configured to connect with Red Hat build of Keycloak is deployed to your OpenShift cluster. 
Procedure Run a new interactive pod container using the Kafka image to connect to a running Kafka broker. NS=sso oc run -ti --restart=Never --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 kafka-cli -n USDNS -- /bin/sh Note If oc times out waiting on the image download, subsequent attempts may result in an AlreadyExists error. Attach to the pod container. oc attach -ti kafka-cli -n USDNS Use the hostname of the Red Hat build of Keycloak instance to prepare a certificate for client connection using TLS. SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem Usually there is not one single certificate, but a certificate chain. You only have to provide the top-most issuer CA, which is listed last in the /tmp/sso.pem file. You can extract it manually or using the following command: Example command to extract the top CA certificate in a certificate chain split -p "-----BEGIN CERTIFICATE-----" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt Note A trusted CA certificate is normally obtained from a trusted source, and not by using the openssl command. Create a truststore for TLS connection to the Kafka brokers. keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass USDSTOREPASS -import -file /tmp/sso-ca.crt -noprompt Use the Kafka bootstrap address as the hostname of the Kafka broker and the tls listener port (9093) to prepare a certificate for the Kafka broker. KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo "Q" | openssl s_client -showcerts -connect USDKAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/my-cluster-kafka.pem The obtained .pem file is usually not one single certificate, but a certificate chain. You only have to provide the top-most issuer CA, which is listed last in the /tmp/my-cluster-kafka.pem file. You can extract it manually or using the following command: Example command to extract the top CA certificate in a certificate chain split -p "-----BEGIN CERTIFICATE-----" /tmp/my-cluster-kafka.pem kafka- for f in USD(ls kafka-*); do mv USDf USDf.pem; done cp USD(ls kafka-* | sort -r | head -n 1) my-cluster-kafka-ca.crt Note A trusted CA certificate is normally obtained from a trusted source, and not by using the openssl command. For this example we assume the client is running in a pod in the same namespace where the Kafka cluster was deployed. If the client is accessing the Kafka cluster from outside the OpenShift cluster, you would have to first determine the bootstrap address. In that case you can also get the cluster certificate directly from the OpenShift secret, and there is no need for openssl . For more information, see Chapter 15, Setting up client access to a Kafka cluster . Add the certificate for the Kafka broker to the truststore. keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass USDSTOREPASS -import -file /tmp/my-cluster-kafka-ca.crt -noprompt Keep the session open to check authorized access. 17.4.4.4. Checking authorized access to Kafka using a CLI Kafka client session Check the authorization rules applied through the Red Hat build of Keycloak realm using an interactive CLI session. 
Apply the checks using Kafka's example producer and consumer clients to create topics with user and service accounts that have different levels of access. Use the team-a-client and team-b-client clients to check the authorization rules. Use the alice admin user to perform additional administrative tasks on Kafka. The Kafka image used in this example contains Kafka producer and consumer binaries. Prerequisites ZooKeeper and Kafka are running in the OpenShift cluster to be able to send and receive messages. The interactive CLI Kafka client session is started. Apache Kafka download . Setting up client and admin user configuration Prepare a Kafka configuration file with authentication properties for the team-a-client client. SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.client.id="team-a-client" \ oauth.client.secret="team-a-client-secret" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF The SASL OAUTHBEARER mechanism is used. This mechanism requires a client ID and client secret, which means the client first connects to the Red Hat build of Keycloak server to obtain an access token. The client then connects to the Kafka broker and uses the access token to authenticate. Prepare a Kafka configuration file with authentication properties for the team-b-client client. cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.client.id="team-b-client" \ oauth.client.secret="team-b-client-secret" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF Authenticate admin user alice by using curl and performing a password grant authentication to obtain a refresh token. USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=USD(curl -X POST "https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token" -H 'Content-Type: application/x-www-form-urlencoded' -d "grant_type=password&username=USDUSERNAME&password=USDPASSWORD&client_id=kafka-cli&scope=offline_access" -s -k) REFRESH_TOKEN=USD(echo USDGRANT_RESPONSE | awk -F "refresh_token\":\"" '{printf USD2}' | awk -F "\"" '{printf USD1}') The refresh token is an offline token that is long-lived and does not expire. Prepare a Kafka configuration file with authentication properties for the admin user alice . 
cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.refresh.token="USDREFRESH_TOKEN" \ oauth.client.id="kafka-cli" \ oauth.ssl.truststore.location="/tmp/truststore.p12" \ oauth.ssl.truststore.password="USDSTOREPASS" \ oauth.ssl.truststore.type="PKCS12" \ oauth.token.endpoint.uri="https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF The kafka-cli public client is used for the oauth.client.id in the sasl.jaas.config . Since it's a public client it does not require a secret. The client authenticates with the refresh token that was authenticated in the step. The refresh token requests an access token behind the scenes, which is then sent to the Kafka broker for authentication. Producing messages with authorized access Use the team-a-client configuration to check that you can produce messages to topics that start with a_ or x_ . Write to topic my-topic . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic my-topic \ --producer.config=/tmp/team-a-client.properties First message This request returns a Not authorized to access topics: [my-topic] error. team-a-client has a Dev Team A role that gives it permission to perform any supported actions on topics that start with a_ , but can only write to topics that start with x_ . The topic named my-topic matches neither of those rules. Write to topic a_messages . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --producer.config /tmp/team-a-client.properties First message Second message Messages are produced to Kafka successfully. Press CTRL+C to exit the CLI application. Check the Kafka container log for a debug log of Authorization GRANTED for the request. oc logs my-cluster-kafka-0 -f -n USDNS Consuming messages with authorized access Use the team-a-client configuration to consume messages from topic a_messages . Fetch messages from topic a_messages . bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties The request returns an error because the Dev Team A role for team-a-client only has access to consumer groups that have names starting with a_ . Update the team-a-client properties to specify the custom consumer group it is permitted to use. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1 The consumer receives all the messages from the a_messages topic. Administering Kafka with authorized access The team-a-client is an account without any cluster-level access, but it can be used with some administrative operations. List topics. bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list The a_messages topic is returned. List consumer groups. bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list The a_consumer_group_1 consumer group is returned. Fetch details on the cluster configuration. 
bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties \ --entity-type brokers --describe --entity-default The request returns an error because the operation requires cluster-level permissions that team-a-client does not have. Using clients with different permissions Use the team-b-client configuration to produce messages to topics that start with b_ . Write to topic a_messages . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages \ --producer.config /tmp/team-b-client.properties Message 1 This request returns a Not authorized to access topics: [a_messages] error. Write to topic b_messages . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic b_messages \ --producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3 Messages are produced to Kafka successfully. Write to topic x_messages . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-b-client.properties Message 1 A Not authorized to access topics: [x_messages] error is returned. The team-b-client can only read from topic x_messages . Write to topic x_messages using team-a-client . bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-a-client.properties Message 1 This request returns a Not authorized to access topics: [x_messages] error. The team-a-client can write to the x_messages topic, but it does not have permission to create a topic if it does not yet exist. Before team-a-client can write to the x_messages topic, an admin power user must create it with the correct configuration, such as the number of partitions and replicas. Managing Kafka with an authorized admin user Use admin user alice to manage Kafka. alice has full access to manage everything on any Kafka cluster. Create the x_messages topic as alice . bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties \ --topic x_messages --create --replication-factor 1 --partitions 1 The topic is created successfully. List all topics as alice . bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list Admin user alice can list all the topics, whereas team-a-client and team-b-client can only list the topics they have access to. The Dev Team A and Dev Team B roles both have Describe permission on topics that start with x_ , but they cannot see the other team's topics because they do not have Describe permissions on them. Use the team-a-client to produce messages to the x_messages topic: bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3 As alice created the x_messages topic, messages are produced to Kafka successfully. Use the team-b-client to produce messages to the x_messages topic.
bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --producer.config /tmp/team-b-client.properties Message 4 Message 5 This request returns a Not authorized to access topics: [x_messages] error. Use the team-b-client to consume messages from the x_messages topic: bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b The consumer receives all the messages from the x_messages topic. Use the team-a-client to consume messages from the x_messages topic. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a This request returns a Not authorized to access topics: [x_messages] error. Use the team-a-client to consume messages from a consumer group that begins with a_ . bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a This request returns a Not authorized to access topics: [x_messages] error. Dev Team A has no Read access on topics that start with x_ . Use alice to consume messages from the x_messages topic. bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages \ --from-beginning --consumer.config /tmp/alice.properties The consumer receives all the messages from the x_messages topic, because alice can read from or write to any topic. Use alice to read the cluster configuration. bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties \ --entity-type brokers --describe --entity-default The cluster configuration for this example is empty.
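The CLI checks above pass the OAuth settings through properties files. An application client can set the same options programmatically. The following Java sketch mirrors the team-a-client configuration used in this example; it assumes the io.strimzi kafka-oauth-client callback handler shown earlier is on the classpath, and the Red Hat build of Keycloak hostname and truststore password are placeholders you would replace with your own values:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TeamAProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Connection and TLS settings matching the CLI session above
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9093");
        props.put("security.protocol", "SASL_SSL");
        props.put("ssl.truststore.location", "/tmp/truststore.p12");
        props.put("ssl.truststore.password", "storepass");
        props.put("ssl.truststore.type", "PKCS12");
        // OAuth over SASL OAUTHBEARER, using the team-a-client ID and secret
        props.put("sasl.mechanism", "OAUTHBEARER");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required "
            + "oauth.client.id=\"team-a-client\" "
            + "oauth.client.secret=\"team-a-client-secret\" "
            + "oauth.ssl.truststore.location=\"/tmp/truststore.p12\" "
            + "oauth.ssl.truststore.password=\"storepass\" "
            + "oauth.ssl.truststore.type=\"PKCS12\" "
            + "oauth.token.endpoint.uri=\"https://<sso_hostname>/realms/kafka-authz/protocol/openid-connect/token\";");
        props.put("sasl.login.callback.handler.class",
            "io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Dev Team A may perform any supported action on topics starting with a_
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("a_messages", "hello from team-a-client"));
            producer.flush();
        }
    }
}

Running this producer is subject to the same Red Hat build of Keycloak permissions as the CLI session, so writing to a_messages succeeds, while writing to a topic outside the granted patterns returns an authorization error.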
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth enablePlain: true enableOauthBearer: false #",
"# - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth 1 validIssuerUri: https://<auth_server_address>/<issuer-context> 2 jwksEndpointUri: https://<auth_server_address>/<path_to_jwks_endpoint> 3 userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert pattern: \"*.crt\" disableTlsHostnameVerification: true 7 jwksExpirySeconds: 360 8 jwksRefreshSeconds: 300 9 jwksMinRefreshPauseSeconds: 1 10",
"# - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://kubernetes.default.svc.cluster.local 1 jwksEndpointUri: https://kubernetes.default.svc.cluster.local/openid/v1/jwks 2 serverBearerTokenLocation: /var/run/secrets/kubernetes.io/serviceaccount/token 3 checkAccessTokenType: false 4 includeAcceptHeader: false 5 tlsTrustedCertificates: 6 - secretName: oauth-server-cert pattern: \"*.crt\" maxSecondsWithoutReauthentication: 3600 customClaimCheck: \"@.['kubernetes.io'] && @.['kubernetes.io'].['namespace'] in ['myproject']\" 7",
"get cm kube-root-ca.crt -o jsonpath=\"{['data']['ca\\.crt']}\" > /tmp/ca.crt create secret generic oauth-server-cert --from-file=ca.crt=/tmp/ca.crt",
"- name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth validIssuerUri: https://<auth_server_address>/<issuer-context> introspectionEndpointUri: https://<auth_server_address>/<path_to_introspection_endpoint> 1 clientId: kafka-broker 2 clientSecret: 3 secretName: my-cluster-oauth key: clientSecret userNameClaim: preferred_username 4 maxSecondsWithoutReauthentication: 3600 5 tlsTrustedCertificates: - secretName: oauth-server-cert pattern: \"*.crt\"",
"authentication: type: oauth # checkIssuer: false 1 checkAudience: true 2 usernamePrefix: user- 3 fallbackUserNameClaim: client_id 4 fallbackUserNamePrefix: client-account- 5 serverBearerTokenLocation: path/to/access/token 6 validTokenType: bearer 7 userInfoEndpointUri: https://<auth_server_address>/<path_to_userinfo_endpoint> 8 enableOauthBearer: false 9 enablePlain: true 10 tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> 11 customClaimCheck: \"@.custom == 'custom-value'\" 12 clientAudience: audience 13 clientScope: scope 14 connectTimeoutSeconds: 60 15 readTimeoutSeconds: 60 16 httpRetries: 2 17 httpRetryPauseMs: 300 18 groupsClaim: \"USD.groups\" 19 groupsClaimDelimiter: \",\" 20 includeAcceptHeader: false 21",
"security.protocol=SASL_SSL 1 sasl.mechanism=OAUTHBEARER 2 ssl.truststore.location=/tmp/truststore.p12 3 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" \\ 4 oauth.client.id=\"<client_id>\" \\ 5 oauth.client.secret=\"<client_secret>\" \\ 6 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" \\ 7 oauth.ssl.truststore.password=\"USDSTOREPASS\" \\ 8 oauth.ssl.truststore.type=\"PKCS12\" \\ 9 oauth.scope=\"<scope>\" \\ 10 oauth.audience=\"<audience>\" ; 11 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" oauth.client.assertion.location=\"<path_to_client_assertion_token_file>\" \\ 1 oauth.client.assertion.type=\"urn:ietf:params:oauth:client-assertion-type:jwt-bearer\" \\ 2 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.password.grant.username=\"<username>\" \\ 3 oauth.password.grant.password=\"<password>\" \\ 4 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.scope=\"<scope>\" oauth.audience=\"<audience>\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.access.token=\"<access_token>\" ; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.access.token.location=\"/var/run/secrets/kubernetes.io/serviceaccount/token\"; 1 sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"security.protocol=SASL_SSL sasl.mechanism=OAUTHBEARER ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.token.endpoint.uri=\"<token_endpoint_url>\" oauth.client.id=\"<client_id>\" \\ 1 oauth.client.secret=\"<client_secret>\" \\ 2 oauth.refresh.token=\"<refresh_token>\" \\ 3 oauth.ssl.truststore.location=\"/tmp/oauth-truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler",
"oauth.sasl.extension.key1=\"value1\" oauth.sasl.extension.key2=\"value2\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: tls port: 9093 type: internal tls: true authentication: type: oauth - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: oauth #",
"logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w",
"<dependency> <groupId>io.strimzi</groupId> <artifactId>kafka-oauth-client</artifactId> <version>0.15.0.redhat-00012</version> </dependency>",
"Properties props = new Properties(); try (FileReader reader = new FileReader(\"client.properties\", StandardCharsets.UTF_8)) { props.load(reader); }",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Secret metadata: name: my-bridge-oauth type: Opaque data: clientSecret: MGQ1OTRmMzYtZTllZS00MDY2LWI5OGEtMTM5MzM2NjdlZjQw 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth 1 tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> 2 clientId: kafka-bridge clientSecret: secretName: my-bridge-oauth key: clientSecret tlsTrustedCertificates: 3 - secretName: oauth-server-cert pattern: \"*.crt\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth tokenEndpointUri: https://<auth_server_address>/<path_to_token_endpoint> clientId: kafka-bridge clientAssertionLocation: /var/run/secrets/sso/assertion 1 tlsTrustedCertificates: - secretName: oauth-server-cert pattern: \"*.crt\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # authentication: type: oauth accessTokenLocation: /var/run/secrets/kubernetes.io/serviceaccount/token 1",
"spec: # authentication: # disableTlsHostnameVerification: true 1 accessTokenIsJwt: false 2 scope: any 3 audience: kafka 4 connectTimeoutSeconds: 60 5 readTimeoutSeconds: 60 6 httpRetries: 2 7 httpRetryPauseMs: 300 8 includeAcceptHeader: false 9",
"logs -f USD{POD_NAME} -c USD{CONTAINER_NAME} get pod -w",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # authorization: type: keycloak 1 tokenEndpointUri: < https://<auth-server-address>/realms/external/protocol/openid-connect/token > 2 clientId: kafka 3 delegateToKafkaAcls: false 4 disableTlsHostnameVerification: false 5 superUsers: 6 - CN=user-1 - user-2 - CN=user-3 tlsTrustedCertificates: 7 - secretName: oauth-server-cert pattern: \"*.crt\" grantsRefreshPeriodSeconds: 60 8 grantsRefreshPoolSize: 5 9 grantsMaxIdleSeconds: 300 10 grantsGcPeriodSeconds: 300 11 grantsAlwaysLatest: false 12 connectTimeoutSeconds: 60 13 readTimeoutSeconds: 60 14 httpRetries: 2 15 enableMetrics: false 16 includeAcceptHeader: false 17 #",
"logs -f USD{POD_NAME} -c kafka get pod -w",
"Topic:my-topic Topic:orders-* Group:orders-* Cluster:*",
"kafka-cluster:my-cluster,Topic:* kafka-cluster:*,Group:b_*",
"bin/kafka-topics.sh --create --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --describe --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties",
"Topic:my-topic Group:my-group-*",
"bin/kafka-console-consumer.sh --topic my-topic --group my-group-1 --from-beginning --bootstrap-server my-cluster-kafka-bootstrap:9092 --consumer.config /tmp/config.properties",
"Topic:my-topic Cluster:kafka-cluster",
"bin/kafka-console-producer.sh --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --producer.config=/tmp/config.properties --producer-property enable.idempotence=true --request-required-acks -1",
"bin/kafka-consumer-groups.sh --list --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-consumer-groups.sh --describe --group my-group-1 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --alter --topic my-topic --partitions 2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-configs.sh --entity-type brokers --entity-name 0 --describe --all --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-configs --entity-type brokers --entity-name 0 --alter --add-config log.cleaner.threads=2 --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-topics.sh --delete --topic my-topic --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config=/tmp/config.properties",
"bin/kafka-leader-election.sh --topic my-topic --partition 0 --election-type PREFERRED / --bootstrap-server my-cluster-kafka-bootstrap:9092 --admin.config /tmp/config.properties",
"bin/kafka-reassign-partitions.sh --topics-to-move-json-file /tmp/topics-to-move.json --broker-list \"0,1\" --generate --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties > /tmp/partition-reassignment.json",
"bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --execute --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties",
"bin/kafka-reassign-partitions.sh --reassignment-json-file /tmp/partition-reassignment.json --verify --bootstrap-server my-cluster-kafka-bootstrap:9092 --command-config /tmp/config.properties",
"NS=sso get ingress keycloak -n USDNS",
"get -n USDNS pod keycloak-0 -o yaml | less",
"SECRET_NAME=credential-keycloak get -n USDNS secret USDSECRET_NAME -o yaml | grep PASSWORD | awk '{print USD2}' | base64 -D",
"Dev Team A can write to topics that start with x_ on any cluster Dev Team B can read from topics that start with x_ on any cluster Dev Team B can update consumer group offsets that start with x_ on any cluster ClusterManager of my-cluster Group has full access to cluster config on my-cluster ClusterManager of my-cluster Group has full access to consumer groups on my-cluster ClusterManager of my-cluster Group has full access to topics on my-cluster",
"SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt",
"create secret generic oauth-server-cert --from-file=/tmp/sso-ca.crt -n USDNS",
"SSO_HOST= SSO-HOSTNAME",
"cat examples/security/keycloak-authorization/kafka-ephemeral-oauth-single-keycloak-authz.yaml | sed -E 's#\\USD{SSO_HOST}'\"#USDSSO_HOST#\" | oc create -n USDNS -f -",
"NS=sso run -ti --restart=Never --image=registry.redhat.io/amq-streams/kafka-39-rhel9:2.9.0 kafka-cli -n USDNS -- /bin/sh",
"attach -ti kafka-cli -n USDNS",
"SSO_HOST= SSO-HOSTNAME SSO_HOST_PORT=USDSSO_HOST:443 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDSSO_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/sso.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" sso.pem sso- for f in USD(ls sso-*); do mv USDf USDf.pem; done cp USD(ls sso-* | sort -r | head -n 1) sso-ca.crt",
"keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias sso -storepass USDSTOREPASS -import -file /tmp/sso-ca.crt -noprompt",
"KAFKA_HOST_PORT=my-cluster-kafka-bootstrap:9093 STOREPASS=storepass echo \"Q\" | openssl s_client -showcerts -connect USDKAFKA_HOST_PORT 2>/dev/null | awk ' /BEGIN CERTIFICATE/,/END CERTIFICATE/ { print USD0 } ' > /tmp/my-cluster-kafka.pem",
"split -p \"-----BEGIN CERTIFICATE-----\" /tmp/my-cluster-kafka.pem kafka- for f in USD(ls kafka-*); do mv USDf USDf.pem; done cp USD(ls kafka-* | sort -r | head -n 1) my-cluster-kafka-ca.crt",
"keytool -keystore /tmp/truststore.p12 -storetype pkcs12 -alias my-cluster-kafka -storepass USDSTOREPASS -import -file /tmp/my-cluster-kafka-ca.crt -noprompt",
"SSO_HOST= SSO-HOSTNAME cat > /tmp/team-a-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-a-client\" oauth.client.secret=\"team-a-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"cat > /tmp/team-b-client.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.client.id=\"team-b-client\" oauth.client.secret=\"team-b-client-secret\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"USERNAME=alice PASSWORD=alice-password GRANT_RESPONSE=USD(curl -X POST \"https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token\" -H 'Content-Type: application/x-www-form-urlencoded' -d \"grant_type=password&username=USDUSERNAME&password=USDPASSWORD&client_id=kafka-cli&scope=offline_access\" -s -k) REFRESH_TOKEN=USD(echo USDGRANT_RESPONSE | awk -F \"refresh_token\\\":\\\"\" '{printf USD2}' | awk -F \"\\\"\" '{printf USD1}')",
"cat > /tmp/alice.properties << EOF security.protocol=SASL_SSL ssl.truststore.location=/tmp/truststore.p12 ssl.truststore.password=USDSTOREPASS ssl.truststore.type=PKCS12 sasl.mechanism=OAUTHBEARER sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.refresh.token=\"USDREFRESH_TOKEN\" oauth.client.id=\"kafka-cli\" oauth.ssl.truststore.location=\"/tmp/truststore.p12\" oauth.ssl.truststore.password=\"USDSTOREPASS\" oauth.ssl.truststore.type=\"PKCS12\" oauth.token.endpoint.uri=\"https://USDSSO_HOST/realms/kafka-authz/protocol/openid-connect/token\" ; sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler EOF",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic my-topic --producer.config=/tmp/team-a-client.properties First message",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-a-client.properties First message Second message",
"logs my-cluster-kafka-0 -f -n USDNS",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list",
"bin/kafka-consumer-groups.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list",
"bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --entity-type brokers --describe --entity-default",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic a_messages --producer.config /tmp/team-b-client.properties Message 1",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic b_messages --producer.config /tmp/team-b-client.properties Message 1 Message 2 Message 3",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 1",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --topic x_messages --create --replication-factor 1 --partitions 1",
"bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-a-client.properties --list bin/kafka-topics.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/team-b-client.properties --list",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-a-client.properties Message 1 Message 2 Message 3",
"bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --producer.config /tmp/team-b-client.properties Message 4 Message 5",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-b-client.properties --group x_consumer_group_b",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group x_consumer_group_a",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/team-a-client.properties --group a_consumer_group_a",
"bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --topic x_messages --from-beginning --consumer.config /tmp/alice.properties",
"bin/kafka-configs.sh --bootstrap-server my-cluster-kafka-bootstrap:9093 --command-config /tmp/alice.properties --entity-type brokers --describe --entity-default"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/assembly-oauth-security-str |
Chapter 3. Sink connectors | Chapter 3. Sink connectors Debezium provides sink connectors that can consume events from sources such as Apache Kafka topics. A sink connector standardizes the format of the data, and then persists the event data to a configured sink repository. Other systems, applications, or users can then access the events from the data sink. Because the sink connector applies a consistent structure to the event data that it consumes, downstream applications that read from the data sink can more easily interpret and process that data. Currently, Debezium provides the following sink connectors: JDBC sink connector 3.1. Debezium connector for JDBC The Debezium JDBC connector is a Kafka Connect sink connector implementation that can consume events from multiple source topics, and then write those events to a relational database by using a JDBC driver. This connector supports a wide variety of database dialects, including Db2, MySQL, Oracle, PostgreSQL, and SQL Server. 3.1.1. How the Debezium JDBC connector works The Debezium JDBC connector is a Kafka Connect sink connector, and therefore requires the Kafka Connect runtime. The connector periodically polls the Kafka topics that it subscribes to, consumes events from those topics, and then writes the events to the configured relational database. The connector supports idempotent write operations by using upsert semantics and basic schema evolution. The Debezium JDBC connector provides the following features: Section 3.1.1.1, "Description of how the Debezium JDBC connector consumes complex change events" Section 3.1.1.2, "Description of Debezium JDBC connector at-least-once delivery" Section 3.1.1.3, "Description of Debezium JDBC use of multiple tasks" Section 3.1.1.4, "Description of Debezium JDBC connector data and column type mappings" Section 3.1.1.5, "Description of how the Debezium JDBC connector handles primary keys in source events" Section 3.1.1.6, "Configuring the Debezium JDBC connector to delete rows when consuming DELETE or tombstone events" Section 3.1.1.7, "Enabling the connector to perform idempotent writes" Section 3.1.1.8, "Schema evolution modes for the Debezium JDBC connector" Section 3.1.1.9, "Specifying options to define the letter case of destination table and column names" Connection Idle Timeouts 3.1.1.1. Description of how the Debezium JDBC connector consumes complex change events By default, Debezium source connectors produce complex, hierarchical change events. When Debezium connectors are used with other JDBC sink connector implementations, you might need to apply the ExtractNewRecordState single message transformation (SMT) to flatten the payload of change events, so that they can be consumed by the sink implementation. If you run the Debezium JDBC sink connector, it's not necessary to deploy the SMT, because the Debezium sink connector can consume native Debezium change events directly, without the use of a transformation. When the JDBC sink connector consumes a complex change event from a Debezium source connector, it extracts the values from the after section of the original insert or update event. When a delete event is consumed by the sink connector, no part of the event's payload is consulted. Important The Debezium JDBC sink connector is not designed to read from schema change topics. If your source connector is configured to capture schema changes, in the JDBC connector configuration, set the topics or topics.regex properties so that the connector does not consume from schema change topics. 
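For example, if a hypothetical source connector writes data change events to topics named inventory.inventory.customers and inventory.inventory.orders and emits schema changes to a separate topic, you might restrict the sink connector to the data topics only. The following fragment is a minimal sketch with illustrative topic names; adapt the pattern to your own topic naming convention. Example: restricting the consumed topics (illustrative names)
{
  "topics.regex": "inventory\\.inventory\\.(customers|orders)"
}
Because the regular expression matches only the data change topics, the connector never subscribes to the schema change topic.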
3.1.1.2. Description of Debezium JDBC connector at-least-once delivery The Debezium JDBC sink connector guarantees that events that it consumes from Kafka topics are processed at least once. 3.1.1.3. Description of Debezium JDBC use of multiple tasks You can run the Debezium JDBC sink connector across multiple Kafka Connect tasks. To run the connector across multiple tasks, set the tasks.max configuration property to the number of tasks that you want the connector to use. The Kafka Connect runtime starts the specified number of tasks, and runs one instance of the connector per task. Multiple tasks can improve performance by reading and processing changes from multiple source topics in parallel. 3.1.1.4. Description of Debezium JDBC connector data and column type mappings To enable the Debezium JDBC sink connector to correctly map the data type from an inbound message field to an outbound message field, the connector requires information about the data type of each field that is present in the source event. The connector supports a wide range of column type mappings across different database dialects. To correctly convert the destination column type from the type metadata in an event field, the connector applies the data type mappings that are defined for the source database. You can enhance the way that the connector resolves data types for a column by setting the column.propagate.source.type or datatype.propagate.source.type options in the source connector configuration. When you enable these options, Debezium includes extra parameter metadata, which assists the JDBC sink connector in more accurately resolving the data type of destination columns. For the Debezium JDBC sink connector to process events from a Kafka topic, the Kafka topic message key, when present, must be a primitive data type or a Struct . In addition, the payload of the source message must be a Struct that has either a flattened structure with no nested struct types, or a nested struct layout that conforms to Debezium's complex, hierarchical structure. If the structure of the events in the Kafka topic does not adhere to these rules, you must implement a custom single message transformation to convert the structure of the source events into a usable format. 3.1.1.5. Description of how the Debezium JDBC connector handles primary keys in source events By default, the Debezium JDBC sink connector does not transform any of the fields in the source event into the primary key for the event. Unfortunately, the lack of a stable primary key can complicate event processing, depending on your business requirements, or when the sink connector uses upsert semantics. To define a consistent primary key, you can configure the connector to use one of the primary key modes described in the following table: Mode Description none No primary key fields are specified when creating the table. kafka The primary key consists of the following three columns: __connect_topic __connect_partition __connect_offset The values for these columns are sourced from the coordinates of the Kafka event. record_key The primary key is composed of the Kafka event's key. If the primary key is a primitive type, specify the name of the column to be used by setting the primary.key.fields property. If the primary key is a struct type, the fields in the struct are mapped as columns of the primary key. You can use the primary.key.fields property to restrict the primary key to a subset of columns. record_value The primary key is composed of the Kafka event's value.
Because the value of a Kafka event is always a Struct , by default, all of the fields in the value become columns of the primary key. To use a subset of fields in the primary key, set the primary.key.fields property to specify a comma-separated list of fields in the value from which you want to derive the primary key columns. record_header The primary key is composed of the Kafka event's headers. The Kafka event's headers can contain multiple headers, and each header can be a Struct or a primitive data type; the connector builds a Struct from these headers. Hence, all fields in this Struct become columns of the primary key. To use a subset of fields in the primary key, set the primary.key.fields property to specify a comma-separated list of fields in the value from which you want to derive the primary key columns. Important Some database dialects might throw an exception if you set the primary.key.mode to kafka and set schema.evolution to basic . This exception occurs when a dialect maps the STRING data type to a variable-length string data type such as TEXT or CLOB , and the dialect does not allow primary key columns to have unbounded lengths. To avoid this problem, apply the following settings in your environment: Do not set schema.evolution to basic . Create the database table and primary key mappings in advance. Important If a column maps to a data type that is not permitted as a primary key for your target database, you must set primary.key.fields to an explicit list of columns that excludes such columns. Consult your specific database vendor's documentation for what data types are and are not permissible. 3.1.1.6. Configuring the Debezium JDBC connector to delete rows when consuming DELETE or tombstone events The Debezium JDBC sink connector can delete rows in the destination database when a DELETE or tombstone event is consumed. By default, the JDBC sink connector does not enable delete mode. If you want the connector to remove rows, you must explicitly set delete.enabled=true in the connector configuration. To use this mode you must also set primary.key.mode to a value other than none . The preceding configuration is necessary, because deletes are executed based on the primary key mapping, so if a destination table has no primary key mapping, the connector is unable to delete rows. 3.1.1.7. Enabling the connector to perform idempotent writes The Debezium JDBC sink connector can perform idempotent writes, enabling it to replay the same records repeatedly and not change the final database state. To enable the connector to perform idempotent writes, you must explicitly set the insert.mode for the connector to upsert . An upsert operation is applied as either an update or an insert , depending on whether the specified primary key already exists. If the primary key value already exists, the operation updates values in the row. If the specified primary key value doesn't exist, an insert adds a new row. Each database dialect handles idempotent writes differently, because there is no SQL standard for upsert operations. The following table shows the upsert DML syntax for the database dialects that Debezium supports: Dialect Upsert Syntax Db2 MERGE ... MySQL INSERT ... ON DUPLICATE KEY UPDATE ... Oracle MERGE ... PostgreSQL INSERT ... ON CONFLICT ... DO UPDATE SET ... SQL Server MERGE ... 3.1.1.8.
Schema evolution modes for the Debezium JDBC connector You can use the following schema evolution modes with the Debezium JDBC sink connector: Mode Description none The connector does not perform any DDL schema evolution. basic The connector automatically detects fields that are in the event payload but that do not exist in the destination table. The connector alters the destination table to add the new fields. When schema.evolution is set to basic , the connector automatically creates or alters the destination database table according to the structure of the incoming event. When an event is received from a topic for the first time, and the destination table does not yet exist, the Debezium JDBC sink connector uses the event's key, or the schema structure of the record to resolve the column structure of the table. If schema evolution is enabled, the connector prepares and executes a CREATE TABLE SQL statement before it applies the DML event to the destination table. When the Debezium JDBC connector receives an event from a topic, if the schema structure of the record differs from the schema structure of the destination table, the connector uses either the event's key or its schema structure to identify which columns are new, and must be added to the database table. If schema evolution is enabled, the connector prepares and executes an ALTER TABLE SQL statement before it applies the DML event to the destination table. Because changing column data types, dropping columns, and adjusting primary keys can be considered dangerous operations, the connector is prohibited from performing these operations. The schema of each field determines whether a column is NULL or NOT NULL . The schema also defines the default values for each column. If the connector attempts to create a table with a nullability setting or a default value that you don't want, you must either create the table manually, ahead of time, or adjust the schema of the associated field before the sink connector processes the event. To adjust nullability settings or default values, you can introduce a custom single message transformation that applies changes in the pipeline, or modifies the column state defined in the source database. A field's data type is resolved based on a predefined set of mappings. For more information, see Section 3.1.2, "How the Debezium JDBC connector maps data types" . Important When you introduce new fields to the event structure of tables that already exist in the destination database, you must define the new fields as optional, or the fields must have a default value specified in the database schema. If you want a field to be removed from the destination table, use one of the following options: Remove the field manually. Drop the column. Assign a default value to the field. Define the field as nullable. 3.1.1.9. Specifying options to define the letter case of destination table and column names The Debezium JDBC sink connector consumes Kafka messages by constructing either DDL (schema changes) or DML (data changes) SQL statements that are executed on the destination database. By default, the connector uses the names of the source topic and the event fields as the basis for the table and column names in the destination table. The constructed SQL does not automatically delimit identifiers with quotes to preserve the case of the original strings.
As a result, by default, the text case of table or column names in the destination database depends entirely on how the database handles name strings when the case is not specified. For example, if the destination database dialect is Oracle and the event's topic is orders , the destination table will be created as ORDERS because Oracle defaults to upper-case names when the name is not quoted. Similarly, if the destination database dialect is PostgreSQL and the event's topic is ORDERS , the destination table will be created as orders because PostgreSQL defaults to lower-case names when the name is not quoted. To explicitly preserve the case of the table and field names that are present in a Kafka event, in the connector configuration, set the value of the quote.identifiers property to true . When this option is set, if an incoming event is for a topic called orders , and the destination database dialect is Oracle, the connector creates a table with the name orders , because the constructed SQL defines the name of the table as "orders" . Enabling quoting results in the same behavior when the connector creates column names. Connection Idle Timeouts The JDBC sink connector for Debezium leverages a connection pool to enhance performance. Connection pools are engineered to establish an initial set of connections, maintain a specified number of connections, and efficiently allocate connections to the application as required. However, a challenge arises when connections linger idle in the pool, potentially triggering timeouts if they remain inactive beyond the configured idle timeout threshold of the database. To mitigate the potential for idle connection threads to trigger timeouts, connection pools offer a mechanism that periodically validates the activity of each connection. This validation ensures that connections remain active, and prevents the database from flagging them as idle. In the event of a network disruption, if Debezium attempts to use a terminated connection, the connector prompts the pool to generate a new connection. By default, the Debezium JDBC sink connector does not conduct idle timeout tests. However, you can configure the connector to request the pool to perform timeout tests at a specified interval by setting the hibernate.c3p0.idle_test_period property. For example: Example timeout configuration { "hibernate.c3p0.idle_test_period": "300" } The Debezium JDBC sink connector uses the Hibernate C3P0 connection pool. You can customize the C3P0 connection pool by setting properties in the hibernate.c3p0.* configuration namespace. In the preceding example, the setting of the hibernate.c3p0.idle_test_period property configures the connection pool to perform idle timeout tests every 300 seconds. After you apply the configuration, the connection pool begins to assess unused connections every five minutes. 3.1.2. How the Debezium JDBC connector maps data types The Debezium JDBC sink connector resolves a column's data type by using a logical or primitive type-mapping system. Primitive types include values such as integers, floating points, Booleans, strings, and bytes. Typically, these types are represented with a specific Kafka Connect Schema type code only. Logical data types are more often complex types, including values such as Struct -based types that have a fixed set of field names and schema, or values that are represented with a specific encoding, such as number of days since epoch.
The following examples show representative structures of primitive and logical data types: Primitive field schema Logical field schema [ "schema": { "type": "INT64", "name": "org.apache.kafka.connect.data.Date" } ] Kafka Connect is not the only source for these complex, logical types. In fact, Debezium source connectors generate change events that have fields with similar logical types to represent a variety of different data types, including but not limited to, timestamps, dates, and even JSON data. The Debezium JDBC sink connector uses these primitive and logical types to resolve a column's type to a JDBC SQL code, which represents a column's type. These JDBC SQL codes are then used by the underlying Hibernate persistence framework to resolve the column's type to a logical data type for the dialect in use. The following tables illustrate the primitive and logical mappings between Kafka Connect and JDBC SQL types, and between Debezium and JDBC SQL types. The actual final column type varies with for each database type. Table 3.1, "Mappings between Kafka Connect Primitives and Column Data Types" Table 3.2, "Mappings between Kafka Connect Logical Types and Column Data Types" Table 3.3, "Mappings between Debezium Logical Types and Column Data Types" Table 3.4, "Mappings between Debezium dialect-specific Logical Types and Column Data Types" Table 3.1. Mappings between Kafka Connect Primitives and Column Data Types Primitive Type JDBC SQL Type INT8 Types.TINYINT INT16 Types.SMALLINT INT32 Types.INTEGER INT64 Types.BIGINT FLOAT32 Types.FLOAT FLOAT64 Types.DOUBLE BOOLEAN Types.BOOLEAN STRING Types.CHAR, Types.NCHAR, Types.VARCHAR, Types.NVARCHAR BYTES Types.VARBINARY Table 3.2. Mappings between Kafka Connect Logical Types and Column Data Types Logical Type JDBC SQL Type org.apache.kafka.connect.data.Decimal Types.DECIMAL org.apache.kafka.connect.data.Date Types.DATE org.apache.kafka.connect.data.Time Types.TIMESTAMP org.apache.kafka.connect.data.Timestamp Types.TIMESTAMP Table 3.3. Mappings between Debezium Logical Types and Column Data Types Logical Type JDBC SQL Type io.debezium.time.Date Types.DATE io.debezium.time.Time Types.TIMESTAMP io.debezium.time.MicroTime Types.TIMESTAMP io.debezium.time.NanoTime Types.TIMESTAMP io.debezium.time.ZonedTime Types.TIME_WITH_TIMEZONE io.debezium.time.Timestamp Types.TIMESTAMP io.debezium.time.MicroTimestamp Types.TIMESTAMP io.debezium.time.NanoTimestamp Types.TIMESTAMP io.debezium.time.ZonedTimestamp Types.TIMESTAMP_WITH_TIMEZONE io.debezium.data.VariableScaleDecimal Types.DOUBLE Important If the database does not support time or timestamps with time zones, the mapping resolves to its equivalent without timezones. Table 3.4. Mappings between Debezium dialect-specific Logical Types and Column Data Types Logical Type MySQL SQL Type PostgreSQL SQL Type SQL Server SQL Type io.debezium.data.Bits bit(n) bit(n) or bit varying varbinary(n) io.debezium.data.Enum enum Types.VARCHAR n/a io.debezium.data.Json json json n/a io.debezium.data.EnumSet set n/a n/a io.debezium.time.Year year(n) n/a n/a io.debezium.time.MicroDuration n/a interval n/a io.debezium.data.Ltree n/a ltree n/a io.debezium.data.Uuid n/a uuid n/a io.debezium.data.Xml n/a xml xml In addition to the primitive and logical mappings above, if the source of the change events is a Debezium source connector, the resolution of the column type, along with its length, precision, and scale, can be further influenced by enabling column or data type propagation. 
To enforce propagation, one of the following properties must be set in the source connector configuration: column.propagate.source.type datatype.propagate.source.type The Debezium JDBC sink connector applies the values with the higher precedence. For example, let's say the following field schema is included in a change event: Debezium change event field schema with column or data type propagation enabled { "schema": { "type": "INT8", "parameters": { "__debezium.source.column.type": "TINYINT", "__debezium.source.column.length": "1" } } } In the preceding example, if no schema parameters are set, the Debezium JDBC sink connector maps this field to a column type of Types.SMALLINT . Types.SMALLINT can have different logical database types, depending on the database dialect. For MySQL, the column type in the example converts to a TINYINT column type with no specified length. If column or data type propagation is enabled for the source connector, the Debezium JDBC sink connector uses the mapping information to refine the data type mapping process and create a column with the type TINYINT(1) . Note Typically, the effect of using column or data type propagation is much greater when the same type of database is used for both the source and sink database. 3.1.3. Deployment of Debezium JDBC connectors To deploy a Debezium JDBC connector, you install the Debezium JDBC connector archive, configure the connector, and start the connector by adding its configuration to Kafka Connect. Prerequisites Apache ZooKeeper , Apache Kafka , and Kafka Connect are installed. A destination database is installed and configured to accept JDBC connections. Procedure Download the Debezium JDBC connector plug-in archive . Extract the files into your Kafka Connect environment. Optionally download the JDBC driver from Maven Central and extract the downloaded driver file to the directory that contains the JDBC sink connector JAR file. Note Drivers for Oracle and Db2 are not included with the JDBC sink connector. You must download the drivers and install them manually. Add the driver JAR files to the path where the JDBC sink connector has been installed. Make sure that the path where you install the JDBC sink connector is part of the Kafka Connect plugin.path . Restart the Kafka Connect process to pick up the new JAR files. 3.1.3.1. Debezium JDBC connector configuration Typically, you register a Debezium JDBC connector by submitting a JSON request that specifies the configuration properties for the connector. The following example shows a JSON request for registering an instance of the Debezium JDBC sink connector that consumes events from a topic called orders with the most common configuration settings: Example: Debezium JDBC connector configuration { "name": "jdbc-connector", 1 "config": { "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector", 2 "tasks.max": "1", 3 "connection.url": "jdbc:postgresql://localhost/db", 4 "connection.username": "pguser", 5 "connection.password": "pgpassword", 6 "insert.mode": "upsert", 7 "delete.enabled": "true", 8 "primary.key.mode": "record_key", 9 "schema.evolution": "basic", 10 "database.time_zone": "UTC", 11 "topics": "orders" 12 } } Descriptions of JDBC connector configuration settings Item Description 1 The name that is assigned to the connector when you register it with Kafka Connect service. 2 The name of the JDBC sink connector class. 3 The maximum number of tasks to create for this connector. 
4 The JDBC URL that the connector uses to connect to the sink database that it writes to. 5 The name of the database user that is used for authentication. 6 The password of the database user used for authentication. 7 The insert.mode that the connector uses. 8 Enables the deletion of records in the database. For more information, see the delete.enabled configuration property. 9 Specifies the method used to resolve primary key columns. For more information, see the primary.key.mode configuration property. 10 Enables the connector to evolve the destination database's schema. For more information, see the schema.evolution configuration property. 11 Specifies the timezone used when writing temporal field types. 12 List of topics to consume, separated by commas. For a complete list of configuration properties that you can set for the Debezium JDBC connector, see JDBC connector properties . You can send this configuration with a POST command to a running Kafka Connect service. The service records the configuration and starts a sink connector task(s) that performs the following operations: Connects to the database. Consumes events from subscribed Kafka topics. Writes the events to the configured database. 3.1.4. Descriptions of Debezium JDBC connector configuration properties The Debezium JDBC sink connector has several configuration properties that you can use to achieve the connector behavior that meets your needs. Many properties have default values. Information about the properties is organized as follows: JCBC connector generic properties JDBC connector connection properties JDBC connector runtime properties JDBC connector extendable properties JDBC connector hibernate.* passthrough properties Table 3.5. JDBC connector generic properties Property Default Description name No default Unique name for the connector. A failure results if you attempt to reuse this name when registering a connector. This property is required by all Kafka Connect connectors. connector.class No default The name of the Java class for the connector. For the Debezium JDBC connector, specify the value io.debezium.connector.jdbc.JdbcSinkConnector . tasks.max 1 Maximum number of tasks to use for this connector. topics No default List of topics to consume, separated by commas. Do not use this property in combination with the topics.regex property. topics.regex No default A regular expression that specifies the topics to consume. Internally, the regular expression is compiled to a java.util.regex.Pattern . Do not use this property in combination with the topics property. Table 3.6. JDBC connector connection properties Property Default Description connection.provider org.hibernate.c3p0.internal.C3P0ConnectionProvider The connection provider implementation to use. connection.url No default The JDBC connection URL used to connect to the database. connection.username No default The name of the database user account that the connector uses to connect to the database. connection.password No default The password that the connector uses to connect to the database. connection.pool.min_size 5 Specifies the minimum number of connections in the pool. connection.pool.max_size 32 Specifies the maximum number of concurrent connections that the pool maintains. connection.pool.acquire_increment 32 Specifies the number of connections that the connector attempts to acquire if the connection pool exceeds its maximum size. connection.pool.timeout 1800 Specifies the number of seconds that an unused connection is kept before it is discarded. 
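To illustrate how the connection pool properties in Table 3.6 fit together, the following fragment sketches part of the config block of a connector registration request. The values shown are illustrative assumptions, not recommendations; tune them for your workload. Example: connection pool settings (illustrative values)
{
  "connection.url": "jdbc:postgresql://localhost/db",
  "connection.username": "pguser",
  "connection.password": "pgpassword",
  "connection.pool.min_size": "2",
  "connection.pool.max_size": "16",
  "connection.pool.acquire_increment": "4",
  "connection.pool.timeout": "600"
}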
Table 3.7. JDBC connector runtime properties Property Default Description database.time_zone UTC Specifies the timezone used when inserting JDBC temporal values. delete.enabled false Specifies whether the connector processes DELETE or tombstone events and removes the corresponding row from the database. Use of this option requires that you set the primary.key.mode to record_key . truncate.enabled false Specifies whether the connector processes TRUNCATE events and truncates the corresponding tables from the database. Note Although support for TRUNCATE statements has been available in Db2 since version 9.7 , currently, the JDBC connector is unable to process standard TRUNCATE events that the Db2 connector emits. To ensure that the JDBC connector can process TRUNCATE events received from Db2, perform the truncation by using an alternative to the standard TRUNCATE TABLE statement. For example: ALTER TABLE <table_name> ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE The user account that submits the preceding query requires ALTER privileges on the table to be truncated. insert.mode insert Specifies the strategy used to insert events into the database. The following options are available: insert Specifies that all events should construct INSERT -based SQL statements. Use this option only when no primary key is used, or when you can be certain that no updates can occur to rows with existing primary key values. update Specifies that all events should construct UPDATE -based SQL statements. Use this option only when you can be certain that the connector receives only events that apply to existing rows. upsert Specifies that the connector adds events to the table using upsert semantics. That is, if the primary key does not exist, the connector performs an INSERT operation, and if the key does exist, the connector performs an UPDATE operation. When idempotent writes are required, the connector should be configured to use this option. primary.key.mode none Specifies how the connector resolves the primary key columns from the event. none Specifies that no primary key columns are created. kafka Specifies that the connector uses Kafka coordinates as the primary key columns. The key coordinates are defined from the topic name, partition, and offset of the event, and are mapped to columns with the following names: __connect_topic __connect_partition __connect_offset record_key Specifies that the primary key columns are sourced from the event's record key. If the record key is a primitive type, the primary.key.fields property is required to specify the name of the primary key column. If the record key is a struct type, the primary.key.fields property is optional, and can be used to specify a subset of columns from the event's key as the table's primary key. record_value Specifies that the primary key columns are sourced from the event's value. You can set the primary.key.fields property to define the primary key as a subset of fields from the event's value; otherwise all fields are used by default. primary.key.fields No default Either the name of the primary key column or a comma-separated list of fields to derive the primary key from. When primary.key.mode is set to record_key and the event's key is a primitive type, it is expected that this property specifies the column name to be used for the key. When the primary.key.mode is set to record_key with a non-primitive key, or record_value , it is expected that this property specifies a comma-separated list of field names from either the key or value.
If the primary.key.mode is set to record_key with a non-primitive key, or record_value , and this property is not specified, the connector derives the primary key from all fields of either the record key or record value, depending on the specified mode. quote.identifiers false Specifies whether generated SQL statements use quotation marks to delimit table and column names. See the Section 3.1.1.9, "Specifying options to define the letter case of destination table and column names" section for more details. schema.evolution none Specifies how the connector evolves the destination table schemas. For more information, see Section 3.1.1.8, "Schema evolution modes for the Debezium JDBC connector" . The following options are available: none Specifies that the connector does not evolve the destination schema. basic Specifies that basic evolution occurs. The connector adds missing columns to the table by comparing the incoming event's record schema to the database table structure. table.name.format USD{topic} Specifies a string pattern that the connector uses to construct the names of destination tables. When the property is set to its default value, USD{topic} , after the connector reads an event from Kafka, it writes the event record to a destination table with a name that matches the name of the source topic. You can also configure this property to extract values from specific fields in incoming event records and then use those values to dynamically generate the names of target tables. This ability to generate table names from values in the message source would otherwise require the use of a custom Kafka Connect single message transformation (SMT). To configure the property to dynamically generate the names of destination tables, set its value to a pattern such as USD{source._field_} . When you specify this type of pattern, the connector extracts values from the source block of the Debezium change event, and then uses those values to construct the table name. For example, you might set the value of the property to the pattern USD{source.schema}_USD{source.table} . Based on this pattern, if the connector reads an event in which the schema field in the source block contains the value, user , and the table field contains the value, tab , the connector writes the event record to a table with the name user_tab . dialect.postgres.postgis.schema public Specifies the schema name where the PostgreSQL PostGIS extension is installed. The default is public ; however, if the PostGIS extension was installed in another schema, this property should be used to specify the alternate schema name. dialect.sqlserver.identity.insert false Specifies whether the connector automatically sets an IDENTITY_INSERT before an INSERT or UPSERT operation into the identity column of SQL Server tables, and then unsets it immediately after the operation. When the default setting ( false ) is in effect, an INSERT or UPSERT operation into the IDENTITY column of a table results in a SQL exception. batch.size 500 Specifies how many records to attempt to batch together into the destination table. Note Note that if you set consumer.max.poll.records in the Connect worker properties to a value lower than batch.size , batch processing will be caped by consumer.max.poll.records and the desired batch.size won't be reached. You can also configure the connector's underlying consumer's max.poll.records using consumer.override.max.poll.records in the connector configuration. 
use.reduction.buffer false Specifies whether to enable the Debezium JDBC connector's reduction buffer. Choose one of the following settings: false (default) The connector writes each change event that it consumes from Kafka as a separate logical SQL change. true The connector uses the reduction buffer to reduce change events before it writes them to the sink database. That is, if multiple events refer to the same primary key, the connector consolidates the SQL queries and writes only a single logical SQL change, based on the row state that is reported in the most recent offset record. Choose this option to reduce the SQL load on the target database. To optimize query processing in a PostgreSQL sink database when the reduction buffer is enabled, you must also enable the database to execute the batched queries by adding the reWriteBatchedInserts parameter to the JDBC connection URL. field.include.list empty string An optional, comma-separated list of field names that match the fully-qualified names of fields to include from the change event value. Fully-qualified names for fields are of the form fieldName or topicName :_fieldName_ . If you include this property in the configuration, do not set the field.exclude.list property. field.exclude.list empty string An optional, comma-separated list of field names that match the fully-qualified names of fields to exclude from the change event value. Fully-qualified names for fields are of the form fieldName or topicName :_fieldName_ . If you include this property in the configuration, do not set the field.include.list property. Table 3.8. JDBC connector extendable properties Property Default Description column.naming.strategy i.d.c.j.n.DefaultColumnNamingStrategy Specifies the fully-qualified class name of a ColumnNamingStrategy implementation that the connector uses to resolve column names from event field names. By default, the connector uses the field name as the column name. table.naming.strategy i.d.c.j.n.DefaultTableNamingStrategy Specifies the fully-qualified class name of a TableNamingStrategy implementation that the connector uses to resolve table names from incoming event topic names. The default behavior is to: Replace the USD{topic} placeholder in the table.name.format configuration property with the event's topic. Sanitize the table name by replacing dots ( . ) with underscores ( _ ). JDBC connector hibernate.* passthrough properties Kafka Connect supports passthrough configuration, enabling you to modify the behavior of an underlying system by passing certain properties directly from the connector configuration. By default, some Hibernate properties are exposed via the JDBC connector connection properties (for example, connection.url , connection.username , and connection.pool.*_size ), and through the connector's runtime properties (for example, database.time_zone , quote.identifiers ). If you want to customize other Hibernate behavior, you can take advantage of the passthrough mechanism by adding properties that use the hibernate.* namespace to the connector configuration. For example, to assist Hibernate in resolving the type and version of the target database, you can add the hibernate.dialect property and set it to the fully qualified class name of the database, for example, org.hibernate.dialect.MariaDBDialect . 3.1.5. JDBC connector frequently asked questions Is the ExtractNewRecordState single message transformation required? 
No, that is actually one of the differentiating factors of the Debezium JDBC connector from other implementations. While the connector is capable of ingesting flattened events like its competitors, it can also ingest Debezium's complex change event structure natively, without requiring any specific type of transformation. If a column's type is changed, or if a column is renamed or dropped, is this handled by schema evolution? No, the Debezium JDBC connector does not make any changes to existing columns. The schema evolution supported by the connector is quite basic. It simply compares the fields in the event structure to the table's column list, and then adds any fields that are not yet defined as columns in the table. If a column's type or default value changes, the connector does not adjust the existing column in the destination database. If a column is renamed, the old column is left as-is, and the connector appends a column with the new name to the table; however, existing rows with data in the old column remain unchanged. These types of schema changes should be handled manually. If a column's type does not resolve to the type that I want, how can I enforce mapping to a different data type? The Debezium JDBC connector uses a sophisticated type system to resolve a column's data type. For details about how this type system resolves a specific field's schema definition to a JDBC type, see the Section 3.1.1.4, "Description of Debezium JDBC connector data and column type mappings" section. If you want to apply a different data type mapping, define the table manually to explicitly obtain the preferred column type. How do you specify a prefix or a suffix to the table name without changing the Kafka topic name? To add a prefix or a suffix to the destination table name, adjust the table.name.format connector configuration property to apply the prefix or suffix that you want. For example, to prefix all table names with jdbc_ , specify the table.name.format configuration property with a value of jdbc_USD{topic} . If the connector is subscribed to a topic called orders , the resulting table is created as jdbc_orders . Why are some columns automatically quoted, even though identifier quoting is not enabled? In some situations, specific column or table names might be explicitly quoted, even when quote.identifiers is not enabled. This is often necessary when the column or table name begins with a character, or follows a naming convention, that would otherwise be considered illegal syntax. For example, when the primary.key.mode is set to kafka , some databases only permit column names to begin with an underscore if the column's name is quoted. Quoting behavior is dialect-specific, and varies among different types of database. 
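As a minimal sketch of the prefixing approach described in the preceding answer, the following example registers a JDBC sink connector whose destination tables carry the jdbc_ prefix by posting the configuration to the Kafka Connect REST API. The Connect endpoint (localhost:8083), the connector name, and the PostgreSQL connection details are illustrative assumptions, not values mandated by this guide; adjust them for your environment.

# Register a Debezium JDBC sink connector that writes the "orders" topic
# to a destination table named "jdbc_orders". Single quotes keep the shell
# from expanding ${topic} before the value reaches Kafka Connect.
curl -i -X POST -H "Content-Type: application/json" \
  http://localhost:8083/connectors \
  -d '{
    "name": "jdbc-sink-prefixed",
    "config": {
      "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
      "tasks.max": "1",
      "connection.url": "jdbc:postgresql://localhost/db",
      "connection.username": "pguser",
      "connection.password": "pgpassword",
      "insert.mode": "upsert",
      "primary.key.mode": "record_key",
      "schema.evolution": "basic",
      "table.name.format": "jdbc_${topic}",
      "topics": "orders"
    }
  }'

With this configuration, the table name pattern is resolved per topic, so subscribing the same connector to additional topics produces additional jdbc_-prefixed tables without any further changes.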
"{ \"hibernate.c3p0.idle_test_period\": \"300\" }",
"{ \"schema\": { \"type\": \"INT64\" } }",
"[ \"schema\": { \"type\": \"INT64\", \"name\": \"org.apache.kafka.connect.data.Date\" } ]",
"{ \"schema\": { \"type\": \"INT8\", \"parameters\": { \"__debezium.source.column.type\": \"TINYINT\", \"__debezium.source.column.length\": \"1\" } } }",
"{ \"name\": \"jdbc-connector\", 1 \"config\": { \"connector.class\": \"io.debezium.connector.jdbc.JdbcSinkConnector\", 2 \"tasks.max\": \"1\", 3 \"connection.url\": \"jdbc:postgresql://localhost/db\", 4 \"connection.username\": \"pguser\", 5 \"connection.password\": \"pgpassword\", 6 \"insert.mode\": \"upsert\", 7 \"delete.enabled\": \"true\", 8 \"primary.key.mode\": \"record_key\", 9 \"schema.evolution\": \"basic\", 10 \"database.time_zone\": \"UTC\", 11 \"topics\": \"orders\" 12 } }"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/debezium_user_guide/debezium-sink-connectors |
Chapter 6. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE | Chapter 6. Installing a cluster in an LPAR on IBM Z and IBM LinuxONE In OpenShift Container Platform version 4.16, you can install a cluster in a logical partition (LPAR) on IBM Z(R) or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z(R), all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 6.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 6.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. 
Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can use either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 6.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform. Additional resources Optimizing storage 6.3.3. Minimum IBM Z system environment You can install OpenShift Container Platform version 4.16 on the following IBM(R) hardware: IBM(R) z16 (all models), IBM(R) z15 (all models), IBM(R) z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II Important When running OpenShift Container Platform on IBM Z(R) without a hypervisor, use the Dynamic Partition Manager (DPM) to manage your machine. Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z(R). However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements Five logical partitions (LPARs) Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One machine for the temporary OpenShift Container Platform bootstrap machine IBM Z network connectivity requirements To install on IBM Z(R) in an LPAR, you need: A direct-attached OSA or RoCE network adapter For a preferred setup, use OSA link aggregation. Disk storage FICON attached disk storage (DASDs). 
These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine Additional resources Processors Resource/Systems Manager Planning Guide in IBM(R) Documentation for PR/SM mode considerations. IBM Dynamic Partition Manager (DPM) Guide in IBM(R) Documentation for DPM mode considerations. Topics in LPAR performance for LPAR weight management and entitlements. Recommended host practices for IBM Z(R) & IBM(R) LinuxONE environments 6.3.4. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. HiperSockets that are attached to a node directly as a device. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a RHEL 8 guest to bridge to the HiperSockets network. Operating system requirements Three LPARs for OpenShift Container Platform control plane machines. At least six LPARs for OpenShift Container Platform compute machines. One machine or LPAR for the temporary OpenShift Container Platform bootstrap machine. IBM Z network connectivity requirements To install on IBM Z(R) in an LPAR, you need: A direct-attached OSA or RoCE network adapter For a preferred setup, use OSA link aggregation. Disk storage FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for Red Hat Enterprise Linux CoreOS (RHCOS) installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. FCP attached disk storage Storage / Main Memory 16 GB for OpenShift Container Platform control plane machines 8 GB for OpenShift Container Platform compute machines 16 GB for the temporary OpenShift Container Platform bootstrap machine 6.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 6.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an HTTP or HTTPS server to establish a network connection to download their Ignition config files. The machines are configured with static IP addresses. No DHCP server is required. 
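As a minimal sketch of the HTTP server requirement described above, any web server that the cluster nodes can reach during the initial boot can serve the Ignition config files. The directory, port, and file names in this example are illustrative assumptions; use whichever server and layout fits your environment.

# Copy the generated Ignition config files to a directory that is served
# over HTTP (the file names shown are the ones the installation program
# typically generates).
mkdir -p /var/www/html/ignition
cp bootstrap.ign master.ign worker.ign /var/www/html/ignition/

# Serve the directory on port 8080 by using the Python built-in HTTP server.
python3 -m http.server 8080 --directory /var/www/html/ignition &

# From a host on the cluster network, verify that a node would be able to
# fetch its Ignition config file during the initial boot.
curl -sI http://<http_server_ip>:8080/bootstrap.ign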
Ensure that the machines have persistent IP addresses and hostnames. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 6.3.6.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 6.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 6.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 6.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . Additional resources Configuring chrony time service 6.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. 
In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 6.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 6.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 6.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. 
IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 6.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 6.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. 
The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 6.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 6.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 6.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. 
The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 6.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 6.4. 
Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, preparing a web server for the Ignition files, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure Set up static IP addresses. Set up an HTTP or HTTPS server to provide Ignition files to the cluster nodes. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See the Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Set up the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 6.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. 
Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. 
Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 6.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.7. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z(R) 6.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. 
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 
14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. 
In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 6.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. 
Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 6.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 6.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. 
The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 6.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 6.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 6.17. 
gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 6.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 6.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 6.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. 
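Note If you want to customize any of the OVN-Kubernetes fields that are described in the preceding Cluster Network Operator tables, you can add your own manifest after the create manifests step in the following procedure. The following is a minimal sketch only, not a required step; the field values and the conventional file name <installation_directory>/manifests/cluster-network-03-config.yml are illustrative assumptions:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 1400
      genevePort: 6081
The installation program consumes any manifests in that directory when it generates the Ignition config files later in the procedure.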
Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 6.12. Configuring NBDE with static IP in an IBM Z or IBM LinuxONE environment Enabling NBDE disk encryption in an IBM Z(R) or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes. The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.16.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 For installations on DASD-type disks, replace with device: /dev/disk/by-label/root . 3 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
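Note The Butane files are not consumed directly by the installation program; they are transpiled into MachineConfig manifests with the butane utility, as described in "Creating machine configs with Butane" in the additional resources. A minimal sketch, assuming the master-storage.bu file from the preceding example; the output file name is illustrative:
butane master-storage.bu -o 99-master-storage.yaml
You can then include the generated MachineConfig manifest in your <installation_directory>/openshift directory before you create the Ignition config files, or apply it to the cluster after installation. Repeat the command for the corresponding compute node Butane file.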
Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append \ ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none \ --dest-karg-append nameserver=<nameserver_ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 ignition.firstboot ignition.platform.id=metal \ coreos.inst.ignition_url=http://<http_server>/master.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \ 4 zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use master.ign or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 For installations on DASD-type disks, replace with rd.dasd=0.0.xxxx to specify the DASD device. Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 6.13. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) in an LPAR. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS guest machines have rebooted. Complete the following steps to create the machines. Prerequisites An HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. If you want to enable secure boot, you have obtained the appropriate Red Hat Product Signing Key and read Secure boot on IBM Z and IBM LinuxONE in IBM documentation. Procedure Log in to Linux on your provisioning machine. Obtain the Red Hat Enterprise Linux CoreOS (RHCOS) kernel, initramfs, and rootfs files from the RHCOS image mirror . Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel, initramfs, and rootfs artifacts described in the following procedure. The file names contain the OpenShift Container Platform version number. 
They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Note The rootfs image is the same for FCP and DASD. Create parameter files. The following parameters are specific for a particular virtual machine: For ip= , specify the following seven entries: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. If you use static IP addresses, specify none . For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. Optional: To enable secure boot, add coreos.inst.secure_ipl For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. Leave all other parameters unchanged. Example parameter file, bootstrap-0.parm , for the bootstrap machine: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/<block_device> \ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ 3 coreos.inst.secure_ipl \ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 1 Specify the block device type. For installations on DASD-type disks, specify /dev/dasda . For installations on FCP-type disks, specify /dev/sda . 2 Specify the location of the Ignition config file. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. 3 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. 4 Optional: To enable secure boot, add coreos.inst.secure_ipl . Write all options in the parameter file as a single line and make sure you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. Leave all other parameters unchanged. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . 
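If you are unsure which /dev/disk/by-id/scsi-<serial_number> path to use for coreos.inst.install_dev= , you can list the stable identifiers that udev creates for the attached FCP LUNs. This is a minimal sketch, run from a live RHCOS environment or another Linux system that can see the LUNs; the exact link names depend on your hardware:
ls -l /dev/disk/by-id/ | grep scsi-
Each symbolic link points to a kernel device name, such as ../../sda , so you can match the serial number to the intended installation disk.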
The following is an example parameter file worker-1.parm for a compute node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to the LPAR, for example with FTP. For details about how to transfer the files with FTP and boot, see Booting the installation on IBM Z(R) to install RHEL in an LPAR . Boot the machine Repeat this procedure for the other machines in the cluster. 6.13.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 6.13.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. 
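For reference, the following single line combines the three argument types in the documented order, ip= , then nameserver= , and then bond= . The bond= syntax is explained under "Bonding multiple network interfaces to a single interface" later in this section; all interface names and addresses are illustrative:
ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none nameserver=4.4.4.41 bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1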
Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . 
For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Always set the fail_over_mac=1 option in active-backup mode, to avoid problems when shared OSA/RoCE cards are used. Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter and to use DHCP, for example: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 6.14. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS, and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 6.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file.
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.16. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.17. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Configure the Operators that are not available. 6.17.1. 
Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 6.17.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 6.17.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 6.18. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration.
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Verification If you have enabled secure boot during the OpenShift Container Platform bootstrap process, the following verification steps are required: Debug the node by running the following command: USD oc debug node/<node_name> chroot /host Confirm that secure boot is enabled by running the following command: USD cat /sys/firmware/ipl/secure Example output 1 1 1 The value is 1 if secure boot is enabled and 0 if secure boot is not enabled. List the re-IPL configuration by running the following command: # lsreipl Example output for an FCP disk Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: "" Bootparms: "" clear: 0 Example output for a DASD disk Re-IPL type: ccw Device: 0.0.525d Loadparm: "" clear: 0 Shut down the node by running the following command: sudo shutdown -h Initiate a boot from LPAR from the Hardware Management Console (HMC). See Initiating a secure boot from an LPAR in IBM documentation. When the node is back, check the secure boot status again. 6.19. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service . How to generate SOSREPORT within OpenShift4 nodes without SSH . 6.20. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.16.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root 2 label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 3",
"coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial_number> --dest-karg-append ip=<ip_address>::<gateway_ip>:<subnet_mask>::<network_device>:none --dest-karg-append nameserver=<nameserver_ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<node_name>-initramfs.s390x.img",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 ignition.firstboot ignition.platform.id=metal coreos.inst.ignition_url=http://<http_server>/master.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 \\ 4 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/<block_device> \\ 1 coreos.inst.ignition_url=http://<http_server>/bootstrap.ign \\ 2 coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \\ 3 coreos.inst.secure_ipl \\ 4 ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/disk/by-id/scsi-<serial_number> coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 zfcp.allow_lun_scan=0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.29.4 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.16 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.16.0 True False False 19m baremetal 4.16.0 True False False 37m cloud-credential 4.16.0 True False False 40m cluster-autoscaler 4.16.0 True False False 37m config-operator 4.16.0 True False False 38m console 4.16.0 True False False 26m csi-snapshot-controller 4.16.0 True False False 37m dns 4.16.0 True False False 37m etcd 4.16.0 True False False 36m image-registry 4.16.0 True False False 31m ingress 4.16.0 True False False 30m insights 4.16.0 True False False 31m kube-apiserver 4.16.0 True False False 26m kube-controller-manager 4.16.0 True False False 36m kube-scheduler 4.16.0 True False False 36m kube-storage-version-migrator 4.16.0 True False False 37m machine-api 4.16.0 True False False 29m machine-approver 4.16.0 True False False 37m machine-config 4.16.0 True False False 36m marketplace 4.16.0 True False False 37m monitoring 4.16.0 True False False 29m network 4.16.0 True False False 38m node-tuning 4.16.0 True False False 37m openshift-apiserver 4.16.0 True False False 32m openshift-controller-manager 4.16.0 True False False 30m openshift-samples 4.16.0 True False False 32m operator-lifecycle-manager 4.16.0 True False False 37m operator-lifecycle-manager-catalog 4.16.0 True False False 37m operator-lifecycle-manager-packageserver 4.16.0 True False False 32m service-ca 4.16.0 True False False 38m storage 4.16.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"oc debug node/<node_name> chroot /host",
"cat /sys/firmware/ipl/secure",
"1 1",
"lsreipl",
"Re-IPL type: fcp WWPN: 0x500507630400d1e3 LUN: 0x4001400e00000000 Device: 0.0.810e bootprog: 0 br_lba: 0 Loadparm: \"\" Bootparms: \"\" clear: 0",
"for DASD output: Re-IPL type: ccw Device: 0.0.525d Loadparm: \"\" clear: 0",
"sudo shutdown -h"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z-lpar |
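The DNS checks shown above can be run in one pass before starting the installation; the sketch below is illustrative only, and the nameserver address, cluster name, base domain, and the IPs taken from the example records must be replaced with your own values.

#!/usr/bin/env bash
# Illustrative values; substitute your own nameserver, cluster name, base domain, and IPs.
NS=192.168.1.1
CLUSTER=ocp4
DOMAIN=example.com

# Forward lookups: API, internal API, a wildcard apps record, the console route, and the bootstrap host.
for host in api api-int random.apps console-openshift-console.apps bootstrap; do
  echo "== ${host}.${CLUSTER}.${DOMAIN}"
  dig +noall +answer @"${NS}" "${host}.${CLUSTER}.${DOMAIN}"
done

# Reverse lookups for the API load balancer and the bootstrap machine.
for ip in 192.168.1.5 192.168.1.96; do
  echo "== ${ip}"
  dig +noall +answer @"${NS}" -x "${ip}"
done

Each query should return a record like the example answers above; an empty answer section means the record is missing or the wrong nameserver is being queried.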
Chapter 10. Replicating audit data in a JMS message broker | Chapter 10. Replicating audit data in a JMS message broker You can replicate KIE Server audit data to a Java Message Service (JMS) message broker, for example ActiveMQ or Artemis, and then dump the data in an external database schema so that you can improve the performance of your Spring Boot application by deleting the audit data from your application schema. If you configure your application to replicate data in a message broker, when an event occurs in KIE Server the record of that event is stored in the KIE Server database schema and it is sent to the message broker. You can then configure an external service to consume the message broker data into an exact replica of the application's database schema. The data is appended in the message broker and the external database every time an event is produced by KIE Server. Note Only audit data is stored in the message broker. No other data is replicated. Prerequisites You have an existing Red Hat Decision Manager Spring Boot project. Procedure Open the Spring Boot application's pom.xml file in a text editor. Add the KIE Server Spring Boot audit dependency to the pom.xml file: <dependency> <groupId>org.kie</groupId> <artifactId>kie-server-spring-boot-autoconfiguration-audit-replication</artifactId> <version>USD{version.org.kie}</version> </dependency> Add the dependency for your JMS client. The following example adds the Advanced Message Queuing Protocol (AMQP) dependency: <dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>2.2.6</version> </dependency> Add the JMS pool dependency: <dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> </dependency> To configure KIE Server audit replication to use queues, complete the following tasks: Add the following lines to your Spring Boot application's application.properties file: Add the properties required for your message broker client. The following example shows how to configure KIE Server for AMQP, where <JMS_HOST_PORT> is the port that the broker listens on and <USERNAME> and <PASSWORD> are the login credentials for the broker: Add the following lines to the application.properties file of the service that will consume the message broker data: Add the properties required for your message broker client to the application.properties file of the service that will consume the message broker data. The following example shows how to configure KIE Server for AMQP, where <JMS_HOST_PORT> is the port that your message broker listens on and <USERNAME> and <PASSWORD> are the login credentials for the message broker: To configure KIE Server audit replication to use topics, complete the following tasks: Add the following lines to your Spring Boot application's application.properties file: Add the properties required for your message broker client to your Spring Boot application's application.properties file. The following example shows how to configure KIE Server for AMQP, where <JMS_HOST_PORT> is the port that your message broker listens on and <USERNAME> and <PASSWORD> are the login credentials for the message broker: Add the following lines to the application.properties file of the service that will consume the message broker data: Add the properties required for your message broker client to the application.properties file of the service that will consume the message broker data.
The following example shows how to configure KIE Server for AMQP, where <JMS_HOST_PORT> is the port that your message broker listens on and <USERNAME> and <PASSWORD> are the login credentials for the message broker: Optional: To configure the KIE Server that contains the replicated data to be read only, set the org.kie.server.rest.mode.readonly property in the application.properties file to true : Additional resources Section 10.1, "Spring Boot JMS audit replication parameters" 10.1. Spring Boot JMS audit replication parameters The following table describes the parameters used to configure JMS audit replication for Red Hat Decision Manager applications on Spring Boot. Table 10.1. Spring Boot JMS audit replication parameters Parameter Values Description kieserver.audit-replication.producer true, false Specifies whether the business application will act as a producer to replicate and send the JMS messages to either a queue or a topic. kieserver.audit-replication.consumer true, false Specifies whether the business application will act as a consumer to receive the JMS messages from either a queue or a topic. kieserver.audit-replication.queue string The name of the JMS queue to either send or consume messages. kieserver.audit-replication.topic string The name of the JMS topic to either send or consume messages. kieserver.audit-replication.topic.subscriber string The name of the topic subscriber. org.kie.server.rest.mode.readonly true, false Specifies read only mode for the business application. | [
"<dependency> <groupId>org.kie</groupId> <artifactId>kie-server-spring-boot-autoconfiguration-audit-replication</artifactId> <version>USD{version.org.kie}</version> </dependency>",
"<dependency> <groupId>org.amqphub.spring</groupId> <artifactId>amqp-10-jms-spring-boot-starter</artifactId> <version>2.2.6</version> </dependency>",
"<dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> </dependency>",
"kieserver.audit-replication.producer=true kieserver.audit-replication.queue=audit-queue",
"amqphub.amqp10jms.remote-url=amqp://<JMS_HOST_PORT> amqphub.amqp10jms.username=<USERNAME> amqphub.amqp10jms.password=<PASSWORD> amqphub.amqp10jms.pool.enabled=true",
"kieserver.audit-replication.consumer=true kieserver.audit-replication.queue=audit-queue",
"amqphub.amqp10jms.remote-url=amqp://<JMS_HOST_PORT> amqphub.amqp10jms.username=<USERNAME> amqphub.amqp10jms.password=<PASSWORD> amqphub.amqp10jms.pool.enabled=true",
"kieserver.audit-replication.producer=true kieserver.audit-replication.topic=audit-topic",
"spring.jms.pub-sub-domain=true amqphub.amqp10jms.remote-url=amqp://<JMS_HOST_PORT> amqphub.amqp10jms.username=<USERNAME> amqphub.amqp10jms.password=<PASSWORD> amqphub.amqp10jms.pool.enabled=true",
"kieserver.audit-replication.consumer=true kieserver.audit-replication.topic=audit-topic::jbpm kieserver.audit-replication.topic.subscriber=jbpm spring.jms.pub-sub-domain=true",
"amqphub.amqp10jms.remote-url=amqp://<JMS_HOST_PORT> amqphub.amqp10jms.username=<USERNAME> amqphub.amqp10jms.password=<PASSWORD> amqphub.amqp10jms.pool.enabled=true amqphub.amqp10jms.clientId=jbpm",
"org.kie.server.rest.mode.readonly=true"
]
| https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/integrating_red_hat_decision_manager_with_other_products_and_components/spring-boot-jms-audit-proc_business-applications |
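As a convenience, the queue-based producer settings above can sit together in the producer application's application.properties; the broker host, port, and credentials below are placeholders, not values mandated by the product.

# Producer side: replicate KIE Server audit events to a JMS queue
kieserver.audit-replication.producer=true
kieserver.audit-replication.queue=audit-queue

# AMQP 1.0 client settings (placeholder host, port, and credentials)
amqphub.amqp10jms.remote-url=amqp://broker.example.com:5672
amqphub.amqp10jms.username=kieuser
amqphub.amqp10jms.password=kiepassword
amqphub.amqp10jms.pool.enabled=true

The consuming service mirrors this file, replacing the producer property with kieserver.audit-replication.consumer=true as shown in the commands above.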
Chapter 8. Using OPA policy-based authorization | Chapter 8. Using OPA policy-based authorization Open Policy Agent (OPA) is an open-source policy engine. You can integrate OPA with Streams for Apache Kafka to act as a policy-based authorization mechanism for permitting client operations on Kafka brokers. When a request is made from a client, OPA will evaluate the request against policies defined for Kafka access, then allow or deny the request. Note Red Hat does not support the OPA server. Additional resources Open Policy Agent website 8.1. Defining OPA policies Before integrating OPA with Streams for Apache Kafka, consider how you will define policies to provide fine-grained access controls. You can define access control for Kafka clusters, consumer groups and topics. For instance, you can define an authorization policy that allows write access from a producer client to a specific broker topic. For this, the policy might specify the: User principal and host address associated with the producer client Operations allowed for the client Resource type ( topic ) and resource name the policy applies to Allow and deny decisions are written into the policy, and a response is provided based on the request and client identification data provided. In our example the producer client would have to satisfy the policy to be allowed to write to the topic. 8.2. Connecting to the OPA To enable Kafka to access the OPA policy engine to query access control policies, you configure a custom OPA authorizer plugin ( kafka-authorizer-opa- VERSION .jar ) in your Kafka server.properties file. When a request is made by a client, the OPA policy engine is queried by the plugin using a specified URL address and a REST endpoint, which must be the name of the defined policy. The plugin provides the details of the client request - user principal, operation, and resource - in JSON format to be checked against the policy. The details will include the unique identity of the client; for example, taking the distinguished name from the client certificate if TLS authentication is used. OPA uses the data to provide a response - either true or false - to the plugin to allow or deny the request. 8.3. Configuring OPA authorization support This procedure describes how to configure Kafka brokers to use OPA authorization. Before you begin Consider the access you require or want to limit for certain users. You can use a combination of users and Kafka resources to define OPA policies. It is possible to set up OPA to load user information from an LDAP data source. Note Super users always have unconstrained access to a Kafka broker regardless of the authorization implemented on the Kafka broker. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. An OPA server must be available for connection. The OPA authorizer plugin for Kafka . Procedure Write the OPA policies required for authorizing client requests to perform operations on the Kafka brokers. See Defining OPA policies . Now configure the Kafka brokers to use OPA. Install the OPA authorizer plugin for Kafka . See Connecting to the OPA . Make sure that the plugin files are included in the Kafka classpath. Add the following to the Kafka server.properties configuration file to enable the OPA plugin: authorizer.class.name: com.bisnode.kafka.authorization.OpaAuthorizer Add further configuration to server.properties for the Kafka brokers to access the OPA policy engine and policies.
For example: opa.authorizer.url=https:// OPA-ADDRESS /allow 1 opa.authorizer.allow.on.error=false 2 opa.authorizer.cache.initial.capacity=50000 3 opa.authorizer.cache.maximum.size=50000 4 opa.authorizer.cache.expire.after.seconds=600000 5 super.users=User:alice;User:bob 6 1 (Required) The URL that the authorizer plugin uses to query the OPA policy engine, ending with the name of the defined policy. In this example, the policy is called allow . 2 Flag to specify whether a client is allowed or denied access by default if the authorizer plugin fails to connect with the OPA policy engine. 3 Initial capacity in bytes of the local cache. The cache is used so that the plugin does not have to query the OPA policy engine for every request. 4 Maximum capacity in bytes of the local cache. 5 Time in milliseconds that the local cache is refreshed by reloading from the OPA policy engine. 6 A list of user principals treated as super users, so that they are always allowed without querying the Open Policy Agent policy. Refer to the Open Policy Agent website for information on authentication and authorization options. Verify the configured permissions by accessing Kafka brokers using clients that have and do not have the correct authorization. | [
"authorizer.class.name: com.bisnode.kafka.authorization.OpaAuthorizer",
"opa.authorizer.url=https:// OPA-ADDRESS /allow 1 opa.authorizer.allow.on.error=false 2 opa.authorizer.cache.initial.capacity=50000 3 opa.authorizer.cache.maximum.size=50000 4 opa.authorizer.cache.expire.after.seconds=600000 5 super.users=User:alice;User:bob 6"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-opa-authorization_str |
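Before pointing the brokers at OPA, it can help to exercise the policy endpoint by hand. The curl sketch below assumes the endpoint configured in opa.authorizer.url; the address and the input fields are only an illustration of the principal, operation, and resource details described above, and the exact input schema depends on the plugin version.

# Hypothetical endpoint and input document; adjust to match your opa.authorizer.url and policy.
curl -s -X POST "https://opa.example.com:8181/allow" \
  -H "Content-Type: application/json" \
  -d '{
        "input": {
          "operation": { "name": "Write" },
          "resource": { "resourceType": { "name": "Topic" }, "name": "orders" },
          "session": {
            "principal": { "name": "User:CN=producer-client" },
            "clientAddress": "192.168.1.10"
          }
        }
      }'
# A response containing "result": true allows the request; "result": false (or no result) denies it.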
Part III. Access control | Part III. Access control As a 3scale application programming interface (API) provider, you control access to your API by designating methods for which to capture individual usage details, defining metrics, associating methods and metrics with mapping rules, and creating application plans that set prices and usage limits. Application plans use data collected for methods and metrics to enforce usage limits. | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/access_control |
Chapter 9. application | Chapter 9. application This chapter describes the commands under the application command. 9.1. application credential create Create new application credential Usage: Table 9.1. Positional arguments Value Summary <name> Name of the application credential Table 9.2. Command arguments Value Summary -h, --help Show this help message and exit --secret <secret> Secret to use for authentication (if not provided, one will be generated) --role <role> Roles to authorize (name or id) (repeat option to set multiple values) --expiration <expiration> Sets an expiration date for the application credential, format of YYYY-mm-ddTHH:MM:SS (if not provided, the application credential will not expire) --description <description> Application credential description --unrestricted Enable application credential to create and delete other application credentials and trusts (this is potentially dangerous behavior and is disabled by default) --restricted Prohibit application credential from creating and deleting other application credentials and trusts (this is the default behavior) --access-rules <access-rules> Either a string or file path containing a json- formatted list of access rules, each containing a request method, path, and service, for example [{"method": "GET", "path": "/v2.1/servers", "service": "compute"}] Table 9.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 9.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 9.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 9.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 9.2. application credential delete Delete application credentials(s) Usage: Table 9.7. Positional arguments Value Summary <application-credential> Application credentials(s) to delete (name or id) Table 9.8. Command arguments Value Summary -h, --help Show this help message and exit 9.3. application credential list List application credentials Usage: Table 9.9. Command arguments Value Summary -h, --help Show this help message and exit --user <user> User whose application credentials to list (name or ID) --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. Table 9.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 9.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 9.12. 
JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 9.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 9.4. application credential show Display application credential details Usage: Table 9.14. Positional arguments Value Summary <application-credential> Application credential to display (name or id) Table 9.15. Command arguments Value Summary -h, --help Show this help message and exit Table 9.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 9.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 9.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 9.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack application credential create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--secret <secret>] [--role <role>] [--expiration <expiration>] [--description <description>] [--unrestricted] [--restricted] [--access-rules <access-rules>] <name>",
"openstack application credential delete [-h] <application-credential> [<application-credential> ...]",
"openstack application credential list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>]",
"openstack application credential show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <application-credential>"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/application |
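The subcommands above combine into a simple lifecycle; the credential name, role, expiration, and description below are examples only.

# Create a credential restricted to the member role that expires at the end of 2024
# (no --secret is given, so one is generated and shown in the output).
openstack application credential create \
  --role member \
  --expiration 2024-12-31T23:59:59 \
  --description "CI pipeline credential" \
  ci-pipeline

# List the current user's credentials and inspect the new one.
openstack application credential list
openstack application credential show ci-pipeline

# Remove it when it is no longer needed.
openstack application credential delete ci-pipeline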
1.3. LVS Scheduling Overview | 1.3. LVS Scheduling Overview One of the advantages of using LVS is its ability to perform flexible, IP-level load balancing on the real server pool. This flexibility is due to the variety of scheduling algorithms an administrator can choose from when configuring LVS. LVS load balancing is superior to less flexible methods, such as Round-Robin DNS where the hierarchical nature of DNS and the caching by client machines can lead to load imbalances. Additionally, the low-level filtering employed by the LVS router has advantages over application-level request forwarding because balancing loads at the network packet level causes minimal computational overhead and allows for greater scalability. Using scheduling, the active router can take into account the real servers' activity and, optionally, an administrator-assigned weight factor when routing service requests. Using assigned weights gives arbitrary priorities to individual machines. Using this form of scheduling, it is possible to create a group of real servers using a variety of hardware and software combinations and the active router can evenly load each real server. The scheduling mechanism for LVS is provided by a collection of kernel patches called IP Virtual Server or IPVS modules. These modules enable layer 4 ( L4 ) transport layer switching, which is designed to work well with multiple servers on a single IP address. To track and route packets to the real servers efficiently, IPVS builds an IPVS table in the kernel. This table is used by the active LVS router to redirect requests from a virtual server address to and returning from real servers in the pool. The IPVS table is constantly updated by a utility called ipvsadm - adding and removing cluster members depending on their availability. 1.3.1. Scheduling Algorithms The structure that the IPVS table takes depends on the scheduling algorithm that the administrator chooses for any given virtual server. To allow for maximum flexibility in the types of services you can cluster and how these services are scheduled, Red Hat Enterprise Linux provides the following scheduling algorithms listed below. For instructions on how to assign scheduling algorithms refer to Section 4.6.1, "The VIRTUAL SERVER Subsection" . Round-Robin Scheduling Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular due to the fact that it is network-connection based and not host-based. LVS round-robin scheduling also does not suffer the imbalances caused by cached DNS queries. Weighted Round-Robin Scheduling Distributes each request sequentially around the pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted upward or downward by dynamic load information. Refer to Section 1.3.2, "Server Weight and Scheduling" for more on weighting real servers. Weighted round-robin scheduling is a preferred choice if there are significant differences in the capacity of real servers in the pool. However, if the request load varies dramatically, the more heavily weighted server may answer more than its share of requests. Least-Connection Distributes more requests to real servers with fewer active connections. 
Because it keeps track of live connections to the real servers through the IPVS table, least-connection is a type of dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each member node has roughly the same capacity. If a group of servers has different capabilities, weighted least-connection scheduling is a better choice. Weighted Least-Connections (default) Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity. Refer to Section 1.3.2, "Server Weight and Scheduling" for more on weighting real servers. Locality-Based Least-Connection Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is designed for use in a proxy-cache server cluster. It routes the packets for an IP address to the server for that address unless that server is above its capacity and another server is operating at half its load, in which case it assigns the IP address to the least loaded real server. Locality-Based Least-Connection Scheduling with Replication Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also designed for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping the target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most loaded node is then dropped from the real server subset to prevent over-replication. Destination Hash Scheduling Distributes requests to the pool of real servers by looking up the destination IP in a static hash table. This algorithm is designed for use in a proxy-cache server cluster. Source Hash Scheduling Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is designed for LVS routers with multiple firewalls. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-scheduling-VSA |
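These schedulers are normally selected through the Piranha Configuration Tool, but their effect on the IPVS table can be illustrated directly with ipvsadm; the virtual IP, real server addresses, and weights below are placeholders, and wlc is the weighted least-connections scheduler described above.

# Define a virtual HTTP service on the VIP using weighted least-connections (wlc).
ipvsadm -A -t 192.168.0.100:80 -s wlc

# Add two real servers via NAT (-m), giving the more capable machine twice the weight.
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -m -w 2
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -m -w 1

# Display the resulting IPVS table.
ipvsadm -L -n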
Chapter 21. Atomic Host and Containers | Chapter 21. Atomic Host and Containers Red Hat Enterprise Linux Atomic Host Red Hat Enterprise Linux Atomic Host is a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/atomic_host_and_containers |
Chapter 379. XStream DataFormat | Chapter 379. XStream DataFormat Available as of Camel version 1.3 XStream is a Data Format which uses the XStream library to marshal and unmarshal Java objects to and from XML. To use XStream in your camel routes you need to add the a dependency on camel-xstream which implements this data format. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xstream</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 379.1. Options The XStream dataformat supports 10 options, which are listed below. Name Default Java Type Description permissions String Adds permissions that controls which Java packages and classes XStream is allowed to use during unmarshal from xml/json to Java beans. A permission must be configured either here or globally using a JVM system property. The permission can be specified in a syntax where a plus sign is allow, and minus sign is deny. Wildcards is supported by using . as prefix. For example to allow com.foo and all subpackages then specfy com.foo.. Multiple permissions can be configured separated by comma, such as com.foo.,-com.foo.bar.MySecretBean. The following default permission is always included: -,java.lang.,java.util. unless its overridden by specifying a JVM system property with they key org.apache.camel.xstream.permissions. encoding String Sets the encoding to use driver String To use a custom XStream driver. The instance must be of type com.thoughtworks.xstream.io.HierarchicalStreamDriver driverRef String To refer to a custom XStream driver to lookup in the registry. The instance must be of type com.thoughtworks.xstream.io.HierarchicalStreamDriver mode String Mode for dealing with duplicate references The possible values are: NO_REFERENCES ID_REFERENCES XPATH_RELATIVE_REFERENCES XPATH_ABSOLUTE_REFERENCES SINGLE_NODE_XPATH_RELATIVE_REFERENCES SINGLE_NODE_XPATH_ABSOLUTE_REFERENCES converters List List of class names for using custom XStream converters. The classes must be of type com.thoughtworks.xstream.converters.Converter aliases Map Alias a Class to a shorter name to be used in XML elements. omitFields Map Prevents a field from being serialized. To omit a field you must always provide the declaring type and not necessarily the type that is converted. implicitCollections Map Adds a default implicit collection which is used for any unmapped XML tag. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 379.2. Spring Boot Auto-Configuration The component supports 31 options, which are listed below. Name Description Default Type camel.dataformat.json-xstream.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.json-xstream.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.json-xstream.collection-type-name Refers to a custom collection type to lookup in the registry to use. 
This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.json-xstream.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.json-xstream.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma String camel.dataformat.json-xstream.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma String camel.dataformat.json-xstream.enable-jaxb-annotation-module Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. false Boolean camel.dataformat.json-xstream.enabled Enable json-xstream dataformat true Boolean camel.dataformat.json-xstream.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL String camel.dataformat.json-xstream.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations Class camel.dataformat.json-xstream.library Which json library to use. JsonLibrary camel.dataformat.json-xstream.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.json-xstream.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.json-xstream.object-mapper Lookup and use the existing ObjectMapper with the given id when using Jackson. String camel.dataformat.json-xstream.permissions Adds permissions that controls which Java packages and classes XStream is allowed to use during unmarshal from xml/json to Java beans. A permission must be configured either here or globally using a JVM system property. The permission can be specified in a syntax where a plus sign is allow, and minus sign is deny. Wildcards is supported by using . as prefix. For example to allow com.foo and all subpackages then specfy com.foo.. Multiple permissions can be configured separated by comma, such as com.foo.,-com.foo.bar.MySecretBean. The following default permission is always included: -,java.lang.,java.util. unless its overridden by specifying a JVM system property with they key org.apache.camel.xstream.permissions. String camel.dataformat.json-xstream.pretty-print To enable pretty printing output nicely formatted. Is by default false. 
false Boolean camel.dataformat.json-xstream.timezone If set then Jackson will use the Timezone when marshalling/unmarshalling. This option will have no effect on the others Json DataFormat, like gson, fastjson and xstream. String camel.dataformat.json-xstream.unmarshal-type-name Class name of the java type to use when unarmshalling String camel.dataformat.json-xstream.use-default-object-mapper Whether to lookup and use default Jackson ObjectMapper from the registry. true Boolean camel.dataformat.json-xstream.use-list To unarmshal to a List of Map or a List of Pojo. false Boolean camel.dataformat.xstream.aliases Alias a Class to a shorter name to be used in XML elements. Map camel.dataformat.xstream.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.xstream.converters List of class names for using custom XStream converters. The classes must be of type com.thoughtworks.xstream.converters.Converter List camel.dataformat.xstream.driver To use a custom XStream driver. The instance must be of type com.thoughtworks.xstream.io.HierarchicalStreamDriver String camel.dataformat.xstream.driver-ref To refer to a custom XStream driver to lookup in the registry. The instance must be of type com.thoughtworks.xstream.io.HierarchicalStreamDriver String camel.dataformat.xstream.enabled Enable xstream dataformat true Boolean camel.dataformat.xstream.encoding Sets the encoding to use String camel.dataformat.xstream.implicit-collections Adds a default implicit collection which is used for any unmapped XML tag. Map camel.dataformat.xstream.mode Mode for dealing with duplicate references The possible values are: NO_REFERENCES ID_REFERENCES XPATH_RELATIVE_REFERENCES XPATH_ABSOLUTE_REFERENCES SINGLE_NODE_XPATH_RELATIVE_REFERENCES SINGLE_NODE_XPATH_ABSOLUTE_REFERENCES String camel.dataformat.xstream.omit-fields Prevents a field from being serialized. To omit a field you must always provide the declaring type and not necessarily the type that is converted. Map camel.dataformat.xstream.permissions Adds permissions that controls which Java packages and classes XStream is allowed to use during unmarshal from xml/json to Java beans. A permission must be configured either here or globally using a JVM system property. The permission can be specified in a syntax where a plus sign is allow, and minus sign is deny. Wildcards is supported by using . as prefix. For example to allow com.foo and all subpackages then specfy com.foo.. Multiple permissions can be configured separated by comma, such as com.foo.,-com.foo.bar.MySecretBean. The following default permission is always included: -,java.lang.,java.util. unless its overridden by specifying a JVM system property with they key org.apache.camel.xstream.permissions. String ND 379.3. Using the Java DSL // lets turn Object messages into XML then send to MQSeries from("activemq:My.Queue"). marshal().xstream(). to("mqseries:Another.Queue"); If you would like to configure the XStream instance used by the Camel for the message transformation, you can simply pass a reference to that instance on the DSL level. XStream xStream = new XStream(); xStream.aliasField("money", PurchaseOrder.class, "cash"); // new Added setModel option since Camel 2.14 xStream.setModel("NO_REFERENCES"); ... from("direct:marshal"). 
marshal(new XStreamDataFormat(xStream)). to("mock:marshaled"); 379.4. XMLInputFactory and XMLOutputFactory The XStream library uses the javax.xml.stream.XMLInputFactory and javax.xml.stream.XMLOutputFactory , you can control which implementation of this factory should be used. The Factory is discovered using this algorithm: 1. Use the javax.xml.stream.XMLInputFactory , javax.xml.stream.XMLOutputFactory system property. 2. Use the lib/xml.stream.properties file in the JRE_HOME directory. 3. Use the Services API, if available, to determine the classname by looking in the META-INF/services/javax.xml.stream.XMLInputFactory , META-INF/services/javax.xml.stream.XMLOutputFactory files in jars available to the JRE. 4. Use the platform default XMLInputFactory,XMLOutputFactory instance. 379.5. How to set the XML encoding in Xstream DataFormat? From Camel 2.2.0, you can set the encoding of XML in Xstream DataFormat by setting the Exchange's property with the key Exchange.CHARSET_NAME , or setting the encoding property on Xstream from DSL or Spring config. from("activemq:My.Queue"). marshal().xstream("UTF-8"). to("mqseries:Another.Queue"); 379.6. Setting the type permissions of Xstream DataFormat In Camel, one can always use its own processing step in the route to filter and block certain XML documents to be routed to the XStream's unmarhall step. From Camel 2.16.1, 2.15.5, you can set XStream's type permissions to automatically allow or deny the instantiation of certain types. The default type permissions setting used by Camel denies all types except for those from java.lang and java.util packages. This setting can be changed by setting System property org.apache.camel.xstream.permissions. Its value is a string of comma-separated permission terms, each representing a type being allowed or denied, depending on whether the term is prefixed with '' (note '' may be omitted) or with '-', respectively. Each term may contain a wildcard character ' '. For example, value "- ,java.lang. ,java.util. " indicates denying all types except for java.lang.* and java.util.* classes. Setting this value to an empty string "" reverts to the default XStream's type permissions handling which denies certain blacklisted classes and allow others. The type permissions setting can be extended at an individual XStream DataFormat instance by setting its type permissions property. <dataFormats> <xstream id="xstream-default" permissions="org.apache.camel.samples.xstream.*"/> ... | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xstream</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"// lets turn Object messages into XML then send to MQSeries from(\"activemq:My.Queue\"). marshal().xstream(). to(\"mqseries:Another.Queue\");",
"XStream xStream = new XStream(); xStream.aliasField(\"money\", PurchaseOrder.class, \"cash\"); // new Added setModel option since Camel 2.14 xStream.setModel(\"NO_REFERENCES\"); from(\"direct:marshal\"). marshal(new XStreamDataFormat(xStream)). to(\"mock:marshaled\");",
"from(\"activemq:My.Queue\"). marshal().xstream(\"UTF-8\"). to(\"mqseries:Another.Queue\");",
"<dataFormats> <xstream id=\"xstream-default\" permissions=\"org.apache.camel.samples.xstream.*\"/>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/xstream-dataformat |
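The org.apache.camel.xstream.permissions system property described above can also be supplied when the application is started; the JAR name and the com.example.model package below are placeholders for your own application and model classes.

# Deny everything except java.lang, java.util, and the application's own model classes.
java -Dorg.apache.camel.xstream.permissions="-*,java.lang.*,java.util.*,com.example.model.*" \
     -jar my-camel-app.jar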
Index | Index Symbols /etc/sysconfig/ha/lvs.cf file, /etc/sysconfig/ha/lvs.cf A arptables_jf, Direct Routing and arptables_jf C chkconfig, Configuring Services on the LVS Routers cluster using LVS with Red Hat Cluster, Using LVS with Red Hat Cluster components of LVS, LVS Components D direct routing and arptables_jf, Direct Routing and arptables_jf F feedback, Feedback FTP, Configuring FTP (see also LVS) I introduction, Introduction other Red Hat Enterprise Linux documents, Introduction iptables, Configuring Services on the LVS Routers ipvsadm program, ipvsadm J job scheduling, LVS, LVS Scheduling Overview L least connections (see job scheduling, LVS) LVS /etc/sysconfig/ha/lvs.cf file, /etc/sysconfig/ha/lvs.cf components of, LVS Components daemon, lvs date replication, real servers, Data Replication and Data Sharing Between Real Servers direct routing and arptables_jf, Direct Routing and arptables_jf requirements, hardware, Direct Routing , LVS via Direct Routing requirements, network, Direct Routing , LVS via Direct Routing requirements, software, Direct Routing , LVS via Direct Routing initial configuration, Initial LVS Configuration ipvsadm program, ipvsadm job scheduling, LVS Scheduling Overview lvs daemon, lvs LVS routers configuring services, Initial LVS Configuration necessary services, Configuring Services on the LVS Routers primary node, Initial LVS Configuration multi-port services, Multi-port Services and LVS FTP, Configuring FTP nanny daemon, nanny NAT routing enabling, Enabling NAT Routing on the LVS Routers requirements, hardware, The NAT LVS Network requirements, network, The NAT LVS Network requirements, software, The NAT LVS Network overview of, Linux Virtual Server Overview packet forwarding, Turning on Packet Forwarding Piranha Configuration Tool, Piranha Configuration Tool pulse daemon, pulse real servers, Linux Virtual Server Overview routing methods NAT, Routing Methods routing prerequisites, Configuring Network Interfaces for LVS with NAT scheduling, job, LVS Scheduling Overview send_arp program, send_arp shared data, Data Replication and Data Sharing Between Real Servers starting LVS, Starting LVS synchronizing configuration files, Synchronizing Configuration Files three-tier Red Hat Cluster Manager, A Three-Tier LVS Configuration using LVS with Red Hat Cluster, Using LVS with Red Hat Cluster lvs daemon, lvs M multi-port services, Multi-port Services and LVS (see also LVS) N nanny daemon, nanny NAT enabling, Enabling NAT Routing on the LVS Routers routing methods, LVS, Routing Methods network address translation (see NAT) P packet forwarding, Turning on Packet Forwarding (see also LVS) Piranha Configuration Tool, Piranha Configuration Tool CONTROL/MONITORING, CONTROL/MONITORING EDIT MONITORING SCRIPTS Subsection, EDIT MONITORING SCRIPTS Subsection GLOBAL SETTINGS, GLOBAL SETTINGS limiting access to, Limiting Access To the Piranha Configuration Tool login panel, Logging Into the Piranha Configuration Tool necessary software, Necessary Software overview of, Configuring the LVS Routers with Piranha Configuration Tool REAL SERVER subsection, REAL SERVER Subsection REDUNDANCY, REDUNDANCY setting a password, Setting a Password for the Piranha Configuration Tool VIRTUAL SERVER subsection, The VIRTUAL SERVER Subsection Firewall Mark, The VIRTUAL SERVER Subsection Persistence, The VIRTUAL SERVER Subsection Scheduling, The VIRTUAL SERVER Subsection Virtual IP Address, The VIRTUAL SERVER Subsection VIRTUAL SERVERS, VIRTUAL SERVERS piranha-gui service, Configuring Services on 
the LVS Routers piranha-passwd, Setting a Password for the Piranha Configuration Tool pulse daemon, pulse pulse service, Configuring Services on the LVS Routers R real servers configuring services, Configuring Services on the Real Servers Red Hat Cluster and LVS, Using LVS with Red Hat Cluster using LVS with, Using LVS with Red Hat Cluster round robin (see job scheduling, LVS) routing prerequisites for LVS, Configuring Network Interfaces for LVS with NAT S scheduling, job (LVS), LVS Scheduling Overview security Piranha Configuration Tool, Limiting Access To the Piranha Configuration Tool send_arp program, send_arp sshd service, Configuring Services on the LVS Routers synchronizing configuration files, Synchronizing Configuration Files W weighted least connections (see job scheduling, LVS) weighted round robin (see job scheduling, LVS) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/ix01 |
26.5. Loader Parameters | 26.5. Loader Parameters The following parameters can be defined in a parameter file but do not work in a CMS configuration file. To automate the loader screens, specify the following parameters: lang= language Sets the language of the installer user interface, for example, en for English or de for German. This automates the response to Choose a Language (refer to Section 22.3, "Language Selection" ). repo= installation_source Sets the installation source to access stage 2 as well as the repository with the packages to be installed. This automates the response to Installation Method (refer to Section 22.4, "Installation Method" ). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-parmfiles-Loader_parameters |
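For example, a parameter file that automates both loader screens at once could contain the following two lines; the repository URL is a placeholder for your own installation source.

lang=en
repo=http://install.example.com/rhel6/os/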
Chapter 14. Uninstalling a cluster on Azure | Chapter 14. Uninstalling a cluster on Azure You can remove a cluster that you deployed to Microsoft Azure. 14.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 14.2. Deleting Microsoft Azure resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Microsoft Azure (Azure) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on Azure that uses short-term credentials. Procedure Delete the Azure resources that ccoctl created by running the following command: USD ccoctl azure delete \ --name=<name> \ 1 --region=<azure_region> \ 2 --subscription-id=<azure_subscription_id> \ 3 --delete-oidc-resource-group 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <azure_region> is the Azure region in which to delete cloud resources. 3 <azure_subscription_id> is the Azure subscription ID for which to delete cloud resources. Verification To verify that the resources are deleted, query Azure. For more information, refer to Azure documentation. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl azure delete --name=<name> \\ 1 --region=<azure_region> \\ 2 --subscription-id=<azure_subscription_id> \\ 3 --delete-oidc-resource-group"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_azure/uninstalling-cluster-azure |
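As a usage sketch of the two procedures above, the following sequence shows the commands with hypothetical values substituted for the placeholders; the installation directory, ccoctl name, region, and subscription ID are illustrative only and must be replaced with the values that were used for your cluster:

# Destroy the cluster whose installation files live in ./mycluster (hypothetical path)
$ ./openshift-install destroy cluster --dir ./mycluster --log-level info

# Then remove the Azure resources that ccoctl created for that cluster
$ ccoctl azure delete \
    --name=mycluster-4f2kx \
    --region=eastus \
    --subscription-id=00000000-0000-0000-0000-000000000000 \
    --delete-oidc-resource-group

Running ccoctl azure delete only after openshift-install destroy cluster completes mirrors the order of the two procedures above.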
Chapter 6. Uninstalling a cluster on Azure Stack Hub | Chapter 6. Uninstalling a cluster on Azure Stack Hub You can remove a cluster that you deployed to Azure Stack Hub. 6.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info For <installation_directory> , specify the path to the directory that you stored the installation files in. To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_azure_stack_hub/uninstalling-cluster-azure-stack-hub |
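Because the destroy operation cannot proceed without the metadata.json file, a small wrapper can fail fast when the wrong directory is supplied. The following is only a sketch with a hypothetical path, not part of the documented procedure:

# Sketch: verify the installation directory before destroying the cluster
INSTALL_DIR=./azurestack-cluster    # hypothetical installation directory
if [ -f "${INSTALL_DIR}/metadata.json" ]; then
    ./openshift-install destroy cluster --dir "${INSTALL_DIR}" --log-level info
else
    echo "No metadata.json in ${INSTALL_DIR}; this is not a valid installation directory" >&2
    exit 1
fi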
Appendix A. Hardware and Network Protection | Appendix A. Hardware and Network Protection The best practice before deploying a machine into a production environment or connecting your network to the Internet is to determine your organizational needs and how security can fit into the requirements as transparently as possible. Since the main goal of the Security Guide is to explain how to secure Red Hat Enterprise Linux, a more detailed examination of hardware and physical network security is beyond the scope of this document. However, this chapter presents a brief overview of establishing security policies with respect to hardware and physical networks. Important factors to consider include how computing needs and connectivity requirements fit into the overall security strategy. The following explains some of these factors in detail. Computing involves more than just workstations running desktop software. Modern organizations require massive computational power and highly-available services, which can include mainframes, compute or application clusters, powerful workstations, and specialized appliances. With these organizational requirements, however, come increased susceptibility to hardware failure, natural disasters, and tampering or theft of equipment. Connectivity is the method by which an administrator intends to connect disparate resources to a network. An administrator may use Ethernet (hubbed or switched CAT-5/RJ-45 cabling), token ring, 10-base-2 coaxial cable, or even wireless (802.11 x ) technologies. Depending on which medium an administrator chooses, certain media and network topologies require complementary technologies such as hubs, routers, switches, base stations, and access points. Determining a functional network architecture allows an easier administrative process if security issues arise. From these general considerations, administrators can get a better view of implementation. The design of a computing environment can then be based on both organizational needs and security considerations - an implementation that evenly assesses both factors. A.1. Secure Network Topologies The foundation of a LAN is the topology , or network architecture. A topology is the physical and logical layout of a LAN in terms of resources provided, distance between nodes, and transmission medium. Depending upon the needs of the organization that the network services, there are several choices available for network implementation. Each topology has unique advantages and security issues that network architects should regard when designing their network layout. A.1.1. Physical Topologies As defined by the Institute of Electrical and Electronics Engineers (IEEE), there are three common topologies for the physical connection of a LAN. A.1.1.1. Ring Topology The Ring topology connects each node using exactly two connections. This creates a ring structure where each node is accessible to the other, either directly by its two physically closest neighboring nodes or indirectly through the physical ring. Token Ring, FDDI, and SONET networks are connected in this fashion (with FDDI utilizing a dual-ring technique); however, there are no common Ethernet connections using this physical topology, so rings are not commonly deployed except in legacy or institutional settings with a large installed base of nodes (for example, a university). A.1.1.2. Linear Bus Topology The linear bus topology consists of nodes which connect to a terminated main linear cable (the backbone). 
The linear bus topology requires the least amount of cabling and networking equipment, making it the most cost-effective topology. However, the linear bus depends on the backbone being constantly available, making it a single point-of-failure if it has to be taken off-line or is severed. Linear bus topologies are commonly used in peer-to-peer LANs using co-axial (coax) cabling and 50-93 ohm terminators at both ends of the bus. A.1.1.3. Star Topology The Star topology incorporates a central point where nodes connect and through which communication is passed. This central point, called a hub , can be either broadcasted or switched . This topology does introduce a single point of failure in the centralized networking hardware that connects the nodes. However, because of this centralization, networking issues that affect segments or the entire LAN itself are easily traceable to this one source. A.1.2. Transmission Considerations Section A.1.1.3, "Star Topology" introduced the concept of broadcast and switched networking. There are several factors to consider when evaluating the type of networking hardware suitable and secure enough for your network environment. The following distinguishes these two distinct forms of networking. In a broadcast network, a node will send a packet that is received by every other node until the intended recipient accepts the packet. Every node in the network can conceivably receive this packet of data until the recipient processes the packet. In a broadcast network, all packets are sent in this manner. In a switched network, packets are not broadcasted, but are processed in the switched hub which, in turn, creates a direct connection between the sending and recipient nodes. This eliminates the need to broadcast packets to each node, thus lowering traffic overhead. The switched network also prevents packets from being intercepted by malicious nodes or users. In a broadcast network, where each node receives every packet on the way to its destination, malicious users can set their Ethernet device to promiscuous mode and accept all packets regardless of whether or not the data is intended for them. Once in promiscuous mode, a sniffer application can be used to filter, analyze, and reconstruct packets for passwords, personal data, and more. Sophisticated sniffer applications can store such information in text files and, perhaps, even send the information to arbitrary sources (for example, the malicious user's email address.) A switched network requires a network switch, a specialized piece of hardware that replaces the role of the traditional hub in which all nodes on a LAN are connected. Switches store MAC addresses of all nodes within an internal database, which they use to perform direct routing. Several manufacturers, including Cisco Systems, D-Link, SMC, and Netgear, offer various types of switches with features such as 10/100-Base-T compatibility, gigabit Ethernet support, and IPv6 networking. A.1.3. Wireless Networks An emerging issue for enterprises today is that of mobility. Remote workers, field technicians, and executives require portable solutions, such as laptops, Personal Digital Assistants (PDAs), and wireless access to network resources. The IEEE has established a standards body for the 802.11 wireless specification, which establishes standards for wireless data communication throughout all industries. The currently approved IEEE standard is 802.11g for wireless networking, while 802.11a and 802.11b are legacy standards.
The 802.11g standard is backwards-compatible with 802.11b, but is incompatible with 802.11a. The 802.11b and 802.11g specifications are actually a group of standards governing wireless communication and access control on the unlicensed 2.4GHz radio-frequency (RF) spectrum (802.11a uses the 5GHz spectrum). These specifications have been approved as standards by the IEEE, and several vendors market 802.11 x products and services. Consumers have also embraced the standard for small-office/home-office (SOHO) networks. The popularity has also extended from LANs to MANs (Metropolitan Area Networks), especially in populated areas where a concentration of wireless access points (WAPs) is available. There are also wireless Internet service providers (WISPs) that cater to frequent travelers requiring broadband Internet access to conduct business remotely. The 802.11 x specifications allow for direct, peer-to-peer connections between nodes with wireless NICs. This loose grouping of nodes, called an ad hoc network, is ideal for quick connection sharing between two or more nodes, but introduces scalability issues that are not suitable for dedicated wireless connectivity. A more suitable solution for wireless access in fixed structures is to install one or more WAPs that connect to the traditional network and allow wireless nodes to connect to the WAP as if it were on the Ethernet-based network. The WAP effectively acts as a bridge between the nodes connected to it and the rest of the network. A.1.3.1. 802.11 x Security Although wireless networking is comparable in speed to and certainly more convenient than traditional wired networking mediums, there are some limitations to the specification that warrant thorough consideration. The most important of these limitations is in its security implementation. In the excitement of successfully deploying an 802.11 x network, many administrators fail to exercise even the most basic security precautions. Since all 802.11 x networking is done using high-band RF signals, the data transmitted is easily accessible to any user with a compatible NIC, a wireless network scanning tool such as NetStumbler or Wellenreiter , and common sniffing tools such as dsniff and snort . To prevent such aberrant usage of private wireless networks, the 802.11b standard uses the Wired Equivalent Privacy (WEP) protocol, which is an RC4-based 64- or 128-bit encrypted key shared between each node or between the WAP and the node. This key encrypts transmissions and decrypts incoming packets dynamically and transparently. Administrators often fail to employ this shared-key encryption scheme, however; either they forget to do so or choose not to do so because of performance degradation (especially over long distances). However, enabling WEP on a wireless network can greatly reduce the possibility of data interception. Red Hat Enterprise Linux supports various 802.11 x products from several vendors. The Network Administration Tool includes a facility for configuring wireless NICs and WEP security. For information about using the Network Administration Tool , refer to the System Administrators Guide . Relying on WEP, however, is still not a sufficiently sound means of protection against determined malicious users. There are specialized utilities specifically designed to crack the RC4 WEP encryption algorithm protecting a wireless network and to expose the shared key. AirSnort and WEP Crack are two such specialized applications.
To protect against this, administrators should adhere to strict policies regarding usage of wireless methods to access sensitive information. Administrators may choose to augment the security of wireless connectivity by restricting it only to SSH or VPN connections, which introduce an additional encryption layer above the WEP encryption. Using this policy, a malicious user outside of the network that cracks the WEP encryption has to additionally crack the VPN or SSH encryption which, depending on the encryption method, can employ up to triple-strength 168-bit DES algorithm encryption (3DES), or proprietary algorithms of even greater strength. Administrators who apply these policies should restrict plain text protocols such as Telnet or FTP, as passwords and data can be exposed using any of the aforementioned attacks. A recent method of security and authentication that has been adopted by wireless networking equipment manufacturers is Wi-fi Protected Access ( WPA ). Administrators can configure WPA on their network by using an authentication server that manages keys for clients accessing the wireless network. WPA improves upon WEP encryption by using Temporal Key Integrity Protocol ( TKIP ), which is a method of using a shared key and associating it with the MAC address of the wireless network card installed on the client system. The value of the shared key and MAC address is then processed by an initialization vector ( IV ), which is used to generate a key that encrypts each data packet. The IV changes the key each time a packet is transferred, preventing most common wireless network attacks. However, WPA using TKIP is thought of as a temporary solution. Solutions using stronger encryption ciphers (such as AES) are under development, and have the potential to improve wireless network security in the enterprise. For more information about 802.11 standards, refer to the following URL: A.1.4. Network Segmentation and DMZs For administrators who want to run externally-accessible services such as HTTP, email, FTP, and DNS, it is recommended that these publicly available services be physically and/or logically segmented from the internal network. Firewalls and the hardening of hosts and applications are effective ways to deter casual intruders. However, determined crackers can find ways into the internal network if the services they have cracked reside on the same network segment. The externally accessible services should reside on what the security industry regards as a demilitarized zone (DMZ), a logical network segment where inbound traffic from the Internet would only be able to access those services and are not permitted to access the internal network. This is effective in that, even if a malicious user exploits a machine on the DMZ, the rest of the internal network lies behind a firewall on a separated segment. Most enterprises have a limited pool of publicly routable IP addresses from which they can host external services, so administrators utilize elaborate firewall rules to accept, forward, reject, and deny packet transmissions. Firewall policies implemented with iptables or using dedicated hardware firewalls allow for complex routing and forwarding rules. Administrators can use these policies to segment inbound traffic to specific services at specified addresses and ports while allowing only LAN access to internal services, which can prevent IP spoofing exploits. For more information about implementing iptables , refer to Chapter 7, Firewalls . | [
"http://standards.ieee.org/getieee802/802.11.html"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-netprot |
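To make the segmentation described in Section A.1.4 more concrete, the following iptables sketch assumes a firewall with eth0 facing the Internet, eth1 facing the DMZ, eth2 facing the internal LAN, and a DMZ web server at 10.0.4.2; the interface names and addresses are hypothetical, and a production policy would also need NAT, logging, and default-deny rules:

# Forward inbound HTTP from the Internet to the DMZ web server only
iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 10.0.4.2 --dport 80 -j ACCEPT
# Allow replies for established connections back out of the DMZ
iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# Do not forward traffic from the Internet or the DMZ into the internal LAN
iptables -A FORWARD -i eth0 -o eth2 -j DROP
iptables -A FORWARD -i eth1 -o eth2 -j DROP

Even with such rules in place, the hosts in the DMZ should still be hardened individually, as described above.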
5.221. openssh | 5.221. openssh 5.221.1. RHBA-2012:1443 - openssh bug fix update Updated openssh packages that fix a bug are now available for Red Hat Enterprise Linux 6. [Updated 12 Nov 2012] This advisory has been updated with an accurate description for BZ#871127 to indicate that the nature of the bug is not architecture-specific. This update does not change the packages in any way. OpenSSH is OpenBSD's SSH (Secure Shell) protocol implementation. These packages include the core files necessary for both the OpenSSH client and server. Bug Fix BZ# 871127 When SELinux was disabled on the system, no on-disk policy was installed, an user account was used for a connection, and no "~/.ssh" configuration was present in that user's home directory, the ssh client could terminate unexpectedly with a segmentation fault when attempting to connect to another system. A patch has been provided to address this issue and the crashes no longer occur in the described scenario. All openssh users are advised to upgrade to these updated packages, which fix this bug. 5.221.2. RHSA-2012:0884 - Low: openssh security, bug fix, and enhancement update Updated openssh packages that fix one security issue, several bugs, and add various enhancements are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. OpenSSH is OpenBSD's Secure Shell (SSH) protocol implementation. These packages include the core files necessary for the OpenSSH client and server. Security Fix CVE-2011-5000 A denial of service flaw was found in the OpenSSH GSSAPI authentication implementation. A remote, authenticated user could use this flaw to make the OpenSSH server daemon (sshd) use an excessive amount of memory, leading to a denial of service. GSSAPI authentication is enabled by default ("GSSAPIAuthentication yes" in "/etc/ssh/sshd_config"). Bug Fixes BZ# 732955 SSH X11 forwarding failed if IPv6 was enabled and the parameter X11UseLocalhost was set to "no". Consequently, users could not set X forwarding. This update fixes sshd and ssh to correctly bind the port for the IPv6 protocol. As a result, X11 forwarding now works as expected with IPv6. BZ# 744236 The sshd daemon was killed by the OOM killer when running a stress test. Consequently, a user could not log in. With this update, the sshd daemon sets its oom_adj value to -17. As a result, sshd is not chosen by OOM killer and users are able to log in to solve problems with memory. BZ# 809619 If the SSH server is configured with a banner that contains a backslash character, then the client will escape it with another "\" character, so it prints double backslashes. An upstream patch has been applied to correct the problem and the SSH banner is now correctly displayed. Enhancements BZ# 657378 Previously, SSH allowed multiple ways of authentication of which only one was required for a successful login. SSH can now be set up to require multiple ways of authentication. For example, logging in to an SSH-enabled machine requires both a passphrase and a public key to be entered. The RequiredAuthentications1 and RequiredAuthentications2 options can be configured in the /etc/ssh/sshd_config file to specify authentications that are required for a successful login. 
For example, to set key and password authentication for SSH version 2, type: For more information on the aforementioned /etc/ssh/sshd_config options, refer to the sshd_config man page. BZ# 756929 Previously, OpenSSH could use the Advanced Encryption Standard New Instructions (AES-NI) instruction set only with the AES Cipher-block chaining (CBC) cipher. This update adds support for Counter (CTR) mode encryption in OpenSSH so the AES-NI instruction set can now be used efficiently also with the AES CTR cipher. BZ# 798241 Prior to this update, an unprivileged slave sshd process was run as the sshd_t context during privilege separation (privsep). sshd_t is the SELinux context used for running the sshd daemon. Given that the unprivileged slave process is run under the user's UID, it is fitting to run this process under the user's SELinux context instead of the privileged sshd_t context. With this update, the unprivileged slave process is now run as the user's context instead of the sshd_t context in accordance with the principle of privilege separation. The unprivileged process, which might be potentially more sensitive to security threats, is now run under the user's SELinux context. Users are advised to upgrade to these updated openssh packages, which contain backported patches to resolve these issues and add these enhancements. After installing this update, the OpenSSH server daemon (sshd) will be restarted automatically. | [
"echo \"RequiredAuthentications2 publickey,password\" >> /etc/ssh/sshd_config"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/openssh |
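To make the configuration-related enhancements above concrete, the following /etc/ssh/sshd_config fragment is a hypothetical sketch rather than a recommended baseline: it requires both a public key and a password for SSH protocol 2 logins and limits the cipher list to CTR-mode AES so that the AES-NI code path can be used:

# Require both a public key and a password for a successful login (SSH protocol 2)
RequiredAuthentications2 publickey,password
# Prefer CTR-mode AES ciphers, which benefit from the AES-NI instruction set
Ciphers aes128-ctr,aes192-ctr,aes256-ctr

After editing the file, restart the daemon with "service sshd restart" for the new settings to take effect.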