Chapter 4. Alerts
Chapter 4. Alerts 4.1. Setting up alerts For internal mode clusters, alerts related to the storage metrics services, storage cluster, disk devices, cluster health, cluster capacity, and so on are displayed in the Block and File and Object dashboards. These alerts are not available for external mode clusters. Note: it might take a few minutes for alerts to appear in the alert panel, because only firing alerts are visible there. You can also view alerts with additional details and customize how alerts are displayed in OpenShift Container Platform. For more information, see Managing alerts.
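For example, on an internal mode cluster you can list the alerting rules that OpenShift Data Foundation installs from the command line. This is a minimal sketch; it assumes the default openshift-storage namespace and a logged-in user with permission to read monitoring resources:

# List the PrometheusRule objects that define the storage alerts.
oc get prometheusrules -n openshift-storage

# Print only the alert names defined in those rules.
oc get prometheusrules -n openshift-storage -o yaml | grep 'alert:'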
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/monitoring_openshift_data_foundation/alerts
Chapter 4. New features
Chapter 4. New features This part describes new features and major enhancements introduced in Red Hat Enterprise Linux 9.2. 4.1. Installer and image creation A new and improved way to create blueprints and images in the image builder web console With this enhancement, you have access to a unified version of the image builder tool and a significant improvement in your user experience. Notable enhancements in the image builder dashboard GUI include: You can now customize your blueprints with all the customizations previously supported only in the CLI, such as kernel, file system, firewall, locale, and other customizations. You can import blueprints by either uploading or dragging the blueprint in the .JSON or .TOML format and create images from the imported blueprint. You can also export or save your blueprints in the .JSON or .TOML format. Access to a blueprint list that you can sort, filter, and is case-sensitive. With the image builder dashboard, you can now access your blueprints, images, and sources by navigating through the following tabs: Blueprint - Under the Blueprint tab, you can now import, export, or delete your blueprints. Images - Under the Images tab, you can: Download images. Download image logs. Delete images. Sources - Under the Sources tab, you can: Download images. Download image logs. Create sources for images. Delete images. Jira:RHELPLAN-139448 Ability to create customized files and directories in the /etc directory With this enhancement, two new blueprint customizations are available. The [[customizations.files]] and the [[customizations.directories]] blueprint customizations enable you to create customized files and directories in the /etc directory of your image. Currently, you can use these customization only in the /etc directory. The [[customizations.directories]] enables you to: Create new directories Set user and group ownership for the directory Set the mode permission in the octal format With the [[customizations.files]] blueprint customizations you can: Create new files under the parent / directory Modifying existing files - this overrides the existing content Set user and group ownership for the file you are creating Set the mode permission in the octal format Note The new blueprint customizations are supported by all the image types, such as edge-container , edge-commit , among others. The customizations not supported in the blueprints used to create Installer images, such as edge-raw-image , edge-installer , and edge-simplified-installer . Jira:RHELPLAN-147428 Ability to specify user in a blueprint for simplified-installer images Previously, when creating a blueprint for a simplified-installer image, you could not specify a user in the blueprint customization, because the customization was not used and was discarded. With this update, when you create an image from the blueprint, this blueprint creates a user under the /usr/lib/passwd directory and a password under the /usr/etc/shadow directory during installation time. You can log in to the device with the username and the password you created for the blueprint. Note that after you access the system, you need to create users, for example, using the useradd command. Jira:RHELPLAN-149091 Support for 64-bit ARM for .vhd images built with image builder Previously, Microsoft Azure .vhd images created with the image builder tool were not supported on 64-bit ARM architectures. 
This update adds support for 64-bit ARM Microsoft Azure .vhd images and now you can build your .vhd images using image builder and upload them to the Microsoft Azure cloud. Jira:RHELPLAN-139424 Minimal RHEL installation now installs only the s390utils-core package In RHEL 8.4 and later, the s390utils-base package is split into an s390utils-core package and an auxiliary s390utils-base package. As a result, setting the RHEL installation to minimal-environment installs only the necessary s390utils-core package and not the auxiliary s390utils-base package. If you want to use the s390utils-base package with a minimal RHEL installation, you must manually install the package after completing the RHEL installation or explicitly install s390utils-base using a kickstart file. Bugzilla:1932480 4.2. RHEL for Edge Ignition support in RHEL for Edge Simplified images With this enhancement, you can add an Ignition file to the Simplified Installer images by customizing your blueprint. Both GUI and CLI have support for the Ignition customization. RHEL for Edge uses the Ignition provisioning utility to inject the user configuration into the images at an early stage of the boot process. On the first boot, Ignition reads its configuration either from a remote URL or a file embedded in the Simplified Installer image and applies that configuration into the image. Jira:RHELPLAN-139659 Simplified Installer images can now be composed without the FDO customization section in the blueprint Previously, to build a RHEL for Edge Simplified Installer image, you had to add details to the FIDO device onboarding (FDO) customization section. Otherwise, the image build would fail. With this update, the FDO customization in blueprints is now optional, and you can build RHEL for Edge Simplified Installer image with no errors. Jira:RHELPLAN-139655 Red Hat build of MicroShift enablement for RHEL for Edge images With this enhancement, you can enable Red Hat build of MicroShift services in a RHEL for Edge system. By using the [[customizations.firewalld.zones]] blueprint customization, you can add support for firewalld sources in the blueprint customization. For that, specify a name for the zone and a list of sources in that specific zone. Sources can be of the form source[/mask]|MAC|ipset:ipset . The following is a blueprint example on how to configure and customize support for Red Hat build of MicroShift services in a RHEL for Edge system. The Red Hat build of MicroShift installation requirements, such as firewall policies, MicroShift RPM, systemd service, enable you to create a deployment ready for production to achieve workload portability to a minimum field deployed edge device and by default LVM device mapper enablement. Jira:RHELPLAN-136489 4.3. Software management New dnf offline-upgrade command for offline updates on RHEL With this enhancement, you can apply offline updates to RHEL by using the new dnf offline-upgrade command from the DNF system-upgrade plug-in. Important The dnf system-upgrade command included in the system-upgrade plug-in is not supported on RHEL. Bugzilla:2131288 Applying advisory security filters to dnf offline-upgrade is now supported With this enhancement, the new functionality for advisories filtering has been added. As a result, you can now download packages and their dependencies only from the specified advisory by using the dnf offline-upgrade command with advisory security filters ( --advisory , --security , --bugfix , and other filters). 
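For example, an offline update restricted to security advisories could look like the following sketch (the advisory ID is illustrative):

# Download only packages that belong to security advisories, plus their dependencies.
dnf offline-upgrade download --security

# Alternatively, restrict the download to a single advisory.
dnf offline-upgrade download --advisory=RHSA-2023:1234

# Reboot into the offline transaction; the packages are applied before the system comes back up.
dnf offline-upgrade reboot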
Bugzilla:2139326 The unload_plugins function is now available for the DNF API With this enhancement, a new unload_plugins function has been added to the DNF API to allow plug-ins unloading. Important Note that you must first run the init_plugins function, and then run the unload_plugins function. Bugzilla:2121662 New --nocompression option for rpm2archive With this enhancement, the --nocompression option has been added to the rpm2archive utility. You can use this option to avoid compression when directly unpacking an RPM package. Bugzilla:2150804 4.4. Shells and command-line tools ReaR is now fully supported also on the 64-bit IBM Z architecture Basic Relax and Recover (ReaR) functionality, previously available on the 64-bit IBM Z architecture as a Technology Preview, is fully supported with the rear package version 2.6-17.el9 or later. You can create a ReaR rescue image on the IBM Z architecture in the z/VM environment only. Backing up and recovering logical partitions (LPARs) is not supported at the moment. ReaR supports saving and restoring disk layout only on Extended Count Key Data (ECKD) direct access storage devices (DASDs). Fixed Block Access (FBA) DASDs and SCSI disks attached through Fibre Channel Protocol (FCP) are not supported for this purpose. The only output method currently available is Initial Program Load (IPL), which produces a kernel and an initial ramdisk (initrd) compatible with the zIPL bootloader. For more information, see Using a ReaR rescue image on the 64-bit IBM Z architecture . Bugzilla:2046653 systemd rebased to version 252 The systemd package has been upgraded to version 252. Notable changes include: You can specify the default timeout when waiting for device units to activate by using the DefaultDeviceTimeoutSec= option in system.conf and user.conf files. At shutdown, systemd now logs about processes blocking unmounting of file systems. You can now use drop-ins for transient units too. You can use size suffixes, such as K, M, G, T and others in the ConditionMemory= option. You can list automount points by using the systemctl list-automounts command. You can use the systemd-logind utility to stop an idle session after a preconfigured timeout by using the StopIdleSessionSec= option. The systemd-udev utility now creates the infiniband by-path and infiniband by-ibdev links for Infiniband verbs devices. The systemd-tmpfiles utility now gracefully handles the absent source of C copy. The systemd-repart utility now generates dm-verity partitions, including signatures. Bugzilla:2217931 Updated systemd-udevd assigns consistent network device names to InfiniBand interfaces Introduced in RHEL 9, the new version of the systemd package contains the updated systemd-udevd device manager. The device manager changes the default names of InfiniBand interfaces to consistent names selected by systemd-udevd . You can define custom naming rules for naming InfiniBand interfaces by following the Renaming IPoIB devices procedure. For more details of the naming scheme, see the systemd.net-naming-scheme(7) man page. Bugzilla:2136937 4.5. Infrastructure services chrony rebased to version 4.3 The chrony suite has been updated to version 4.3. Notable enhancements over version 4.2 include: Added long-term quantile-based filtering of Network Time Protocol (NTP) measurements. You can enable this feature by adding the maxdelayquant option to the pool , server , or peer directive. Added the selection log to provide more information about chronyd selection of sources. 
You can enable the selection log by adding the selection option to the log directive. Improved synchronization stability when using the hardware timestamping and Pulse-Per-Second Hardware Clock (PHC) reference clocks. Added support for the system clock stabilization using a free-running stable clock, for example, Temperature Compensated Crystal Oscillator (TCXO), Oven-Controlled Crystal Oscillator (OCXO), or an atomic clock. Increased the maximum polling rate to 128 messages per second. Bugzilla:2133754 frr rebased to version 8.3.1 The frr package for managing dynamic routing stack has been updated to version 8.3.1. Notable changes over version 8.2.2 include: Added a new set of commands to interact with the Border Gateway Protocol (BGP): the set as-path replace command to replace the Autonomous System (AS) path attribute of a BGP route with a new value. the match peer command to match a specific BGP peer or group when configuring a BGP route map. the ead-es-frag evi-limit command to set a limit on the number of Ethernet A-D per EVI fragments that can be sent in a given period of time in EVPN. the match evpn route-type command to take specific actions on certain types of EVPN routes, such as route-target, route-distinguisher, or MAC/IP routes. Added the show thread timers command in the VTYSH command-line interface for interacting with FRR daemons. Added the show ip ospf reachable-routers command to display a list of routers that are currently reachable through the OSPF protocol. Added new commands to interact with the Protocol Independent Multicast (PIM) daemon: the debug igmp trace detail command to enable debugging for Internet Group Management Protocol (IGMP) messages with detailed tracing. the ip pim passive command to to configure the interface as passive, not sending PIM messages. Added new outputs for the show zebra command, such as ECMP, EVPN, MPLS statuses. Added the show ip nht mrib command to the ZEBRA component to display multicast-related information from the mroute table in the kernel. Bugzilla:2129731 vsftpd rebased to version 3.0.5 The Very Secure FTP Daemon ( vsftpd ) provides a secure method of transferring files between hosts. The vsftpd package has been updated to version 3.0.5. Notable changes and enhancements include the following SSL modernizations: By default, the vsftpd utility now requires the use of TLS version 1.2 or later for secure connections. The vsftpd utility is now compatible with the latest FileZilla client. Bugzilla:2018284 The frr package now contains targeted SELinux policy Due to the fast development of the frr package for managing dynamic routing stack, new features and access vector cache (AVC) issues arose frequently. With this enhancement, the SELinux rules are now packaged together with FRR to address any issues faster. SELinux adds an additional level of protection to the package by enforcing mandatory access control policies. Bugzilla:2129743 powertop rebased to version 2.15 The powertop package for improving the energy efficiency has been updated to version 2.15. Notable changes and enhancements include: Several Valgrind errors and possible buffer overrun have been fixed to improve the powertop tool stability. Improved compatibility with Ryzen processors and Kaby Lake platforms. Enabled Lake Field, Alder Lake N, and Raptor Lake platforms support. Enabled Ice Lake NNPI and Meteor Lake mobile and desktop support. 
Bugzilla:2044132 The systemd-sysusers utility is available in the chrony , dhcp , radvd , and squid packages The systemd-sysusers utility creates system users and groups during package installation and removes them during a removal of the package. With this enhancement, the following packages contain the systemd-sysusers utility in their scriptlets: chrony , dhcp , radvd , squid . Jira:RHELPLAN-136485 New synce4l package for frequency synchronization is now available SyncE (Synchronous Ethernet) is a hardware feature that enables PTP clocks to achieve precise synchronization of frequency at the physical layer. SyncE is supported in certain network interface cards (NICs) and network switches. With this enhancement, the new synce4l package is now available, which provides support for SyncE. As a result, Telco Radio Access Network (RAN) applications can now achieve more efficient communication due to more accurate time synchronization. Bugzilla:2143264 tuned rebased to version 2.20.0 The TuneD utility for optimizing the performance of applications and workloads has been updated to version 2.20.0. Notable changes and enhancements over version 2.19.0 include: An extension of API enables you to move devices between plug-in instances at runtime. The plugin_cpu module, which provides fine-tuning of CPU-related performance settings, introduces the following enhancements: The pm_qos_resume_latency_us feature enables you to limit the maximum time allowed for each CPU to transition from an idle state to an active state. TuneD adds support for the intel_pstate scaling driver, which provides scaling algorithms to tune the systems' power management based on different usage scenarios. The socket API to control TuneD through a Unix domain socket is now available as a Technology Preview. See Socket API for TuneD available as a Technology Preview for more information. Bugzilla:2133815 , Bugzilla:2113925 , Bugzilla:2118786 , Bugzilla:2095829 4.6. Security Libreswan rebased to 4.9 The libreswan packages have been upgraded to version 4.9. Notable changes over the version include: Support for the {left,right}pubkey= options to the addconn and whack utilities KDF self-tests Show host's authentication key ( showhostkey ): Support for ECDSA public keys New --pem option to print PEM encoded public key The Internet Key Exchange Protocol Version 2 (IKEv2): Extensible Authentication Protocol - Transport Layer Security (EAP-TLS) support EAP-only Authentication support The pluto IKE daemon: Support for maxbytes and maxpacket counters Bugzilla:2128669 OpenSSL rebased to 3.0.7 The OpenSSL packages have been rebased to version 3.0.7, which contains various bug fixes and enhancements. Most notably, the default provider now includes the RIPEMD160 hash function. Bugzilla:2129063 libssh now supports smart cards You can now use smart cards through Public-Key Cryptography Standard (PKCS) #11 Uniform Resource Identifier (URI). As a result, you can use smart cards with the libssh SSH library and with applications that use libssh . Bugzilla:2026449 libssh rebased to 0.10.4 The libssh library, which implements the SSH protocol for secure remote access and file transfer between machines, has been updated to version 0.10.4. New features: Support for OpenSSL 3.0 has been added. Support for smart cards has been added. Two new configuration options IdentityAgent and ModuliFile have been added. 
Other notable changes include: OpenSSL versions older than 1.0.1 are no longer supported By default, Digital Signature Algorithm (DSA) support has been disabled at build time. The SCP API has been deprecated. The pubkey and privatekey APIs have been deprecated. Bugzilla:2068475 SELinux user-space packages updated to 3.5 The SELinux user-space packages libselinux , libsepol , libsemanage , checkpolicy , mcstrans , and policycoreutils , which includes the sepolicy utility, have been updated to version 3.5. Notable enhancements and bug fixes include: The sepolicy utility: Added missing booleans to man pages Several Python and GTK updates Added a workaround to libselinux that reduces heap memory usage by the PCRE2 library The libsepol package: Rejects attributes in type AV rules for kernel policies No longer writes empty class definitions, which allows simpler round-trip tests Stricter policy validation The fixfiles script unmounts temporary bind mounts on the SIGINT signal Many code and spelling bugs fixed Removed dependency on the deprecated Python module distutils and the installation using PIP The semodule option --rebuild-if-modules-changed renamed to --refresh Translation updated for generated descriptions and improved handling of unsupported languages Fixed many static code analysis bugs, fuzzer problems, and compiler warnings Bugzilla:2145224 , Bugzilla:2145228 , Bugzilla:2145229 , Bugzilla:2145226 , Bugzilla:2145230 , Bugzilla:2145231 OpenSCAP rebased to 1.3.7 The OpenSCAP packages have been rebased to upstream version 1.3.7. This version provides various bug fixes and enhancements, most notably: Fixed error when processing OVAL filters ( RHBZ#2126882 ) OpenSCAP no longer emits invalid empty xmlfilecontent items if XPath does not match ( RHBZ#2139060 ) Prevented Failed to check available memory errors ( RHBZ#2111040 ) Bugzilla:2159286 SCAP Security Guide rebased to 0.1.66 The SCAP Security Guide (SSG) packages have been rebased to upstream version 0.1.66. This version provides various enhancements and bug fixes, most notably: New CIS RHEL9 profiles Deprecation of rule account_passwords_pam_faillock_audit in favor of accounts_passwords_pam_faillock_audit Bugzilla:2158405 New SCAP rule for idle session termination New SCAP rule logind_session_timeout has been added to the scap-security-guide package in ANSSI-BP-028 profiles for Enhanced and High levels. This rule uses a new feature of the systemd service manager and terminates idle user sessions after a certain time. This rule provides automatic configuration of a robust idle session termination mechanism which is required by multiple security policies. As a result, OpenSCAP can automatically check the security requirement related to terminating idle user sessions and, if necessary, remediate it. Bugzilla:2122325 scap-security-guide rules for Rsyslog log files are compatible with RainerScript logs Rules in scap-security-guide for checking and remediating ownership, group ownership, and permissions of Rsyslog log files are now also compatible with the RainerScript syntax. Modern systems already use the RainerScript syntax in Rsyslog configuration files and the respective rules were not able to recognize this syntax. As a result, scap-security-guide rules can now check and remediate ownership, group ownership, and permissions of Rsyslog log files in both available syntaxes. Bugzilla:2169414 Keylime rebased to 6.5.2 The keylime packages have been rebased to upstream version - keylime-6.5.2-5.el9. 
This version contains various enhancements and bug fixes, most notably the following: Addressed vulnerability CVE-2022-3500 The Keylime agent no longer fails IMA attestation when one scripts is executed quickly after another RHBZ#2138167 Fixed segmentation fault in the /usr/share/keylime/create_mb_refstate script RHBZ#2140670 Registrar no longer crashes during EK validation when the require_ek_cert option is enabled RHBZ#2142009 Bugzilla:2150830 Clevis accepts external tokens With the new -e option introduced to the Clevis automated encryption tool, you can provide an external token ID to avoid entering your password during cryptsetup . This feature makes the configuration process more automated and convenient, and is useful particularly for packages such as stratis that use Clevis. Bugzilla:2126533 Rsyslog TLS-encrypted logging now supports multiple CA files With the new NetstreamDriverCaExtraFiles directive, you can specify a list of additional certificate authority (CA) files for TLS-encrypted remote logging. Note that the new directive is available only for the ossl (OpenSSL) Rsyslog network stream driver. Bugzilla:2124849 Rsyslog privileges are limited The privileges of the Rsyslog log processing system are now limited to only the privileges explicitly required by Rsyslog. This minimizes security exposure in case of a potential error in input resources, for example, a networking plugin. As a result, Rsyslog has the same functionality but does not have unnecessary privileges. Bugzilla:2127404 SELinux policy allows Rsyslog to drop privileges at start Because the privileges of the Rsyslog log processing system are now more limited to minimize security exposure ( RHBZ#2127404 ), the SELinux policy has been updated to allow the rsyslog service to drop privileges at start. Bugzilla:2151841 Tang now uses systemd-sysusers The Tang network presence server now adds system users and groups through the systemd-sysusers service instead of shell scripts containing useradd commands. This simplifies checking of the system user list, and you can also override definitions of system users by providing sysuser.d files with higher priority. Bugzilla:2095474 opencryptoki rebased to 3.19.0 The opencryptoki package has been rebased to version 3.19.0, which provides many enhancements and bug fixes. Most notably, opencryptoki now supports the following features: IBM-specific Dilithium keys Dual-function cryptographic functions Cancelling active session-based operations by using the new C_SessionCancel function, as described in the PKCS #11 Cryptographic Token Interface Base Specification v3.0 Schnorr signatures through the CKM_IBM_ECDSA_OTHER mechanism Bitcoin key derivation through the CKM_IBM_BTC_DERIVE mechanism EP11 tokens in IBM z16 systems Bugzilla:2110314 SELinux now confines mptcpd and udftools With this update of the selinux-policy packages, SELinux confines the following services: mptcpd udftools Bugzilla:1972222 fapolicyd now provides filtering of the RPM database With the new configuration file /etc/fapolicyd/rpm-filter.conf , you can customize the list of RPM-database files that the fapolicyd software framework stores in the trust database. This way, you can block certain applications installed by RPM or allow an application denied by the default configuration filter. Jira:RHEL-192 GnuTLS can add and remove padding during decryption and encryption The implementation of certain protocols requires PKCS#7 padding during decryption and encryption. 
The gnutls_cipher_encrypt3 and gnutls_cipher_decrypt3 block cipher functions have been added to GnuTLS to transparently handle padding. As a result, you can now use these functions in combination with the GNUTLS_CIPHER_PADDING_PKCS7 flag to automatically add or remove padding if the length of the original plaintext is not a multiple of the block size. Bugzilla:2084161

NSS no longer supports RSA keys shorter than 1023 bits The update of the Network Security Services (NSS) libraries changes the minimum key size for all RSA operations from 128 to 1023 bits. This means that NSS no longer performs the following functions: Generate RSA keys shorter than 1023 bits. Sign or verify RSA signatures with RSA keys shorter than 1023 bits. Encrypt or decrypt values with RSA keys shorter than 1023 bits. Bugzilla:2091905

The Extended Master Secret TLS Extension is now enforced on FIPS-enabled systems With the release of the RHSA-2023:3722 advisory, the TLS Extended Master Secret (EMS) extension (RFC 7627) is mandatory for TLS 1.2 connections on FIPS-enabled RHEL 9 systems. This is in accordance with FIPS-140-3 requirements. TLS 1.3 is not affected. Legacy clients that do not support EMS or TLS 1.3 now cannot connect to FIPS servers running on RHEL 9. Similarly, RHEL 9 clients in FIPS mode cannot connect to servers that support only TLS 1.2 without EMS. In practice, this means that these clients cannot connect to servers on RHEL 6, RHEL 7, and non-RHEL legacy operating systems, because the legacy 1.0.x versions of OpenSSL do not support EMS or TLS 1.3. In addition, connecting from a FIPS-enabled RHEL client to a hypervisor such as VMware ESX now fails with a Provider routines::ems not enabled error if the hypervisor uses TLS 1.2 without EMS. To work around this problem, update the hypervisor to support TLS 1.3 or TLS 1.2 with the EMS extension. For VMware vSphere, this means version 8.0 or later. For more information, see TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2. Bugzilla:2188046, Bugzilla:2218721

4.7. Networking

NetworkManager rebased to version 1.42.2 The NetworkManager packages have been upgraded to upstream version 1.42.2, which provides a number of enhancements and bug fixes over the previous version: Ethernet bonds support source load balancing. NetworkManager can manage connections on the loopback device. Support for IPv4 equal-cost multi-path (ECMP) routes was added. Support for 802.1ad tagging in Virtual Local Area Network (VLAN) connections was added. The nmtui application supports Wi-Fi WPA-Enterprise, Ethernet with 802.1X authentication, and MACsec connection profiles. NetworkManager rejects DHCPv6 leases if all addresses fail IPv6 duplicate address detection (DAD). For further information about notable changes, read the upstream release notes. Bugzilla:2134897

Introduction of the weight property in ECMP routing with NetworkManager With this update, RHEL 9 supports a new weight property when defining IPv4 Equal-Cost Multi-Path (ECMP) routes. You can configure multipath routing using NetworkManager to load-balance and stabilize network traffic. This allows multiple paths to be used for data transmission between two nodes, which improves network efficiency and provides redundancy in the event of a link failure. Conditions for using the weight property include: The valid values are 1-256. Define multiple next-hop routes as single-hop routes with the weight property. If you do not set weight, NetworkManager cannot merge the routes into an ECMP route.
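For example, two single-hop routes to the same prefix can be merged into one ECMP route by giving each a weight. This is a sketch; the connection name, prefix, and gateway addresses are illustrative:

# Two next hops for 192.0.2.0/24; NetworkManager merges them into a single ECMP route
# because both entries carry a weight attribute.
nmcli connection modify enp1s0 +ipv4.routes "192.0.2.0/24 198.51.100.1 weight=2"
nmcli connection modify enp1s0 +ipv4.routes "192.0.2.0/24 198.51.100.2 weight=1"
nmcli connection up enp1s0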
Bugzilla:2081302 NetworkManager update brings improved flexibility for DNS configuration across multiple networks With this update, you can use the existing [global-dns] section in the /etc/Networkmanager/NetworkManager.conf file to configure DNS options without specifying the nameserver value in the [global-dns-domain-*] section. This enables you to configure DNS options in the /etc/resolv.conf file while still relying on the DNS servers provided by the network connection for actual DNS resolution. As a result, the feature makes it easier and more flexible to manage your DNS settings when connecting to different networks with different DNS servers. Especially when you use the /etc/resolv.conf file to configure DNS options. Bugzilla:2019306 NetworkManager now supports a new vlan.protocol property With this update, the vlan interface type now accepts a new protocol property. The property type is string. The accepted values are either 802.1Q (default), or 802.1ad . The new property specifies which VLAN protocol controls the tag identifier for encapsulation. Bugzilla:2128809 NetworkManager now allows VLAN configuration over unmanaged interface With this enhancement, you can use an unmanaged networking interface as a base interface when configuring virtual LAN (VLAN) with NetworkManager. As a result, the VLAN base interface remains intact unless changed explicitly through the nmcli device set enp1s0 managed true command or other API of NetworkManager. Bugzilla:2110307 Configuring Multipath TCP using NetworkManager is now fully supported With this update, the NetworkManager utility provides you with the Multipath TCP (MPTCP) functionality. You can use nmcli commands to control MPTCP and make its settings persistent. For more information, see: Understanding Multipath TCP: High availability for endpoints and the networking highway of the future RFC 8684: TCP Extensions for Multipath Operation with Multiple Addresses Permanently configuring multiple paths for MPTCP applications Bugzilla:2029636 The NetworkManager utility now supports activating connections on the loopback interface Administrators can manage the loopback interface to: Add extra IP addresses to the loopback interface Define DNS configuration Define a special route, which does not bind to an interface Define a route rule, which is not interface-related Change Maximum Transmission Unit (MTU) size of the loopback interface Bugzilla:2073512 The balance-slb bonding mode is now supported The new balance-slb bonding mode Source load balancing requires no switch configuration. The balance-slb divides traffic on the source ethernet address using xmit_hash_policy = vlan+srcmac , and NetworkManager adds necessary nftables rules for traffic filtering. As a result, you can now create bond profiles with the balance-slb option enabled by using NetworkManager. Bugzilla:2128216 firewalld rebased to version 1.2 The firewalld package has been upgraded to version 1.2, which provides multiple enhancements. Notable changes include: Support for new services (for example netdata, IPFS) Fail-safe mode to ensure that the system remains protected and that network communication is not disrupted if the firewalld service encounters an error during its startup Tab-completion in command-line (CLI) for some of the firewalld policy commands Bugzilla:2125371 The firewalld now supports the startup failsafe mechanism With this enhancement, firewalld will fall back to failsafe defaults in case of a startup failure. 
This feature protects the host in case of invalid configurations or other startup issues. As a result, even if the user configuration is invalid, hosts running firewalld are now startup failsafe. Bugzilla:2077512 conntrack-tools rebased to version 1.4.7 The conntrack-tools package has been upgraded to version 1.4.7, which provides multiple bug fixes and enhancements. Notable changes include: Adds the IPS_HW_OFFLOAD flag, which specifies offloading of a conntrack entry to the hardware Adds clash_resolve and chaintoolong statistical counters Supports filtering events by IP address family Accepts yes or no as synonyms to on or off in the conntrackd.conf file Supports user space helper auto-loading upon daemon startup. Users do not have to manually run the nfct add helper commands Removes the -o userspace command option and always tags user space triggered events Logs external inject problems as warning only Ignores conntrack ID when looking up cache entries to allow for stuck old ones to be replaced Fixes broken parsing of IPv6 M-SEARCH requests in the ssdp cthelper module Eliminates the need for lazy binding technique in the nfct library Sanitizes protocol value parsing, catch invalid values Bugzilla:2132398 The nmstate API now supports IPv6 link-local addresses as DNS servers With this enhancement, you can use the nmstate API to set IPv6 link-local addresses as DNS servers. Use the <link-local_address>%<interface> format, for example: Bugzilla:2095207 The nmstate API now supports MPTCP flags This update enhances the nmstate API with support for MultiPath TCP (MPTCP) flags. As a result, you can use nmstate to set MPTCP address flags on interfaces with static or dynamic IP addresses. Bugzilla:2120473 The min-mtu and max-mtu properties added to MTU on all interfaces Previously, an exception message was not clear enough to understand the supported MTU ranges. This update introduces the min-mtu and max-mtu properties to all interfaces. As a result, nmstate will indicate the supported MTU range when the desired MTU is out of range. Bugzilla:2044150 NetworkManager now allows VLAN configuration over unmanaged interface With this enhancement, you can use an unmanaged networking interface as a base interface when configuring virtual LAN (VLAN) with NetworkManager. As a result, the VLAN base interface remains intact unless changed explicitly through the nmcli device set enp1s0 managed true command or other API of NetworkManager. Bugzilla:2058292 The balance-slb bonding mode is now supported The new balance-slb bonding mode Source load balancing requires no switch configuration. The balance-slb divides traffic on the source Ethernet address using xmit_hash_policy = vlan+srcmac , and NetworkManager adds necessary nftables rules for traffic filtering. As a result, you can now create bond profiles with the balance-slb option enabled by using NetworkManager. Bugzilla:2130240 A new weight property in Nmstate This update introduces the weight property in the Nmstate API and tooling suite. You can use weight to specify the relative weight of each path in the Equal Cost Multi-Path routes (ECMP) group. The weight is a number between 1 and 256. As a result, weight property in Nmstate provides greater flexibility and control over traffic distribution in an ECMP group. 
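A minimal Nmstate sketch of the same idea, applied with nmstatectl; the interface name and addresses are illustrative, and the weight key is assumed to follow the route schema described above:

cat > ecmp-routes.yml <<'EOF'
routes:
  config:
  - destination: 192.0.2.0/24
    next-hop-interface: enp1s0
    next-hop-address: 198.51.100.1
    weight: 2
  - destination: 192.0.2.0/24
    next-hop-interface: enp1s0
    next-hop-address: 198.51.100.2
    weight: 1
EOF
nmstatectl apply ecmp-routes.yml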
Bugzilla:2162401

xdp-tools rebased to version 1.3.1 The xdp-tools packages have been upgraded to upstream version 1.3.1, which provides a number of enhancements and bug fixes over the previous version: The following utilities have been added: xdp-bench : Performs XDP benchmarks on the receive side. xdp-monitor : Monitors XDP errors and statistics using kernel trace points. xdp-trafficgen : Generates and sends traffic through the XDP driver hook. The following features have been added to the libxdp library: The xdp_multiprog__xdp_frags_support() , xdp_program__set_xdp_frags_support() , and xdp_program__xdp_frags_support() functions have been added to support loading programs with XDP frags support, a feature that is also known as multibuffer XDP . The library performs proper reference counting when attaching programs to AF_XDP sockets. As a result, the application no longer has to manually detach XDP programs when using sockets. The libxdp library now detaches the program automatically when it is no longer used. The following functions have been added to the library: xdp_program__create() for creating xdp_program objects, xdp_program__clone() for cloning an xdp_program reference, and xdp_program__test_run() for running XDP programs through the BPF_PROG_TEST_RUN kernel API. When the LIBXDP_BPFFS_AUTOMOUNT environment variable is set, the libxdp library now supports automatic mounting of a bpffs virtual file system if none is found. A subset of the library features can now also function when no bpffs is mounted. Note that this version also changes the version number of the XDP dispatcher program that is loaded on the network devices. This means that you cannot use an old and a new version of libxdp and xdp-tools at the same time. The libxdp 1.3 library will display old versions of the dispatcher but will not automatically upgrade them. Additionally, after loading a program with libxdp 1.3, older versions will not interoperate with the newer one. Bugzilla:2160066

iproute rebased to version 6.1.0 The iproute package has been upgraded to version 6.1.0, which provides multiple bug fixes and enhancements. Notable changes include: Supports reading the vdpa device statistics (for example, reading statistics for the virtqueue data structures at index 1 and index 16). Updates the corresponding manual pages. Bugzilla:2155604

The kernel now logs the listening address in SYN flood messages This enhancement adds the listening IP address to SYN flood messages. As a result, if many processes are bound to the same port on different IP addresses, administrators can now clearly identify the affected socket. Bugzilla:2143850

Introduction of new nmstate attributes for the VLAN interface With this update of the nmstate framework, the following VLAN attributes were introduced: registration-protocol : VLAN Registration Protocol. The valid values are gvrp (GARP VLAN Registration Protocol), mvrp (Multiple VLAN Registration Protocol), and none . reorder-headers : reordering of output packet headers. The valid values are true and false . loose-binding : loose binding of the interface to the operating state of its primary device. The valid values are true and false . You can set these attributes in the vlan section of an interface in your nmstate YAML configuration file. Jira:RHEL-19142

4.8. Kernel

Kernel version in RHEL 9.2 Red Hat Enterprise Linux 9.2 is distributed with the kernel version 5.14.0-284.11.1.
Bugzilla:2177782 The 64k page size kernel is now available In addition to the RHEL 9 for ARM kernel which supports 4k pages, Red Hat now offers an optional kernel package that supports 64k pages: kernel-64k . The 64k page size kernel is a useful option for large datasets on ARM platforms. It enables better performance for some types of memory- and CPU-intensive operations. You must choose page size on 64-bit ARM architecture systems at the time of installation. You can install kernel-64k only by Kickstart by adding the kernel-64k package to the package list in the Kickstart file. For more information on installing kernel-64k , see Automatically installing RHEL . Bugzilla:2153073 virtiofs support for kexec-tools enabled This enhancement adds the virtiofs feature for kexec-tools by introducing the new option, virtiofs myfs , where myfs is a variable tag name to set in the qemu command line, for example, -device vhost-user-fs-pci,tag=myfs The virtiofs file system implements a driver that allows a guest to mount a directory that has been exported on the host. By using this enhancement, you can save the virtual machine's vmcore dump file to: A virtiofs shared directory. The sub-directory, such as /var/crash , when the root file system is a virtiofs shared directory. A different virtiofs shared directory, when the virtual machine's root file system is a virtiofs shared directory. Bugzilla:2085347 The kexec-tools package now adds improvements on remote kdump targets With this enhancement, the kexec-tools package adds significant bug fixes and enhancements. The most notable changes include: Optimized memory consumption for kdump by enabling only the required network interfaces. Improved network efficiency for kdump in events of connection timeout failures. The default wait time for a network to establish is 10 minutes maximum. This removes the need to pass dracut parameters, such as rd.net.timeout.carrier or rd.net.timeout.dhcp as a workaround to identify a carrier. Bugzilla:2076416 BPF rebased to version 6.0 The Berkeley Packet Filter (BPF) facility has been rebased to Linux kernel version 6.0 with multiple enhancements. This update enables all the BPF features that depend on the BPF Type Format (BTF) for kernel modules. Such features include the usage of BPF trampolines for tracing, the availability of the Compile Once - Run Everywhere (CO-RE) mechanism, and several networking-related features. Furthermore, the kernel modules now contain debugging information, which means that you no longer need to install debuginfo packages to inspect the running modules. For more information on the complete list of BPF features available in the running kernel, use the bpftool feature command. Jira:RHELPLAN-133650 The rtla meta-tool adds the osnoise and timerlat tracers for improved tracing capabilities The Real-Time Linux Analysis ( rtla ) is a meta-tool that includes a set of commands that analyze the real-time properties of Linux. rtla leverages kernel tracing capabilities to provide precise information about the properties and root causes of unexpected system results. rtla currently adds support for osnoise and timerlat tracer commands: The osnoise tracer reports information about operating system noise. The timerlat tracer periodically prints the timer latency at the timer IRQ handler and the thread handler. Note that to use the timerlat feature of rtla , you must disable admission control by using the sysctl -w kernel.sched_rt_runtime_us=-1 script. 
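For example, after disabling admission control as noted above, you can sample both tracers for a minute. This is a sketch; the CPU list and duration are illustrative:

# Allow real-time tasks to run unthrottled, as required by the timerlat tracer.
sysctl -w kernel.sched_rt_runtime_us=-1

# Summarize operating system noise per CPU for 60 seconds.
rtla osnoise top -c 1-3 -d 1m

# Build a histogram of timer IRQ and thread latency on the same CPUs.
rtla timerlat hist -c 1-3 -d 1m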
Bugzilla:2075216 The argparse module of Tuna now supports configuring CPU sockets With this enhancement, you can specify a specific CPU socket when you have multiple CPU sockets. You can view the help usage by using the -h on a subcommand, for example, tuna show_threads -h . To configure a specific CPU socket, specify the -S option with each tuna command where you need to use CPU sockets: For example, use tuna show_threads -S 2,3 to view the threads or tuna show_irqs -S 2,3 to view attached interrupt requests (IRQs). As a result, this enhancement facilitates CPU usage based on CPU sockets without the need to specify each CPU individually. Bugzilla:2122781 The output format for cgroups and irqs in Tuna is improved to provide better readability With this enhancement, the tuna show_threads command output for the cgroup utility is now structured based on the terminal size. You can also configure additional spacing to the cgroups output by adding the new -z or --spaced option to the show_threads command. As a result, the cgroups output now has an improved readable format that is adaptable to your terminal size. Bugzilla:2121517 A new command line interface has been added to the tuna tool in real-time This enhancement adds a new command line interface to the tuna tool, which is based on the argparse parsing module. With this update, you can now perform the following tasks: Change the attributes of the application and kernel threads. Operate on interrupt requests (IRQs) by name or number. Operate on tasks or threads by using the process identifier. Specify CPUs and sets of CPUs with the CPU or the socket number. By using the tuna -h command, you can print the command line arguments and their corresponding options. For each command, there are optional arguments, which you can view with the tuna <command> -h command. As a result, tuna now provides an interface with a more standardized menu of commands and options that is easier to use and maintain than the command line interface. Bugzilla:2062865 The rteval command output now includes the program loads and measurement threads information The rteval command now displays a report summary with the number of program loads, measurement threads, and the corresponding CPU that ran these threads. This information helps to evaluate the performance of a real-time kernel under load on specific hardware platforms. The rteval report is written to an XML file along with the boot log for the system and saved to the rteval-<date>-N-tar.bz2 compressed file. The date specifies the report generation date and N is the counter for the Nth run. To generate an rteval report, enter the following command: Bugzilla:2081325 The -W and --bucket-width options has been added to the oslat program to measure latency With this enhancement, you can specify a latency range for a single bucket at nanoseconds accuracy. Widths that are not multiples of 1000 nanoseconds indicate nanosecond precision. By using the new options, -W or --bucket-width , you can modify the latency interval between buckets to measure latency within sub-microseconds delay time. For example to set a latency bucket width of 100 nanoseconds for 32 buckets over a duration of 10 seconds to run on CPU range of 1-4 and omit zero bucket size, run the following command: Note that before using the option, you must determine what level of precision is significant in relation to the error measurement. 
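The command described above could look like the following sketch (exact option spellings can differ between rt-tests versions):

# 32 buckets of 100 ns each, a 10-second run on CPUs 1-4, omitting empty (zero) buckets.
oslat -W 100 -b 32 -D 10s -c 1-4 -z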
Bugzilla:2041637 The NVMe/FC transport protocol enabled as the kdump storage target The kdump mechanism now provides the support for Nonvolatile Memory Express (NVMe) over Fibre Channel (NVMe/FC) protocol as the dump target. With this update, you can configure kdump to save kernel crash dump files on NVMe/FC storage targets. As a result, kdump can capture and save the vmcore file on NVMe/FC in the event of a kernel crash without timeout or reconnect errors. For more information on NVMe/FC configuration, see Managing storage devices Bugzilla:2080110 The crash-utility tool has been rebased to version 8.0.2 The crash-utility , which analyzes an active system state or after a kernel crash, has been rebased to version 8.0.2. The notable change includes adding support for multiqueue(blk-mq) devices. By using the dev -d or dev -D command, you can display the disk I/O statistics for multiqueue(blk-mq) devices. Bugzilla:2119685 openssl-ibmca rebased to version 2.3.1 The dynamic OpenSSL engine and provider for IBMCA on 64-bit IBM Z architecture have been rebased to upstream version 2.3.1. Users of RHEL 9 are recommended to use the OpenSSL provider to ensure compatibility with future updates of OpenSSL. The engine functionality has been deprecated in OpenSSL version 3. Bugzilla:2110378 Secure Execution guest dump encryption with customer keys This new feature allows hypervisor-initiated dumps for Secure Execution guests to collect kernel crash information from KVM in scenarios in which the kdump utility does not work. Note that hypervisor-initiated dumps for Secure Execution is designed for the IBM Z Series z16 and LinuxONE Emperor 4 hardware. Bugzilla:2044204 The TSN protocol for real-time has been enabled on the ADL-S platform With this enhancement, the IEEE Time Sensitive Networking (TSN) specification enables time synchronization and deterministic processing of real-time workloads over the network on Intel Alder Lake S (ADL-S) platform. It supports the following network devices: A discrete 2.5GbE MAC-PHY combo with TSN support: Intel(R) i225/i226 An integrated 2.5GbE MAC in the SOC with 3rd party PHY chips from Marvell, Maxlinear and TI covering the 1GbE and 2.5Gbe speed, is available on select skus and SOCs. With the TSN protocol, you can manage deterministic applications scheduling, preemption, and accurate time synchronization type workloads in embedded implementations. These implementations need dedicated, specialized, and proprietary networks, while workloads run on standard Ethernet, Wi-Fi, and 5G networks. As a result, TSN provides improved capabilities for: Hardware: Intel based systems used for implementing real-time workloads in IoT Deterministic and time sensitive applications Bugzilla:2100606 The Intel ice driver rebased to version 6.0.0 The Intel ice driver has been upgraded to upstream version 6.0.0, which provides a number of enhancements and bug fixes over versions. The notable enhancements include: Point-to-Point Protocol over Ethernet ( PPPoE ) protocol hardware offload Inter-Integrated Circuit ( I2C ) protocol write command VLAN Tag Protocol Identifier ( TPID ) filters in the Ethernet switch device driver model ( switchdev ) Double VLAN tagging in switchdev Bugzilla:2104468 Option to write data for gnss module is now available This update provides the option of writing data to the gnss receiver. Previously, gnss was not fully configurable. With this enhancement, all gnss functions are now available. 
Bugzilla:2111048 Hosting Secure Boot certificates for IBM zSystems Starting with IBM z16 A02/AGZ and LinuxONE Rockhopper 4 LA2/AGL, you can manage certificates used to validate Linux kernels when starting the system with Secure Boot enabled on the Hardware Management Console (HMC). Notably: You can load certificates in a system certificate store using the HMC in DPM and classic mode from an FTP server that can be accessed by the HMC. It is also possible to load certificates from a USB device attached to the HMC. You can associate certificates stored in the certificate store with an LPAR partition. Multiple certificates can be associated with a partition and a certificate can be associated with multiple partitions. You can de-associate certificates in the certificate store from a partition by using HMC interfaces. You can remove certificates from the certificate store. You can associate up to 20 certificates with a partition. The built-in firmware certificates are still available. In particular, as soon as you use the user-managed certificate store, the built-in certificates will no longer be available. Certificate files loaded into the certificate store must meet the following requirements: They have the PEM- or DER-encoded X.509v3 format and one of the following filename extensions: .pem , .cer , .crt , or .der . They are not expired. The key usage attribute must be Digital Signature . The extended key usage attribute must contain Code Signing . A firmware interface allows a Linux kernel running in a logical partition to load the certificates associated with this partition. Linux on IBM Z stores these certificates in the .platform keyring, allowing the Linux kernel to verify kexec kernels and third party kernel modules to be verified using certificates associated with that partition. It is the responsibility of the operator to only upload verified certificates and to remove certificates that have been revoked. Note The Red Hat Secureboot 302 certificate that you need to load into the HMC is available at Product Signing Keys . Bugzilla:2190123 zipl support for Secure Boot IPL and dump on 64-bit IBM Z With this update, the zipl utility supports List-Directed IPL and List-Directed dump from Extended Count Key Data (ECKD) Direct Access Storage Devices (DASD) on the 64-bit IBM Z architecture. As a result, Secure Boot for RHEL on IBM Z also works with the ECKD type of DASDs. Bugzilla:2044200 rtla rebased to version 6.6 of the upstream kernel source code The rtla utility has been upgraded to the latest upstream version, which provides multiple bug fixes and enhancements. Notable changes include: Added the -C option to specify additional control groups for rtla threads to run in, apart from the main rtla thread. Added the --house-keeping option to place rtla threads on a housekeeping CPU and to put measurement threads on different CPUs. Added support to the timerlat tracer so that you can run timerlat hist and timerlat top threads in user space. Jira:RHEL-18359 4.9. File systems and storage nvme-cli rebased to version 2.2.1 The nvme-cli packages have been upgraded to version 2.2.1, which provide multiple bug fixes and enhancements. Notable changes include: Added the new nvme show-topology command, which displays the topology of all NVMe subsystems. Dropped the libuuid dependency. The uint128 data fields are displayed correctly. Updated the libnvme dependency to version 1.2. 
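For example, the new topology view can be used on its own or with JSON output for scripting. This is a sketch; JSON output depends on your nvme-cli build:

# Show how controllers, namespaces, and paths relate across all NVMe subsystems.
nvme show-topology

# The same view as JSON, convenient for scripting.
nvme show-topology -o json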
Bugzilla:2139753 libnvme rebased to version 1.2 The libnvme packages have been upgraded to version 1.2, which provide multiple bug fixes and enhancements. The most notable change is a dropped dependency of the libuuid library. Bugzilla:2139752 Stratis enforces consistent block size in pools Stratis now enforces a consistent block size in pools to address potential edge case problems that can occur when mixed block size devices exist within a pool. With this enhancement, users can no longer create a pool or add new devices that have a different block size from the existing devices in the pool. As a result, there is a reduced risk of pool failure. Bugzilla:2039957 Support for existing disk growth within the Stratis pool Previously, when a user added new disks to the RAID array, the size of the RAID array would generally increase. However, in all cases, Stratis ignored the increase in size and continued to use only the space that was available on the RAID array when it was first added to the pool. As a result, Stratis was unable to identify the new device, and users could not increase the size of the pool. With this enhancement, Stratis now identifies any pool device members that have expanded in size. As a result, users can now issue a command to expand the pool based on their requirements. Stratis now supports the growth of existing disks within its pool, in addition to the existing feature of growing the pool by adding new disks. Bugzilla:2039955 Improved functionality of the lvreduce command With this enhancement, when the logical volume (LV) is active, the lvreduce command checks if reducing the LV size would damage any file system present on it. If a file system on the LV requires reduction, and the lvreduce resizefs option has not been enabled, then the LV will not be reduced. Additionally, new options are now available to control the handling of file systems while reducing an LV. These options provide users with greater flexibility and control when using the lvreduce command. Bugzilla:1878893 Direct I/O alignment information for statx was added This update introduces a new mask value, "STATX_DIOALIGN" , to the statx(2) call. When this value is set in the stx_mask field, it requests stx_dio_mem_align and stx_dio_offset_align values, which indicate the required alignment (in bytes) for user memory buffers and file offsets and I/O segment lengths for direct I/O (O_DIRECT) on this file, respectively. If direct I/O is not supported on the file, both values will be 0. This interface is now implemented for block devices as well as for files on the xfs and ext4 filesystems in RHEL9. Bugzilla:2150284 NFSv4.1 session trunking discovery With this update, the client can use multiple connections to the same server and session, resulting in faster data transfer. When an NFS client mounts a multi-homed NFS server with different IP addresses, only one connection is used by default, ignoring the rest. To improve performance, this update adds support for the trunkdiscovery and max_connect mount options, which enable the client to test each connection and associate multiple connections with the same NFSv4.1+ server and session. Bugzilla:2066372 NFS IO sizes can now be set as a multiples of PAGE_SIZE for TCP and RDMA This update allows users to set NFS IO sizes as a multiples of PAGE_SIZE for TCP and RDMA connections. This offers greater flexibility in optimizing NFS performance for some architectures. 
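For example, a mount that requests 1 MiB read and write sizes and enables session trunking could look like the following sketch; the server name, export path, and sizes are illustrative:

# rsize/wsize must be multiples of PAGE_SIZE; 1048576 bytes = 256 x 4 KiB pages.
mount -t nfs -o vers=4.1,trunkdiscovery,max_connect=4,rsize=1048576,wsize=1048576 \
  server.example.com:/export /mnt/data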
Bugzilla:2107347 nfsrahead has been added to RHEL 9 With the introduction of the nfsrahead tool, you can use it to modify the readahead value for NFS mounts, and thus affect the NFS read performance. Bugzilla:2143747 4.10. High availability and clusters New enable-authfile Booth configuration option When you create a Booth configuration to use the Booth ticket manager in a cluster configuration, the pcs booth setup command now enables the new enable-authfile Booth configuration option by default. You can enable this option on an existing cluster with the pcs booth enable-authfile command. Additionally, the pcs status and pcs booth status commands now display warnings when they detect a possible enable-authfile misconfiguration. Bugzilla:2116295 pcs can now run the validate-all action of resource and stonith agents When creating or updating a resource or a STONITH device, you can now specify the --agent-validation option. With this option, pcs uses an agent's validate-all action, when it is available, in addition to the validation done by pcs based on the agent's metadata. Bugzilla:2112270 , Bugzilla:2159454 4.11. Dynamic programming languages, web and database servers Python 3.11 available in RHEL 9 RHEL 9.2 introduces Python 3.11, provided by the new package python3.11 and a suite of packages built for it, as well as the ubi9/python-311 container image. Notable enhancements compared to the previously released Python 3.9 include: Significantly improved performance. Structural Pattern Matching using the new match keyword (similar to switch in other languages). Improved error messages, for example, indicating unclosed parentheses or brackets. Exact line numbers for debugging and other use cases. Support for defining context managers across multiple lines by enclosing the definitions in parentheses. Various new features related to type hints and the typing module, such as the new X | Y type union operator, variadic generics, and the new Self type. Precise error locations in tracebacks pointing to the expression that caused the error. A new tomllib standard library module which supports parsing TOML. An ability to raise and handle multiple unrelated exceptions simultaneously using Exception Groups and the new except* syntax. Python 3.11 and packages built for it can be installed in parallel with Python 3.9 on the same system. To install packages from the python3.11 stack, use, for example: To run the interpreter, use, for example: See Installing and using Python for more information. Note that Python 3.11 will have a shorter life cycle than Python 3.9, which is the default Python implementation in RHEL 9; see Red Hat Enterprise Linux Application Streams Life Cycle . Bugzilla:2127923 nodejs:18 rebased to version 18.14 with npm rebased to version 9 The updated Node.js 18.14 includes a SemVer major upgrade of npm from version 8 to version 9. This update was necessary due to maintenance reasons and may require you to adjust your npm configuration. Notably, auth-related settings that are not scoped to a specific registry are no longer supported. This change was made for security reasons. If you used unscoped authentication configurations, the supplied token was sent to every registry listed in the .npmrc file. If you use unscoped authentication tokens, generate and supply registry-scoped tokens in your .npmrc file. 
If you have configuration lines using _auth , such as //registry.npmjs.org/:_auth in your .npmrc files, replace them with //registry.npmjs.org/:_authToken=USD{NPM_TOKEN} and supply the scoped token that you generated. For a complete list of changes, see the upstream changelog . Bugzilla:2178088 git rebased to version 2.39.1 The Git version control system has been updated to version 2.39.1, which provides bug fixes, enhancements, and performance improvements over the previously released version 2.31. Notable enhancements include: The git log command now supports a format placeholder for the git describe output: git log --format=%(describe) The git commit command now supports the --fixup<commit> option which enables you to fix the content of the commit without changing the log message. With this update, you can also use: The --fixup=amend:<commit> option to change both the message and the content. The --fixup=reword:<commit> option to update only the commit message. You can use the new --reject-shallow option with the git clone command to disable cloning from a shallow repository. The git branch command now supports the --recurse-submodules option. You can now use the git merge-tree command to: Test if two branches can merge. Compute a tree that would result in the merge commit if the branches were merged. You can use the new safe.bareRepository configuration variable to filter out bare repositories. Bugzilla:2139379 git-lfs rebased to version 3.2.0 The Git Large File Storage (LFS) extension has been updated to version 3.2.0, which provides bug fixes, enhancements, and performance improvements over the previously released version 2.13. Notable changes include: Git LFS introduces a pure SSH-based transport protocol. Git LFS now provides a merge driver. The git lfs fsck utility now additionally checks that pointers are canonical and that expected LFS files have the correct format. Support for the NT LAN Manager (NTLM) authentication protocol has been removed. Use Kerberos or Basic authentication instead. Bugzilla:2139383 A new module stream: nginx:1.22 The nginx 1.22 web and proxy server is now available as the nginx:1.22 module stream. This update provides a number of bug fixes, security fixes, new features, and enhancements over the previously released version 1.20. New features: nginx now supports: OpenSSL 3.0 and the SSL_sendfile() function when using OpenSSL 3.0. The PCRE2 library. POP3 and IMAP pipelining in the mail proxy module. nginx now passes the Auth-SSL-Protocol and Auth-SSL-Cipher header lines to the mail proxy authentication server. Enhanced directives: Multiple new directives are now available, such as ssl_conf_command and ssl_reject_handshake . The proxy_cookie_flags directive now supports variables. nginx now supports variables in the following directives: proxy_ssl_certificate , proxy_ssl_certificate_key , grpc_ssl_certificate , grpc_ssl_certificate_key , uwsgi_ssl_certificate , and uwsgi_ssl_certificate_key . The listen directive in the stream module now supports a new fastopen parameter, which enables TCP Fast Open mode for listening sockets. A new max_errors directive has been added to the mail proxy module. Other changes: nginx now always returns an error if: The CONNECT method is used. Both Content-Length and Transfer-Encoding headers are specified in the request. The request header name contains spaces or control characters. The Host request header line contains spaces or control characters. 
nginx now blocks all HTTP/1.0 requests that include the Transfer-Encoding header. nginx now establishes HTTP/2 connections using the Application Layer Protocol Negotiation (ALPN) and no longer supports the Protocol Negotiation (NPN) protocol. To install the nginx:1.22 stream, use: For more information, see Setting up and configuring NGINX . For information about the length of support for the nginx module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Bugzilla:2096174 mod_security rebased to version 2.9.6 The mod_security module for the Apache HTTP Server has been updated to version 2.9.6, which provides new features, bug fixes, and security fixes over the previously available version 2.9.3. Notable enhancements include: Adjusted parser activation rules in the modsecurity.conf-recommended file. Enhancements to the way mod_security parses HTTP multipart requests. Added a new MULTIPART_PART_HEADERS collection. Added microsec timestamp resolution to the formatted log timestamp. Added missing Geo Countries. Bugzilla:2143211 New packages: tomcat RHEL 9.2 introduces the Apache Tomcat server version 9. Tomcat is the servlet container that is used in the official Reference Implementation for the Java Servlet and JavaServer Pages technologies. The Java Servlet and JavaServer Pages specifications are developed by Sun under the Java Community Process. Tomcat is developed in an open and participatory environment and released under the Apache Software License version 2.0. Bugzilla:2160511 A new module stream: postgresql:15 RHEL 9.2 introduces PostgreSQL 15 as the postgresql:15 module stream. PostgreSQL 15 provides a number of new features and enhancements over version 13. Notable changes include: You can now access PostgreSQL JSON data by using subscripts. Example query: PostgreSQL now supports multirange data types and extends the range_agg function to aggregate multirange data types. PostgreSQL improves monitoring and observability: You can now track progress of the COPY commands and Write-ahead-log (WAL) activity. PostgreSQL now provides statistics on replication slots. By enabling the compute_query_id parameter, you can now uniquely track a query through several PostgreSQL features, including pg_stat_activity or EXPLAIN VERBOSE . PostgreSQL improves support for query parallelism by the following: Improved performance of parallel sequential scans. The ability of SQL Procedural Language ( PL/pgSQL ) to execute parallel queries when using the RETURN QUERY command. Enabled parallelism in the REFRESH MATERIALIZED VIEW command. PostgreSQL now includes the SQL standard MERGE command. You can use MERGE to write conditional SQL statements that can include the INSERT , UPDATE , and DELETE actions in a single statement. PostgreSQL provides the following new functions for using regular expressions to inspect strings: regexp_count() , regexp_instr() , regexp_like() , and regexp_substr() . PostgreSQL adds the security_invoker parameter, which you can use to query data with the permissions of the view caller, not the view creator. This helps you ensure that view callers have the correct permissions for working with the underlying data. PostgreSQL improves performance, namely in its archiving and backup facilities. PostgreSQL adds support for the LZ4 and Zstandard ( zstd ) lossless compression algorithms. PostgreSQL improves its in-memory and on-disk sorting algorithms. 
The updated postgresql.service systemd unit file now ensures that the postgresql service is started after the network is up. The following changes are backwards incompatible: The default permissions of the public schema have been modified. Newly created users need to grant permission explicitly by using the GRANT ALL ON SCHEMA public TO myuser; command. For example: The libpq PQsendQuery() function is no longer supported in pipeline mode. Modify affected applications to use the PQsendQueryParams() function instead. See also Using PostgreSQL . To install the postgresql:15 stream, use: If you want to upgrade from an earlier postgresql stream within RHEL 9, migrate your PostgreSQL data as described in Migrating to a RHEL 9 version of PostgreSQL . For information about the length of support for the postgresql module streams, see the Red Hat Enterprise Linux Application Streams Life Cycle . Bugzilla:2128410 4.12. Compilers and development tools openblas rebased to version 0.3.21 The OpenBLAS library has been updated to version 0.3.21. This update includes performance optimization patches for the IBM POWER10 platform. Bugzilla:2112099 A new module stream: swig:4.1 RHEL 9.2 introduces the Simplified Wrapper and Interface Generator (SWIG) version 4.1 as the swig:4.1 module stream available in the CodeReady Linux Builder (CRB) repository. Note that packages included in the CodeReady Linux Builder repository are unsupported. Compared to SWIG 4.0 released in RHEL 9.0, SWIG 4.1 : Adds support for Node.js versions 12 to 18 and removes support for Node.js versions earlier than 6. Adds support for PHP 8 . Handles PHP wrapping entirely through PHP C API and no longer generates a .php wrapper by default. Supports only Perl 5.8.0 and later versions. Adds support for Python versions 3.9 to 3.11. Supports only Python 3.3 and later Python 3 versions, and Python 2.7 . Provides fixes for various memory leaks in Python -generated code. Improves support for the C99, C++11, C++14, and C++17 standards and starts implementing the C++20 standard. Adds support for the C++ std::unique_ptr pointer class. Includes several minor improvements in C++ template handling. Fixes C++ declaration usage in various cases. To install the swig:4.1 module stream: Enable the CodeReady Linux Builder (CRB) repository . Install the module stream: Bugzilla:2139101 New package: jmc in the CRB repository RHEL 9.2 introduces the JDK Mission Control (JMC) profiler for HotSpot JVMs version 8.2.0, available as the jmc package in the CodeReady Linux Builder (CRB) repository for the AMD and Intel 64-bit architectures. To install JMC, you must first enable the CodeReady Linux Builder (CRB) repository . Note that packages included in the CRB repository are unsupported. Bugzilla:2122401 OpenJDK service attributes now available in FIPS mode Previously, cryptographic services and algorithms available for OpenJDK in FIPS mode were too strictly filtered and resulted in unavailable service attributes. With this enhancement, these service attributes are now available in FIPS mode. Bugzilla:2186803 Performance Co-Pilot rebased to version 6.0 Performance Co-Pilot ( PCP ) has been updated to version 6.0. Notable improvements include: Version 3 PCP archive support: This includes support for instance domain change-deltas, Y2038-safe timestamps, nanosecond-precision timestamps, arbitrary timezones support, and 64-bit file offsets used throughout for larger (beyond 2GB) individual volumes.
This feature is currently opt-in via the PCP_ARCHIVE_VERSION setting in the /etc/pcp.conf file. Version 2 archives remain the default. Only OpenSSL is used throughout PCP. Mozilla NSS/NSPR use has been dropped: This impacts libpcp , PMAPI clients and PMCD use of encryption. These elements are now configured and used consistently with pmproxy HTTPS support and redis-server , which were both already using OpenSSL. New nanosecond precision timestamp PMAPI calls for PCP library interfaces that make use of timestamps. These are all optional, and full backward compatibility is preserved for existing tools. The following tools and services have been updated: pcp2elasticsearch Implemented authentication support. pcp-dstat Implemented support for the top-alike plugins. pcp-htop Updated to the latest stable upstream release. pmseries Added sum , avg , stdev , nth_percentile , max_inst , max_sample , min_inst and min_sample functions. pmdabpf Added CO-RE (Compile Once - Run Everywhere) modules and support for AMD64, Intel 64-bit, 64-bit ARM, and IBM Power Systems. pmdabpftrace Moved example autostart scripts to the /usr/share directory. pmdadenki Added support for multiple active batteries. pmdalinux Updates for the latest /proc/net/netstat changes. pmdaopenvswitch Added additional interface and coverage statistics. pmproxy Request parameters can now be sent in the request body. pmieconf Added several pmie rules for Open vSwitch metrics. pmlogger_farm Added a default configuration file for farm loggers. pmlogger_daily_report Some major efficiency improvements. Bugzilla:2117074 grafana rebased to version 9.0.9 The grafana package has been rebased to version 9.0.9. Notable changes include: The time series panel is now the default visualization option, replacing the graph panel New heatmap panel New Prometheus and Loki query builder Updated Grafana Alerting Multiple UI/UX and performance improvements License changed from Apache 2.0 to GNU Affero General Public License (AGPL) The following are offered as opt-in experimental features: New bar chart panel New state timeline panel New status history panel New histogram panel For more information, see: What's new in Grafana v9.0 and What's new in Grafana v8.0 . Bugzilla:2116847 grafana-pcp rebased to version 5.1.1 The grafana-pcp package has been rebased to version 5.1.1. Notable changes include: Query editor added buttons to disable rate conversion and time utilization conversion. Redis removed the deprecated label_values(metric, label) function. Redis fixed the network error for metrics with many series (requires Performance Co-Pilot v6+). Redis set the pmproxy API timeout to 1 minute. Bugzilla:2116848 Updated GCC Toolset 12 GCC Toolset 12 is a compiler toolset that provides recent versions of development tools. It is available as an Application Stream in the form of a Software Collection in the AppStream repository. Notable changes introduced in RHEL 9.2 include: The GCC compiler has been updated to version 12.2.1, which provides many bug fixes and enhancements that are available in upstream GCC. annobin has been updated to version 11.08. The following tools and versions are provided by GCC Toolset 12: Tool Version GCC 12.2.1 GDB 11.2 binutils 2.38 dwz 0.14 annobin 11.08 To install GCC Toolset 12, run the following command as root: To run a tool from GCC Toolset 12: To run a shell session where tool versions from GCC Toolset 12 override system versions of these tools: For more information, see GCC Toolset 12 . 
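As a quick illustration, the following commands install the toolset and build a single source file with its compiler without changing the system default; the source file name is a placeholder:
dnf install gcc-toolset-12
# Run the toolset gcc for one command only; the system gcc remains untouched.
scl enable gcc-toolset-12 'gcc -O2 -o hello hello.c'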
Bugzilla:2110583 The updated GCC compiler is now available for RHEL 9.2 The system GCC compiler, version 11.3.1, has been updated to include numerous bug fixes and enhancements available in the upstream GCC. The GNU Compiler Collection (GCC) provides tools for developing applications with the C, C++, and Fortran programming languages. For usage information, see Developing C and C++ applications in RHEL 9 . Bugzilla:2117632 LLVM Toolset rebased to version 15.0.7 LLVM Toolset has been updated to version 15.0.7. Notable changes include: The -Wimplicit-function-declaration and -Wimplicit-int warnings are enabled by default in C99 and newer. These warnings will become errors by default in Clang 16 and beyond. Bugzilla:2118567 Rust Toolset rebased to version 1.66.1 Rust Toolset has been updated to version 1.66.1. Notable changes include: The thread::scope API creates a lexical scope in which local variables can be safely borrowed by newly spawned threads, and those threads are all guaranteed to exit before the scope ends. The hint::black_box API adds a barrier to compiler optimization, which is useful for preserving behavior in benchmarks that might otherwise be optimized away. The .await keyword now makes conversions with the IntoFuture trait, similar to the relationship between for and IntoIterator . Generic associated types (GATs) allow traits to include type aliases with generic parameters, enabling new abstractions over both types and lifetimes. A new let - else statement allows binding local variables with conditional pattern matching, executing a divergent else block when the pattern does not match. Labeled blocks allow break statements to jump to the end of the block, optionally including an expression value. rust-analyzer is a new implementation of the Language Server Protocol, enabling Rust support in many editors. This replaces the former rls package, but you might need to adjust your editor configuration to migrate to rust-analyzer . Cargo has a new cargo remove subcommand for removing dependencies from Cargo.toml . Bugzilla:2123900 Go Toolset rebased to version 1.19.6 Go Toolset has been updated to version 1.19.6. Notable changes include: Security fixes to the following packages: crypto/tls mime/multipart net/http path/filepath Bug fixes to: The go command The linker The runtime The crypto/x509 package The net/http package The time package Bugzilla:2175173 The tzdata package now includes the /usr/share/zoneinfo/leap-seconds.list file Previously, the tzdata package only shipped the /usr/share/zoneinfo/leapseconds file. Some applications rely on the alternate format provided by the /usr/share/zoneinfo/leap-seconds.list file and, as a consequence, would experience errors. With this update, the tzdata package now includes both files, supporting applications that rely on either format. Bugzilla:2157982 4.13. Identity Management SSSD support for converting home directories to lowercase With this enhancement, you can now configure SSSD to convert user home directories to lowercase. This helps to integrate better with the case-sensitive nature of the RHEL environment. The override_homedir option in the [nss] section of the /etc/sssd/sssd.conf file now recognizes the %h template value. If you use %h as part of the override_homedir definition, SSSD replaces %h with the user's home directory in lowercase. 
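A minimal /etc/sssd/sssd.conf fragment that applies this template value could look as follows; combine %h with other template options as needed for your environment:
[nss]
override_homedir = %h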
Jira:RHELPLAN-139430 SSSD now supports changing LDAP user passwords with the shadow password policy With this enhancement, if you set ldap_pwd_policy to shadow in the /etc/sssd/sssd.conf file, LDAP users can now change their password stored in LDAP. Previously, password changes were rejected if ldap_pwd_policy was set to shadow as it was not clear if the corresponding shadow LDAP attributes were being updated. Additionally, if the LDAP server cannot update the shadow attributes automatically, set the ldap_chpass_update_last_change option to True in the /etc/sssd/sssd.conf file to indicate to SSSD to update the attribute. Bugzilla:1507035 IdM now supports the min_lifetime parameter With this enhancement, the min_lifetime parameter has been added to the /etc/gssproxy/*.conf file. The min_lifetime parameter triggers the renewal of a service ticket in case its remaining lifetime is lower than this value. By default its value is 15 seconds. For network volume clients such as NFS, to reduce the risk of losing access in case the KDC is momentarily unavailable, set this value to 60 seconds. Bugzilla:2184333 The ipapwpolicy ansible-freeipa module now supports new password policy options With this update, the ipapwpolicy module included in the ansible-freeipa package supports additional libpwquality library options: maxrepeat Specifies the maximum number of the same character in sequence. maxsequence Specifies the maximum length of monotonic character sequences ( abcd ). dictcheck Checks if the password is a dictionary word. usercheck Checks if the password contains the username. If any of the new password policy options are set, the minimum length of passwords is 6 characters. The new password policy settings are applied only to new passwords. In a mixed environment with RHEL 7 and RHEL 8 servers, the new password policy settings are enforced only on servers running on RHEL 8.4 and later. If a user is logged in to an IdM client and the IdM client is communicating with an IdM server running on RHEL 8.3 or earlier, then the new password policy requirements set by the system administrator do not apply. To ensure consistent behavior, upgrade all servers to RHEL 8.4 and later. Jira:RHELPLAN-137416 IdM now supports the ipanetgroup Ansible management module As an Identity Management (IdM) system administrator, you can integrate IdM with NIS domains and netgroups. Using the ipanetgroup ansible-freeipa module, you can achieve the following: You can ensure that an existing IdM netgroup contains specific IdM users, groups, hosts and host groups and nested IdM netgroups. You can ensure that specific IdM users, groups, hosts and host groups and nested IdM netgroups are absent from an existing IdM netgroup. You can ensure that a specific netgroup is present or absent in IdM. Jira:RHELPLAN-137411 New ipaclient_configure_dns_resolver and ipaclient_dns_servers Ansible ipaclient role variables specifying the client's DNS resolver Previously, when using the ansible-freeipa ipaclient role to install an Identity Management (IdM) client, it was not possible to specify the DNS resolver during the installation process. You had to configure the DNS resolver before the installation. With this enhancement, you can specify the DNS resolver when using the ipaclient role to install an IdM client with the ipaclient_configure_dns_resolver and ipaclient_dns_servers variables. 
Consequently, the ipaclient role modifies the resolv.conf file and the NetworkManager and systemd-resolved utilities to configure the DNS resolver on the client in a similar way that the ansible-freeipa ipaserver role does on the IdM server. As a result, configuring DNS when using the ipaclient role to install an IdM client is now more efficient. Note Using the ipa-client-install command-line installer to install an IdM client still requires configuring the DNS resolver before the installation. Jira:RHELPLAN-137406 Using the ipaclient role to install an IdM client with an OTP requires no prior modification of the Ansible controller Previously, the kinit command on the Ansible controller was a prerequisite for obtaining a one-time-password (OTP) for Identity Management (IdM) client deployment. The need to obtain the OTP on the controller was a problem for Red Hat Ansible Automation Platform (AAP), where the krb5-workstation package was not installed by default. With this update, the request for the administrator's TGT is now delegated to the first specified or discovered IdM server. As a result, you can now use an OTP to authorize the installation of an IdM client with no additional modification of the Ansible controller. This simplifies using the ipaclient role with AAP. Jira:RHELPLAN-137403 IdM now enforces the presence of the MS-PAC structure in Kerberos tickets Starting with RHEL 9.2, to increase security, Identity Management (IdM) and MIT Kerberos now enforce the presence of the Privilege Attribute Certificate (MS-PAC) structure in the Kerberos tickets issued by the RHEL IdM Kerberos Distribution Center (KDC). In November 2022, in response to CVE-2022-37967, Microsoft introduced an extended signature that is calculated over the whole MS-PAC structure rather than over the server checksum. Starting with RHEL 9.2, the Kerberos tickets issued by IdM KDC now also contain the extended signature. Note The presence of the extended signature is not yet enforced in IdM. Jira:RHELPLAN-159146 New realm configuration template for KDC enabling FIPS 140-3-compliant key encryption This update provides a new, EXAMPLE.COM , example realm configuration in the /var/kerberos/krb5kdc/kdc.conf file. It brings two changes: The FIPS 140-3-compliant AES HMAC SHA-2 family is added to the list of supported types for key encryption. The encryption type of the KDC master key is switched from AES 256 HMAC SHA-1 to AES 256 HMAC SHA-384 . Warning This update is about standalone MIT realms. Do not change the Kerberos Distribution Center (KDC) configuration in RHEL Identity Management. Using this configuration template is recommended for new realms. The template does not affect any realm already deployed. If you are planning to upgrade the configuration of your realm according to the template, consider the following points: For upgrading the master key, changing the setting in the KDC configuration is not enough. Follow the process described in the MIT Kerberos documentation: https://web.mit.edu/kerberos/krb5-1.20/doc/admin/database.html#updating-the-master-key Adding the AES HMAC SHA-2 family to the supported types for key encryption is safe at any point because it does not affect existing entries in the KDC. Keys will be generated only when creating new principals or when renewing credentials. Note that keys of this new type cannot be generated based on existing keys. 
To make these new encryption types available for a certain principal, its credentials have to be renewed, which means renewing keytabs for service principals too. The only case where principals should not feature an AES HMAC SHA-2 key is the Active Directory (AD) cross-realm ticket-granting ticket (TGT) ones. Because AD does not implement RFC8009, it does not use the AES HMAC SHA-2 encryption types family. Therefore, a cross-realm TGS-REQ using an AES HMAC SHA-2 -encrypted cross-realm TGT would fail. The best way to keep the MIT Kerberos client from using AES HMAC SHA-2 against AD is to not provide AES HMAC SHA-2 keys for the AD cross-realm principals. To do so, ensure that you create the cross-realm TGT entries with an explicit list of key encryption types that are all supported by AD: To ensure the MIT Kerberos clients use the AES HMAC SHA-2 encryption types, you must also set these encryption types as permitted in both the client and the KDC configuration. On RHEL, this setting is managed by the crypto-policy system. For example, on RHEL 9, hosts using the DEFAULT crypto-policy allow AES HMAC SHA-2 and AES HMAC SHA-1 encrypted tickets, while hosts using the FIPS crypto-policy only accept AES HMAC SHA-2 ones. Bugzilla:2068535 Configure pam_pwhistory using a configuration file With this update, you can configure the pam_pwhistory module in the /etc/security/pwhistory.conf configuration file. The pam_pwhistory module saves the last password for each user in order to manage password change history. Support has also been added in authselect which allows you to add the pam_pwhistory module to the PAM stack. Bugzilla:2126640 , Bugzilla:2142805 IdM now supports new Active Directory certificate mapping templates Active Directory (AD) domain administrators can manually map certificates to a user in AD using the altSecurityIdentities attribute. There are six supported values for this attribute, though three mappings are now considered insecure. As part of the May 10, 2022 security update, once this update is installed on a domain controller, all devices are in compatibility mode. If a certificate is weakly mapped to a user, authentication occurs as expected but a warning message is logged identifying the certificates that are not compatible with full enforcement mode. As of November 14, 2023 or later, all devices will be updated to full enforcement mode and if a certificate fails the strong mapping criteria, authentication will be denied. IdM now supports the new mapping templates, making it easier for an AD administrator to use the new rules and not maintain both. IdM now supports the following new mapping templates : Serial Number: LDAPU1:(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<SR>{serial_number!hex_ur}) Subject Key Id: LDAPU1:(altSecurityIdentities=X509:<SKI>{subject_key_id!hex_u}) User SID: LDAPU1:(objectsid={sid}) If you do not want to reissue certificates with the new SID extension, you can create a manual mapping by adding the appropriate mapping string to a user's altSecurityIdentities attribute in AD. Bugzilla:2087247 samba rebased to version 4.17.5 The samba packages have been upgraded to upstream version 4.17.5, which provides bug fixes and enhancements over the previous version. The most notable changes: Security improvements in previous releases impacted the performance of the Server Message Block (SMB) server for high metadata workloads. This update improves the performance in this scenario.
The --json option was added to the smbstatus utility to display detailed status information in JSON format. The samba.smb.conf and samba.samba3.smb.conf modules have been added to the smbconf Python API. You can use them in Python programs to read and, optionally, write the Samba configuration natively. Note that the server message block version 1 (SMB1) protocol is deprecated since Samba 4.11 and will be removed in a future release. Back up the database files before starting Samba. When the smbd , nmbd , or winbind services start, Samba automatically updates its tdb database files. Red Hat does not support downgrading tdb database files. After updating Samba, use the testparm utility to verify the /etc/samba/smb.conf file. For further information about notable changes, read the upstream release notes before updating. Bugzilla:2131993 ipa-client-install now supports authentication with PKINIT Previously, the ipa-client-install supported only password based authentication. This update provides support to ipa-client-install for authentication with PKINIT. For example: To use the PKINIT authentication, you must establish trust between IdM and the CA chain of the PKINIT certificate. For more information see the ipa-cacert-manage(1) man page. Also, the certificate identity mapping rules must map the PKINIT certificate of the host to a principal that has permission to add or modify a host record. For more information see the ipa certmaprule-add man page. Bugzilla:2143224 Red Hat IdM and Certificate System now support the EST protocol Enrollment over Secure Transport (EST) is a new Certificate System subsystem feature that is specified in RFC 7030 and it is used to provision certificates from a Certificate Authority (CA). EST implements the server side of the operation, such as /getcacerts , /simpleenroll , and /simplereenroll . Note that Red Hat supports both EST and the original Simple Certificate Enrollment Protocol (SCEP) in Certificate System. Bugzilla:1849834 Enhance negative cache usage This update improves the SSSD performance for lookups by Security Identifier (SID). It now stores non-existing SIDs in the negative cache for individual domains and requests the domain that the SID belongs to. Bugzilla:1766490 Directory server now supports ECDSA private keys for TLS Previously, you could not use cryptographic algorithms that are stronger than RSA to secure Directory Server connections. With this enhancement, Directory Server now supports both ECDSA and RSA keys. Bugzilla:2096795 Directory Server now supports extended logging of search operations Previously, records in the access log did not show why some search operations had a very big etime value. With this release, you can enable logging of statistics such as a number of index lookups (database read operations) and overall duration of index lookups per each search operation. These statistical records can help to analyze why the etime value can be so resource expensive. Bugzilla:1859271 The NUNC_STANS error logging level was replaced by the new 1048576 logging level Previously, you could not easily debug password policy issues. With the new 1048576 logging level for the error log, you can now check the following password policy information: Which local policy rejects or allows a password update. The exact syntax violation. Bugzilla:2057070 Directory Server introduces the security log To properly track issues over time, Directory Server now has a specialized log that maintains security data. 
The security log does not rotate quickly and consumes less disk resources in comparison to the access log that has all the information, but requires expensive parsing to get the security data. The new server log records security events such as authentication events, authorization issues, DoS/TCP attacks, and other events. Directory Server stores the security log in the /var/log/dirsrv/slapd- instance_name / directory along with other log files. Bugzilla:2093981 Directory Server now can compress archived log files Previously, archived log files were not compressed. With this release, you can enable access, error, audit, audit fail log, security log files compression to save disk space. Note that only security log file compression is enabled by default. Use the following new configuration attributes in the cn=config entry to manage the compression: nsslapd-accesslog-compress for the access log nsslapd-errorlog-compress for the error log nsslapd-auditlog-compress for the audit log nsslapd-auditfaillog-compress for the audit fail log nsslapd-securelog-compress for the security log Bugzilla:1132524 New pamModuleIsThreadSafe configuration option is now available When a PAM module is thread-safe, you can improve the PAM authentication throughput and response time of that specific module, by setting the new pamModuleIsThreadSafe configuration option to yes : This configuration applies on the PAM module configuration entry (child of cn=PAM Pass Through Auth,cn=plugins,cn=config ). Use pamModuleIsThreadSafe option in the dse.ldif configuration file or the ldapmodify command. Note that the ldapmodify command requires you to restart the server. Bugzilla:2142639 Directory Server can now import a certificate bundle Previously, when you tried to add a certificate bundle by using the dsconf or dsctl utility, the procedure failed with an error, and the certificate bundle was not imported. Such behavior was caused by the certutil utility that could import only one certificate at a time. With this update, Directory Server works around the issue with the certutil , and a certificate bundle is added successfully. Bugzilla:1878808 Default behavior change: Directory Server now returns a DN in exactly the same spelling as it was added to the database With the new nsslapd-return-original-entrydn parameter under the cn=config entry, you can manage how Directory Server returns the distinguished name (DN) of entries during search operations. By default, the nsslapd-return-original-entrydn parameter is set to on , and Directory Server returns the DN exactly how it was originally added to the database. For example, you added or modified the entry uid=User,ou=PEople,dc=ExaMPlE,DC=COM , and with the setting turned on, Directory Server returns the same spelling of the DN for the entry: uid=User,ou=PEople,dc=ExaMPlE,DC=COM . When the nsslapd-return-original-entrydn parameter is set to off , Directory Server generates the entry DN by putting together a Relative DN (RDN) of the entry and the base DN that is stored in the database suffix configuration under cn=userroot,cn=ldbm database,cn=plugins,cn=config . If you set the base DN to ou=people,dc=example,dc=com , and the nsslapd-return-original-entrydn setting is off , Directory Server returns uid=User,ou=people,dc=example,dc=com during searches and not the spelling of the DN when you added the entry to the database. 
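For example, to switch to the generated-DN behavior, you could set the attribute to off under cn=config by using ldapmodify; the bind DN and server URL below are placeholders:
ldapmodify -x -D "cn=Directory Manager" -W -H ldap://server.example.com <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-return-original-entrydn
nsslapd-return-original-entrydn: off
EOF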
Bugzilla:2075017 MIT Kerberos supports the Ticket and Extended KDC MS-PAC signatures With this update, MIT Kerberos, which is used by Red Hat, implements support for two types of the Privilege Attribute Certificate (PAC) signatures introduced by Microsoft in response to recent CVEs. Specifically, the following signatures are supported: Ticket signature Released in KB4598347 Addressing CVE-2020-17049 , also known as the "Bronze-Bit" attack Extended KDC signature Released in KB5020805 Addressing CVE-2022-37967 See also RHSA-2023:2570 and krb5-1.20.1-6.el9 . Bugzilla:2165827 New nsslapd-auditlog-display-attrs configuration parameter for the Directory Server audit log Previously, the distinguished name (DN) was the only way to identify the target entry in the audit log event. With the new nsslapd-auditlog-display-attrs parameter, you can configure Directory Server to display additional attributes in the audit log, providing more details about the modified entry.. For example, if you set the nsslapd-auditlog-display-attrs parameter to cn , the audit log displays the entry cn attribute in the output. To include all attributes of a modified entry, use an asterisk ( * ) as the parameter value. For more information, see nsslapd-auditlog-display-attrs . Bugzilla:2136610 4.14. Desktop Disable swipe to switch workspaces Previously, swiping up or down with three fingers always switched the workspace on a touch screen. With this release, you can disable the workspace switching. For details, see Disabling swipe to switch workspaces . Bugzilla:2154358 Wayland is now enabled on Aspeed GPUs Previously, the Aspeed GPU driver did not perform well enough to run a Wayland session. To work around that problem, the Wayland session was disabled for Aspeed GPUs. With this release, the driver performance has been significantly improved and the Wayland session is now responsive. As a result, the Wayland session is now enabled on Aspeed GPUs by default. Bugzilla:2131203 Custom right-click menu on the desktop You can now customize the menu that opens when you right-click the desktop background. You can create custom entries in the menu that run arbitrary commands. To customize the menu, see Customizing the right-click menu on the desktop . Bugzilla:2160553 4.15. The web console Certain cryptographic subpolicies are now available in the web console This update of the RHEL web console extends the options in the Change crypto policy dialog. Besides the four system-wide cryptographic policies, you can also apply the following subpolicies through the graphical interface now: DEFAULT:SHA1 is the DEFAULT policy with the SHA-1 algorithm enabled. LEGACY:AD-SUPPORT is the LEGACY policy with less secure settings that improve interoperability for Active Directory services. FIPS:OSPP is the FIPS policy with further restrictions inspired by the Common Criteria for Information Technology Security Evaluation standard. Jira:RHELPLAN-137505 The web console now performs additional steps for binding LUKS-encrypted root volumes to NBDE With this update, the RHEL web console performs additional steps required for binding LUKS-encrypted root volumes to Network-Bound Disk Encryption (NBDE) deployments. After you select an encrypted root file system and a Tang server, you can skip adding the rd.neednet=1 parameter to the kernel command line, installing the clevis-dracut package, and regenerating an initial ramdisk ( initrd ). 
For non-root file systems, the web console now enables the remote-cryptsetup.target and clevis-luks-akspass.path systemd units, installs the clevis-systemd package, and adds the _netdev parameter to the fstab and crypttab configuration files. As a result, you can now use the graphical interface for all Clevis-client configuration steps when creating NBDE deployments for automated unlocking of LUKS-encrypted root volumes. Jira:RHELPLAN-139125 4.16. Red Hat Enterprise Linux system roles Routing rule is able to look up a route table by its name With this update, the rhel-system-roles.network RHEL system role supports looking up a route table by its name when you define a routing rule. This feature provides quick navigation for complex network configurations where you need to have different routing rules for different network segments. Bugzilla:2131293 The network system role supports setting a DNS priority value This enhancement adds the dns_priority parameter to the RHEL network system role. You can set this parameter to a value from -2147483648 to 2147483647 . The default value is 0 . Lower values have a higher priority. Note that negative values cause the system role to exclude other configurations with a greater numeric priority value. Consequently, in presence of at least one negative priority value, the system role uses only DNS servers from connection profiles with the lowest priority value. As a result, you can use the network system role to define the order of DNS servers in different connection profiles. Bugzilla:2133858 New IPsec customization parameters for the vpn RHEL system role Because certain network devices require IPsec customization to work correctly, the following parameters have been added to the vpn RHEL system role: Important Do not change the following parameters without advanced knowledge. Most scenarios do not require their customization. Furthermore, for security reasons, encrypt a value of the shared_key_content parameter by using Ansible Vault. Tunnel parameters: shared_key_content ike esp ikelifetime salifetime retransmit_timeout dpddelay dpdtimeout dpdaction leftupdown Per-host parameters: leftid rightid As a result, you can use the vpn role to configure IPsec connectivity to a wide range of network devices. Bugzilla:2119102 The selinux RHEL system role now supports the local parameter This update of the selinux RHEL system role introduces support for the local parameter. By using this parameter, you can remove only your local policy modifications and preserve the built-in SELinux policy. Bugzilla:2128843 The ha_cluster system role now supports automated execution of the firewall , selinux , and certificate system roles The ha_cluster RHEL system role now supports the following features: Using the firewall and selinux system roles to manage port access To configure the ports of a cluster to run the firewalld and selinux services, you can set the new role variables ha_cluster_manage_firewall and ha_cluster_manage_selinux to true . This configures the cluster to use the firewall and selinux system roles, automating and performing these operations within the ha_cluster system role. If these variables are set to their default value of false , the roles are not performed. With this release, the firewall is no longer configured by default, because it is configured only when ha_cluster_manage_firewall is set to true . 
Using the certificate system role to create a pcsd private key and certificate pair The ha_cluster system role now supports the ha_cluster_pcsd_certificates role variable. Setting this variable passes on its value to the certificate_requests variable of the certificate system role. This provides an alternative method for creating the private key and certificate pair for pcsd . Bugzilla:2130010 The postfix RHEL system role can now use the firewall and selinux RHEL system roles to manage port access With this enhancement, you can automate managing port access by using the new role variables postfix_manage_firewall and postfix_manage_selinux : If they are set to true , each role is used to manage the port access. If they are set to false , which is the default, the roles do not engage. Bugzilla:2130329 The vpn RHEL system role can now use the firewall and selinux roles to manage port access With this enhancement, you can automate managing port access in the vpn RHEL system role through the firewall and selinux roles. If you set the new role variables vpn_manage_firewall and vpn_manage_selinux to true , the roles manage port access. Bugzilla:2130344 The logging RHEL system role now supports port access and generation of the certificates With this enhancement, you can use the logging role to manage port access and generate certificates with new role variables. If you set the new role variables logging_manage_firewall and logging_manage_selinux to true , the roles manage port access. The new role variable for generating certificates is logging_certificates . The type and usage are the same as the certificate role certificate_requests . You can now automate these operations directly by using the logging role. Bugzilla:2130357 The metrics RHEL system role now can use the firewall role and the selinux role to manage port access With this enhancement, you can control access to ports. If you set the new role variables metrics_manage_firewall and metrics_manage_selinux to true , the roles manage port access. You can now automate and perform these operations directly by using the metrics role. Bugzilla:2133528 The nbde_server RHEL system role now can use the firewall and selinux roles to manage port access With this enhancement, you can use the firewall and selinux roles to manage port access. If you set the new role variables nbde_server_manage_firewall and nbde_server_manage_selinux to true , the roles manage port access. You can now automate these operations directly by using the nbde_server role. Bugzilla:2133930 The initscripts network provider supports route metric configuration of the default gateway With this update, you can use the initscripts network provider in the rhel-system-roles.network RHEL system role to configure the route metric of the default gateway. The reasons for such a configuration could be: Distributing the traffic load across the different paths Specifying primary routes and backup routes Leveraging routing policies to send traffic to specific destinations through specific paths Bugzilla:2134202 The cockpit RHEL system role integration with the firewall , selinux , and certificate roles This enhancement enables you to integrate the cockpit role with the firewall role and the selinux role to manage port access and the certificate role to generate certificates. To control the port access, use the new cockpit_manage_firewall and cockpit_manage_selinux variables. Both variables are set to false by default and are not executed.
Set them to true to allow the firewall and selinux roles to manage the RHEL web console service port access. The operations will then be executed within the cockpit role. Note that you are responsible for managing port access for firewall and SELinux. To generate certificates, use the new cockpit_certificates variable. The variable is set to false by default and is not executed. You can use this variable the same way you would use the certificate_request variable in the certificate role. The cockpit role will then use the certificate role to manage the RHEL web console certificates. Bugzilla:2137663 New RHEL system role for direct integration with Active Directory The new rhel-system-roles.ad_integration RHEL system role was added to the rhel-system-roles package. As a result, administrators can now automate direct integration of a RHEL system with an Active Directory domain. Bugzilla:2140795 New Ansible Role for Red Hat Insights and subscription management The rhel-system-roles package now includes the remote host configuration ( rhc ) system role. This role enables administrators to easily register RHEL systems to Red Hat Subscription Management (RHSM) and Satellite servers. By default, when you register a system by using the rhc system role, the system connects to Red Hat Insights. With the new rhc system role, administrators can now automate the following tasks on the managed nodes: Configure the connection to Red Hat Insights, including automatic update, remediations, and tags for the system. Enable and disable repositories. Configure the proxy to use for the connection. Set the release of the system. For more information about how to automate these tasks, see Using the RHC system role to register the system . Bugzilla:2141330 Added support for the cloned MAC address Cloned MAC address is the MAC address of the device WAN port which is the same as the MAC address of the machine. With this update, users can specify the bonding or bridge interface with the MAC address or the strategy such as random or preserve to get the default MAC address for the bonding or bridge interface. Bugzilla:2143768 Microsoft SQL Server Ansible role supports asynchronous high availability replicas Previously, Microsoft SQL Server Ansible role supported only primary, synchronous, and witness high availability replicas. Now, you can set the mssql_ha_replica_type variable to asynchronous to configure it with asynchronous replica type for a new or existing replica. Bugzilla:2151282 Microsoft SQL Server Ansible role supports the read-scale cluster type Previously, Microsoft SQL Ansible role supported only the external cluster type. Now, you can configure the role with a new variable mssql_ha_ag_cluster_type . The default value is external , use it to configure the cluster with Pacemaker. To configure the cluster without Pacemaker, use the value none for that variable. Bugzilla:2151283 Microsoft SQL Server Ansible role can generate TLS certificates Previously, you needed to generate a TLS certificate and a private key on the nodes manually before configuring the Microsoft SQL Ansible role. With this update, the Microsoft SQL Server Ansible role can use the redhat.rhel_system_roles.certificate role for that purpose. Now, you can set the mssql_tls_certificates variable in the format of the certificate_requests variable of the certificate role to generate a TLS certificate and a private key on the node. 
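Put together, a play that uses these variables might look like the following sketch; the host group, certificate values, and the microsoft.sql.server role name are assumptions to adapt to your inventory and installed collection:
- hosts: sqlservers
  vars:
    mssql_ha_ag_cluster_type: external
    mssql_tls_certificates:
      - name: mssql_cert              # passed through to the certificate role
        dns: server.example.com       # placeholder host name
        ca: self-sign
  roles:
    - microsoft.sql.server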
Bugzilla:2151284 Microsoft SQL Server Ansible role supports configuring SQL Server version 2022 Previously, Microsoft SQL Ansible role supported only configuring SQL Server version 2017 and version 2019. This update provides you with the support for SQL Server version 2022 for Microsoft SQL Ansible role. Now, you can set mssql_version value to 2022 for configuring a new SQL Server 2022 or upgrading SQL Server from version 2019 to version 2022. Note that upgrade of an SQL Server from version 2017 to version 2022 is unavailable. Bugzilla:2153428 Microsoft SQL Server Ansible role supports configuration of the Active Directory authentication With this update, the Microsoft SQL Ansible role supports configuration of the Active Directory authentication for an SQL Server. Now, you can configure the Active Directory authentication by setting variables with the mssql_ad_ prefix. Bugzilla:2163709 The journald RHEL system role is now available The journald service collects and stores log data in a centralized database. With this enhancement, you can use the journald system role variables to automate the configuration of the systemd journal, and configure persistent logging by using the Red Hat Ansible Automation Platform. Bugzilla:2165175 The ha_cluster system role now supports quorum device configuration A quorum device acts as a third-party arbitration device for a cluster. A quorum device is recommended for clusters with an even number of nodes. With two-node clusters, the use of a quorum device can better determine which node survives in a split-brain situation. You can now configure a quorum device with the ha_cluster system role, both qdevice for a cluster and qnetd for an arbitration node. Bugzilla:2140804 4.17. Virtualization Hardware cryptographic devices can now be automatically hot-plugged Previously, it was only possible to define cryptographic devices for passthrough if they were present on the host before the mediated device was started. Now, you can define a mediated device matrix that lists all the cryptographic devices that you want to pass through to your virtual machine (VM). As a result, the specified cryptographic devices are automatically passed through to the running VM if they become available later. Also, if the devices become unavailable, they are removed from the VM, but the guest operating system keeps running normally. Bugzilla:1871126 Improved performance for PCI passthrough devices on IBM Z With this update, the PCI passthrough implementation on IBM Z hardware has been enhanced through multiple improvements to I/O handling. As a result, PCI devices passed through to KVM virtual machines (VMs) on IBM Z hosts now have significantly better performance. In addition, ISM devices can now be assigned to VMs on IBM Z hosts. Bugzilla:1871143 New package: passt This update adds the passt package, which makes it possible to use the passt user-mode networking back end for virtual machines. For more information on using passt , see Configuring the passt user-space connection . Bugzilla:2131015 zPCI device assignment It is now possible to attach zPCI devices as pass-through devices to virtual machines (VMs) hosted on RHEL running on IBM Z hardware. For example, this enables the use of NVMe flash drives in VMs. Jira:RHELPLAN-59528 New package: python-virt-firmware This update adds the python-virt-firmware package, which contains tools for handling Open Virtual Machine Firmware (OVMF) firmware images. 
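For reference, a mediated crypto device is typically attached to a VM through a libvirt hostdev definition similar to the sketch below; the UUID is a placeholder, and the matrix of adapters and domains is configured on the mediated device itself rather than in this XML:
<hostdev mode='subsystem' type='mdev' model='vfio-ap'>
  <source>
    <address uuid='669d9b23-fe1b-4ecb-be08-a2fabca99b71'/>
  </source>
</hostdev>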
You can use these tools, for example, for the following: Printing the content of firmware images Updating the edk2 variables store Handling secure boot key enrollment without booting up the virtual machine in QEMU As a result, these tools make it easier to build OVMF images. Bugzilla:2089785 4.18. Supportability The sos utility is moving to a 4-week update cadence Instead of releasing sos updates with RHEL minor releases, the sos utility release cadence is changing from 6 months to 4 weeks. You can find details about the updates for the sos package in the RPM changelog every 4 weeks or you can read a summary of sos updates in the RHEL Release Notes every 6 months. Bugzilla:2164987 The sos clean command now obfuscates IPv6 addresses Previously, the sos clean command did not obfuscate IPv6 addresses, leaving some customer-sensitive data in the collected sos report. With this update, sos clean detects and obfuscates IPv6 addresses as expected. Bugzilla:2134906 4.19. Containers New podman RHEL System Role is now available Beginning with Podman 4.2, you can use the podman System Role to manage Podman configuration, containers, and systemd services that run Podman containers. Jira:RHELPLAN-118705 Podman now supports events for auditing Beginning with Podman v4.4, you can gather all relevant information about a container directly from a single event and journald entry. To enable Podman auditing, modify the containers.conf configuration file and add the events_container_create_inspect_data=true option to the [engine] section. The data is in JSON format, the same as from the podman container inspect command. For more information, see How to use new container events and auditing features in Podman 4.4 . Jira:RHELPLAN-136602 The container-tools meta-package has been updated The container-tools RPM meta-package, which contains the Podman, Buildah, Skopeo, crun, and runc tools, is now available. This update applies a series of bug fixes and enhancements over the previous version. Notable changes in Podman v4.4 include: Introduce Quadlet, a new systemd-generator that easily creates and maintains systemd services using Podman. A new command, podman network update , has been added, which updates networks for containers and pods. A new command, podman buildx version , has been added, which shows the buildah version. Containers can now have startup healthchecks, allowing a command to be run to ensure the container is fully started before the regular healthcheck is activated. Support a custom DNS server selection using the podman --dns command. Creating and verifying sigstore signatures using Fulcio and Rekor is now available. Improved compatibility with Docker (new options and aliases). Improved Podman's Kubernetes integration - the commands podman kube generate and podman kube play are now available and replace the podman generate kube and podman play kube commands. The podman generate kube and podman play kube commands are still available but it is recommended to use the new podman kube commands. Systemd-managed pods created by the podman kube play command now integrate with sd-notify, using the io.containers.sdnotify annotation (or io.containers.sdnotify/$name for specific containers). Systemd-managed pods created by podman kube play can now be auto-updated, using the io.containers.auto-update annotation (or io.containers.auto-update/$name for specific containers). Podman has been upgraded to version 4.4, for further information about notable changes, see upstream release notes .
Jira:RHELPLAN-136607 Aardvark and Netavark now support custom DNS server selection The Aardvark and Netavark network stack now support custom DNS server selection for containers instead of the default DNS servers on the host. You have two options for specifying the custom DNS server: Add the dns_servers field in the containers.conf configuration file. Use the new --dns Podman option to specify an IP address of the DNS server. The --dns option overrides the values in the container.conf file. Jira:RHELPLAN-138024 Skopeo now supports generating sigstore key pairs You can use the skopeo generate-sigstore-key command to generate a sigstore public/private key pair. For more information, see skopeo-generate-sigstore-key man page. Jira:RHELPLAN-151481 Toolbox is now available With the toolbox utility, you can use the containerized command-line environment without installing troubleshooting tools directly on your system. Toolbox is built on top of Podman and other standard container technologies from OCI. For more information, see toolbx . Jira:RHELPLAN-150266 Container images now have a two-digit tag In RHEL 9.0 and RHEL 9.1, container images had a three-digit tag. Starting from RHEL 9.2, container images now have a two-digit tag. Jira:RHELPLAN-147982 The capability for multiple trusted GPG keys for signing images is available The /etc/containers/policy.json file supports a new keyPaths field which accepts a list of files containing the trusted keys. Because of this, the container images signed with Red Hat's General Availability and Beta GPG keys are now accepted in the default configuration. For example: Jira:RHELPLAN-129327 Podman now supports the pre-execution hooks The root-owned plugin scripts located in the /usr/libexec/podman/pre-exec-hooks and /etc/containers/pre-exec-hooks directories define a fine-control over container operations, especially blocking unauthorized actions. The /etc/containers/podman_preexec_hooks.txt file must be created by an administrator and can be empty. If /etc/containers/podman_preexec_hooks.txt does not exist, the plugin scripts will not be executed. If all plugin scripts return zero value, then the podman command is executed, otherwise, the podman command exits with the inherited exit code. Red Hat recommends using the following naming convention to execute the scripts in the correct order: DDD- plugin_name . lang , for example 010-check-group.py . Note that the plugin scripts are valid at the time of creation. Containers created before plugin scripts are not affected. Bugzilla:2119200 The sigstore signatures are now available Beginning with Podman 4.2, you can use the sigstore format of container image signatures. The sigstore signatures are stored in the container registry together with the container image without the need to have a separate signature server to store image signatures. Jira:RHELPLAN-74672 Toolbox can create RHEL 9 containers Previously, the Toolbox utility only supported RHEL UBI 8 images. With this release, Toolbox now also supports RHEL UBI 9. As a result, you can create a Toolbox container based on RHEL 8 or 9. The following command creates a RHEL container based on the same RHEL release as your host system: Alternatively, you can create a container with a specific RHEL release. For example, to create a container based on RHEL 9.2, use the following command: Bugzilla:2163752 New package: passt This update adds the passt package, which makes it possible to use the pasta rootless networking back end for containers. 
In comparison to the Slirp connection, which is currently used as the default for unprivileged networking by Podman, pasta provides the following enhancements: Improved throughput and better support for IPv6, which includes support for the Neighbor Discovery Protocol (NDP) and for DHCPv6 The ability to configure port forwarding of TCP and UDP ports on IPv6 To use pasta to connect a Podman container, use the --network pasta command-line option. Bugzilla:2209419
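For example, a rootless container could be started with pasta networking as follows. This is an illustrative sketch only; the image and the command run inside it are assumptions, and the UBI 9 image is assumed to provide curl:

podman run --rm --network pasta registry.access.redhat.com/ubi9/ubi curl -sI https://www.redhat.com

Here pasta handles the container's outbound connectivity without requiring root privileges on the host.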
|
[
"[[packages]] name = \"microshift\" version = \"*\" [customizations.services] enabled = [\"microshift\"] [[customizations.firewall.zones]] name = \"trusted\" sources = [\"10.42.0.0/16\", \"169.254.169.1\"]",
"dns-resolver: config: server: - fe80::deef:1%enp1s0",
"vdpa dev vstats show vdpa-a qidx 1 vdpa-a: vdpa-a: queue_type tx received_desc 321812 completed_desc 321812",
"vdpa dev vstats show vdpa-a qidx 16 vdpa-a: queue_type control_vq received_desc 17 completed_desc 17",
"Possible SYN flooding on port <ip_address>:<port>.",
"--- interfaces: - name: eth1.101 type: vlan state: up vlan: base-iface: eth1 id: 101 registration-protocol: mvrp loose-binding: true reorder-headers: true",
"tuna <command> [-S CPU_SOCKET_LIST]",
"rteval --summarize rteval-<date>-N.tar.bz2",
"oslat -b 32 -D 10s -W 100 -z -c 1-4",
"dnf install python3.11 dnf install python3.11-pip",
"python3.11 python3.11 -m pip --help",
"dnf module install nginx:1.22",
"SELECT ('{ \"postgres\": { \"release\": 15 }}'::jsonb)['postgres']['release'];",
"postgres=# CREATE USER mydbuser; postgres=# GRANT ALL ON SCHEMA public TO mydbuser; postgres=# \\c postgres mydbuser postgres=USD CREATE TABLE mytable (id int);",
"dnf module install postgresql:15",
"dnf module install swig:4.1",
"dnf install gcc-toolset-12",
"scl enable gcc-toolset-12 tool",
"scl enable gcc-toolset-12 bash",
"kadmin.local <<EOF add_principal +requires_preauth -e aes256-cts-hmac-sha1-96,aes128-cts-hmac-sha1-96 -pw [password] krbtgt/[MIT realm]@[AD realm] add_principal +requires_preauth -e aes256-cts-hmac-sha1-96,aes128-cts-hmac-sha1-96 -pw [password] krbtgt/[AD realm]@[MIT realm] EOF",
"ipa-client-install --pkinit-identity=FILE:/path/to/cert.pem,/path/to/key.pem --pkinit-anchor=FILE:/path/to/cacerts.pem",
"pamModuleIsThreadSafe: yes",
"\"registry.redhat.io\": [ { \"type\": \"signedBy\", \"keyType\": \"GPGKeys\", \"keyPaths\": [\"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release\", \"/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-beta\"] } ]",
"toolbox create",
"toolbox create --distro rhel --release 9.2"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.2_release_notes/new-features
|
Chapter 4. Role-based access reference
|
Chapter 4. Role-based access reference The operations that an authenticated user is allowed to perform depend on the role (or roles) assigned to that user, as listed in Table 4.1, "Role-based access on Karaf standalone" . Table 4.1. Role-based access on Karaf standalone Operation admin manager viewer Log in/Log out Y Y Y View Help topics Y Y Y Set user preferences Y Y Y Connect Discover and connect to remote integrations Y Y Y Discover and connect to local integrations Y Y Y Camel View all running Camel applications Y Y Y Start, suspend, resume, and delete Camel contexts Y Y Send messages Y Y Add endpoints Y Y View routes, route diagrams, and runtime statistics Y Y Y Start and stop routes Y Y Delete routes Y Y JMX Change attribute values Y Y Select and view attributes in a time-based chart Y Y Y View operations Y Y Y OSGI View bundles, features, packages, services, servers, framework, and configurations Y Y Y Add and delete bundles Y Y Add configurations Y Y Install and uninstall features Y Runtime View system properties, metrics, and threads Y Y Y Logs View logs Y Y Y Additional resources For more information on role-based access control, see Deploying into Apache Karaf .
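As background, on a standard Apache Karaf installation the roles listed in this table are typically assigned to users in the container's etc/users.properties file. The entries below are a hypothetical sketch; the usernames and passwords are illustrative only:

# etc/users.properties  (format: username = password, role1, role2, ...)
jdoe = secret1, admin
ops = secret2, manager
audit = secret3, viewer

A user can carry several roles, in which case the operations allowed are the union of what each role permits.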
| null |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_karaf_standalone/r_fuse-console-security-karaf-user-access
|
6.12. Virtual Machine Resources
|
6.12. Virtual Machine Resources Virtual machine resources are configured differently than other cluster resources. In particular, they are not grouped into service definitions. As of the Red Hat Enterprise Linux 6.2 release, when you configure a virtual machine in a cluster with the ccs command, you can use the --addvm option (rather than the addservice option). This ensures that the vm resource is defined directly under the rm configuration node in the cluster configuration file. A virtual machine resource requires at least a name and a path attribute. The name attribute should match the name of the libvirt domain and the path attribute should specify the directory where the shared virtual machine definitions are stored. Note The path attribute in the cluster configuration file is a path specification or a directory name, not a path to an individual file. If virtual machine definitions are stored in a shared directory named /mnt/vm_defs , the following command will define a virtual machine named guest1 : Running this command adds the following line to the rm configuration node in the cluster.conf file:
|
[
"ccs -h node1.example.com --addvm guest1 path=/mnt/vm_defs",
"<vm name=\"guest1\" path=\"/mnt/vm_defs\"/>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-virt_machine_resources-ccs-ca
|
23.3. I/O Standards
|
23.3. I/O Standards This section describes I/O standards used by ATA and SCSI devices. ATA ATA devices must report appropriate information via the IDENTIFY DEVICE command. ATA devices only report I/O parameters for physical_block_size , logical_block_size , and alignment_offset . The additional I/O hints are outside the scope of the ATA Command Set. SCSI I/O parameter support in Red Hat Enterprise Linux 7 requires at least version 3 of the SCSI Primary Commands (SPC-3) protocol. The kernel will only send an extended inquiry (which gains access to the BLOCK LIMITS VPD page) and READ CAPACITY(16) command to devices which claim compliance with SPC-3. The READ CAPACITY(16) command provides the block sizes and alignment offset: LOGICAL BLOCK LENGTH IN BYTES is used to derive /sys/block/ disk /queue/logical_block_size LOGICAL BLOCKS PER PHYSICAL BLOCK EXPONENT is used to derive /sys/block/ disk /queue/physical_block_size LOWEST ALIGNED LOGICAL BLOCK ADDRESS is used to derive: /sys/block/ disk /alignment_offset /sys/block/ disk / partition /alignment_offset The BLOCK LIMITS VPD page ( 0xb0 ) provides the I/O hints. The kernel uses its OPTIMAL TRANSFER LENGTH GRANULARITY and OPTIMAL TRANSFER LENGTH fields to derive: /sys/block/ disk /queue/minimum_io_size /sys/block/ disk /queue/optimal_io_size The sg3_utils package provides the sg_inq utility, which can be used to access the BLOCK LIMITS VPD page. To do so, run:
|
[
"sg_inq -p 0xb0 disk"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/iolimstandards
|
Chapter 2. Obtaining a manifest file
|
Chapter 2. Obtaining a manifest file You can obtain a subscription manifest in the Subscription Allocations section of Red Hat Subscription Management. After you obtain a subscription allocation, you can download its manifest file and upload it to activate Ansible Automation Platform. To begin, log in to the Red Hat Customer Portal using your administrator user account and follow the procedures in this section. 2.1. Create a subscription allocation Creating a new subscription allocation allows you to set aside subscriptions and entitlements for a system that is currently offline or air-gapped. This is necessary before you can download its manifest and upload it to Ansible Automation Platform. Procedure From the Subscription Allocations page, click New Subscription Allocation . Enter a name for the allocation so that you can find it later. Select Type: Satellite 6.8 as the management application. Click Create . 2.2. Adding subscriptions to a subscription allocation Once an allocation is created, you can add the subscriptions you need for Ansible Automation Platform to run properly. This step is necessary before you can download the manifest and add it to Ansible Automation Platform. Procedure From the Subscription Allocations page, click on the name of the Subscription Allocation to which you would like to add a subscription. Click the Subscriptions tab. Click Add Subscriptions . Enter the number of Ansible Automation Platform Entitlement(s) you plan to add. Click Submit . Verification After your subscription has been accepted, subscription details are displayed. A status of Compliant indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. Otherwise, your status will show as Out of Compliance , indicating you have exceeded the number of hosts in your subscription. Other important information displayed includes the following: Hosts automated Host count automated by the job, which consumes the license count Hosts imported Host count considering all inventory sources (does not impact hosts remaining) Hosts remaining Total host count minus hosts automated 2.3. Downloading a manifest file After an allocation is created and has the appropriate subscriptions on it, you can download the manifest from Red Hat Subscription Management. Procedure From the Subscription Allocations page, click on the name of the Subscription Allocation for which you would like to generate a manifest. Click the Subscriptions tab. Click Export Manifest to download the manifest file. Note The file is saved to your default downloads folder and can now be uploaded to activate Red Hat Ansible Automation Platform .
| null |
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files
|
2. Introduction
|
2. Introduction The following topics are covered in this document: Installation-Related Notes Feature Updates Kernel-Related Updates Driver Updates Technology Previews Resolved Issues Known Issues 2.1. Lifecycle The Red Hat Enterprise Linux 4 Life Cycle is available at: https://www.redhat.com/security/updates/errata/ As previously announced, the release of Red Hat Enterprise Linux 4.8 will mark the beginning of the Production 2 phase of Red Hat Enterprise Linux 4. No new hardware enablement should be expected during this phase. https://www.redhat.com/archives/nahant-list/2008-July/msg00059.html Customers should note that their subscriptions provide access to all currently supported versions of Red Hat Enterprise Linux.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/4.8_release_notes/s1-overview
|
Chapter 4. The Persistence SPI
|
Chapter 4. The Persistence SPI In Red Hat JBoss Data Grid, persistence can be configured to use external (persistent) storage engines. These storage engines complement JBoss Data Grid's default in-memory storage. Persistent external storage provides several benefits: Memory is volatile and a cache store can increase the life span of the information in the cache, which results in improved durability. Using persistent external stores as a caching layer between an application and a custom storage engine provides improved Write-Through functionality. Using a combination of eviction and passivation, only the frequently required information is stored in-memory and other data is stored in the external storage. 4.1. Persistence SPI Benefits The Red Hat JBoss Data Grid implementation of the Persistence SPI offers the following benefits: Alignment with JSR-107 ( http://jcp.org/en/jsr/detail?id=107 ). JBoss Data Grid's CacheWriter and CacheLoader interfaces are similar to the JSR-107 writer and reader. As a result, alignment with JSR-107 provides improved portability for stores across JCache-compliant vendors. Simplified transaction integration. JBoss Data Grid handles locking automatically and so implementations do not have to coordinate concurrent access to the store. Depending on the locking mode, concurrent writes on the same key may not occur. However, implementors should expect operations on the store to originate from multiple threads and add the implementation code accordingly. Reduced serialization, resulting in reduced CPU usage. The new SPI exposes stored entries in a serialized format. If an entry is fetched from persistent storage to be sent remotely, it does not need to be deserialized (when reading from the store) and then serialized again (when writing to the wire). Instead, the entry is written to the wire in the serialized format as fetched from the storage.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/chap-the_persistence_spi
|
20.6. Explanation of Quota Threshold Settings
|
20.6. Explanation of Quota Threshold Settings Table 20.3. Quota thresholds and grace Setting Definition Cluster Threshold The amount of cluster resources available per data center. Cluster Grace The amount of the cluster available for the data center after exhausting the data center's Cluster Threshold. Storage Threshold The amount of storage resources available per data center. Storage Grace The amount of storage available for the data center after exhausting the data center's Storage Threshold. If a quota is set to 100 GB with 20% Grace, then consumers are blocked from using storage after they use 120 GB of storage. If the same quota has a Threshold set at 70%, then consumers receive a warning when they exceed 70 GB of storage consumption (but they remain able to consume storage until they reach 120 GB of storage consumption.) Both "Threshold" and "Grace" are set relative to the quota. "Threshold" may be thought of as the "soft limit", and exceeding it generates a warning. "Grace" may be thought of as the "hard limit", and exceeding it makes it impossible to consume any more storage resources.
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/explanation_of_quota_threshold_settings
|
8.3.6. Backup Site Staffing
|
8.3.6. Backup Site Staffing The problem of staffing a backup site is multi-dimensional. One aspect of the problem is determining the staffing required to run the backup data center for as long as necessary. While a skeleton crew may be able to keep things going for a short period of time, as the disaster drags on, more people will be required to maintain the effort needed to run under the extraordinary circumstances surrounding a disaster. This includes ensuring that personnel have sufficient time off to unwind and possibly travel back to their homes. If the disaster was wide-ranging enough to affect people's homes and families, additional time must be allotted to allow them to manage their own disaster recovery. Temporary lodging near the backup site is necessary, along with the transportation required to get people to and from the backup site and their lodgings. Often a disaster recovery plan includes on-site representative staff from all parts of the organization's user community. This depends on the ability of your organization to operate with a remote data center. If user representatives must work at the backup site, similar accommodations must be made available for them, as well.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-disaster-recovery-staffing
|
B.55. net-snmp
|
B.55. net-snmp B.55.1. RHBA-2010:0901 - net-snmp bug fix update Updated net-snmp packages that resolve several issues are now available for Red Hat Enterprise Linux 6. The net-snmp packages provide various libraries and tools for the Simple Network Management Protocol (SNMP), including an SNMP library, an extensible agent, tools for requesting or setting information from SNMP agents, tools for generating and handling SNMP traps, a version of the netstat command which uses SNMP, and a Tk/Perl MIB browser. Bug Fixes BZ# 652223 The SNMP daemon, snmpd, returned the incorrect value of either "0.1" or "1.3" for sysObjectID. This update fixes the value of this OID so that the correct value, which is "1.3.6.1.4.1.8072.3.2.10", is now returned. BZ# 652551 Under certain conditions, and especially on networks with high traffic, snmpd wrote a lot of "c64 32 bit check failed" and "netsnmp_assert 1 == new_val->high failed" messages to the system log. Although these messages are harmless and not indicative of a serious error, they could potentially fill the system log quickly. This update suppresses these spurious messages in favor of more meaningful and specific error messages, which are written to the system log only once. All users of net-snmp are advised to upgrade to these updated packages, which resolve these issues.
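To check the corrected value on an updated system, you could query the OID directly. This is a hedged example that assumes the net-snmp-utils package is installed and that the agent accepts the public community from localhost:

snmpget -v2c -c public localhost SNMPv2-MIB::sysObjectID.0

The returned OID should correspond to 1.3.6.1.4.1.8072.3.2.10 rather than the truncated values described above.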
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/net-snmp
|
8.5. Viewing Tokens
|
8.5. Viewing Tokens To view a list of the tokens currently installed for a Certificate System instance, use the modutil utility. Change to the instance alias directory. For example: Show information about the installed PKCS #11 modules, as well as information about the corresponding tokens, using the modutil tool.
|
[
"cd /var/lib/pki/pki-tomcat/alias",
"modutil -dbdir . -nocertdb -list"
] |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/viewing_tokens
|
Data Grid downloads
|
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
| null |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_memcached_protocol_endpoint_with_data_grid/rhdg-downloads_datagrid
|
Operator Guide
|
Operator Guide Red Hat build of Keycloak 26.0 Red Hat Customer Content Services
|
[
"apiVersion: apps/v1 kind: StatefulSet metadata: name: postgresql-db spec: serviceName: postgresql-db-service selector: matchLabels: app: postgresql-db replicas: 1 template: metadata: labels: app: postgresql-db spec: containers: - name: postgresql-db image: postgres:15 volumeMounts: - mountPath: /data name: cache-volume env: - name: POSTGRES_USER value: testuser - name: POSTGRES_PASSWORD value: testpassword - name: PGDATA value: /data/pgdata - name: POSTGRES_DB value: keycloak volumes: - name: cache-volume emptyDir: {} --- apiVersion: v1 kind: Service metadata: name: postgres-db spec: selector: app: postgresql-db type: LoadBalancer ports: - port: 5432 targetPort: 5432",
"apply -f example-postgres.yaml",
"openssl req -subj '/CN=test.keycloak.org/O=Test Keycloak./C=US' -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem",
"create secret tls example-tls-secret --cert certificate.pem --key key.pem",
"create secret generic keycloak-db-secret --from-literal=username=[your_database_username] --from-literal=password=[your_database_password]",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 db: vendor: postgres host: postgres-db usernameSecret: name: keycloak-db-secret key: username passwordSecret: name: keycloak-db-secret key: password http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org proxy: headers: xforwarded # double check your reverse proxy sets and overwrites the X-Forwarded-* headers",
"apply -f example-kc.yaml",
"get keycloaks/example-kc -o go-template='{{range .status.conditions}}CONDITION: {{.type}}{{\"\\n\"}} STATUS: {{.status}}{{\"\\n\"}} MESSAGE: {{.message}}{{\"\\n\"}}{{end}}'",
"CONDITION: Ready STATUS: true MESSAGE: CONDITION: HasErrors STATUS: false MESSAGE: CONDITION: RollingUpdate STATUS: false MESSAGE:",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: className: openshift-default",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: ingress: enabled: false",
"apply -f example-kc.yaml",
"oc create route reencrypt --service=<keycloak-cr-name>-service --cert=<configured-certificate> --key=<certificate-key> --dest-ca-cert=<ca-certificate> --ca-cert=<ca-certificate> --hostname=<hostname>",
"port-forward service/example-kc-service 8443:8443",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: proxy: headers: forwarded|xforwarded",
"get secret example-kc-initial-admin -o jsonpath='{.data.username}' | base64 --decode get secret example-kc-initial-admin -o jsonpath='{.data.password}' | base64 --decode",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> realm:",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> realm: id: example-realm realm: example-realm displayName: ExampleRealm enabled: true",
"apply -f example-realm-import.yaml",
"get keycloakrealmimports/my-realm-kc -o go-template='{{range .status.conditions}}CONDITION: {{.type}}{{\"\\n\"}} STATUS: {{.status}}{{\"\\n\"}} MESSAGE: {{.message}}{{\"\\n\"}}{{end}}'",
"CONDITION: Done STATUS: true MESSAGE: CONDITION: Started STATUS: false MESSAGE: CONDITION: HasErrors STATUS: false MESSAGE:",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: KeycloakRealmImport metadata: name: my-realm-kc spec: keycloakCRName: <name of the keycloak CR> placeholders: ENV_KEY: secret: name: SECRET_NAME key: SECRET_KEY",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: db: vendor: postgres usernameSecret: name: usernameSecret key: usernameSecretKey passwordSecret: name: passwordSecret key: passwordSecretKey host: host database: database port: 123 schema: schema poolInitialSize: 1 poolMinSize: 2 poolMaxSize: 3 http: httpEnabled: true httpPort: 8180 httpsPort: 8543 tlsSecret: my-tls-secret hostname: hostname: https://my-hostname.tld admin: https://my-hostname.tld/admin strict: false backchannelDynamic: true features: enabled: - docker - authorization disabled: - admin - step-up-authentication transaction: xaEnabled: false",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: additionalOptions: - name: spi-connections-http-client-default-connection-pool-size secret: # Secret reference name: http-client-secret # name of the Secret key: poolSize # name of the Key in the Secret - name: spi-email-template-mycustomprovider-enabled value: true # plain text value",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: unsupported: podTemplate: metadata: labels: my-label: \"keycloak\" spec: containers: - volumeMounts: - name: test-volume mountPath: /mnt/test volumes: - name: test-volume secret: secretName: keycloak-additional-secret",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: http: httpEnabled: true hostname: strict: false",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: resources: requests: cpu: 1200m memory: 896Mi limits: cpu: 6 memory: 3Gi",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: scheduling: priorityClassName: custom-high affinity: podAffinity: preferredDuringSchedulingIgnoredDuringExecution: - podAffinityTerm: labelSelector: matchLabels: app: keycloak app.kubernetes.io/managed-by: keycloak-operator app.kubernetes.io/component: server topologyKey: topology.kubernetes.io/zone weight: 10 tolerations: - key: \"some-taint\" operator: \"Exists\" effect: \"NoSchedule\" topologySpreadConstraints: - maxSkew: 1 topologyKey: kubernetes.io/hostname whenUnsatisfiable: DoNotSchedule",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: httpManagement: port: 9001 additionalOptions: - name: http-management-relative-path value: /management",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: truststores: my-truststore: secret: name: my-secret",
"apiVersion: v1 kind: Secret metadata: name: my-secret stringData: cert.pem: | -----BEGIN CERTIFICATE-----",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 image: quay.io/my-company/my-keycloak:latest http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org",
"apiVersion: k8s.keycloak.org/v2alpha1 kind: Keycloak metadata: name: example-kc spec: instances: 1 image: quay.io/my-company/my-keycloak:latest startOptimized: false http: tlsSecret: example-tls-secret hostname: hostname: test.keycloak.org"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html-single/operator_guide//~
|
Chapter 14. Troubleshooting hosted control planes
|
Chapter 14. Troubleshooting hosted control planes If you encounter issues with hosted control planes, see the following information to guide you through troubleshooting. 14.1. Gathering information to troubleshoot hosted control planes When you need to troubleshoot an issue with hosted clusters, you can gather information by running the must-gather command. The command generates output for the management cluster and the hosted cluster. The output for the management cluster contains the following content: Cluster-scoped resources: These resources are node definitions of the management cluster. The hypershift-dump compressed file: This file is useful if you need to share the content with other people. Namespaced resources: These resources include all of the objects from the relevant namespaces, such as config maps, services, events, and logs. Network logs: These logs include the OVN northbound and southbound databases and the status for each one. Hosted clusters: This level of output involves all of the resources inside of the hosted cluster. The output for the hosted cluster contains the following content: Cluster-scoped resources: These resources include all of the cluster-wide objects, such as nodes and CRDs. Namespaced resources: These resources include all of the objects from the relevant namespaces, such as config maps, services, events, and logs. Although the output does not contain any secret objects from the cluster, it can contain references to the names of secrets. Prerequisites You must have cluster-admin access to the management cluster. You need the name value for the HostedCluster resource and the namespace where the CR is deployed. You must have the hcp command-line interface installed. For more information, see "Installing the hosted control planes command-line interface". You must have the OpenShift CLI ( oc ) installed. You must ensure that the kubeconfig file is loaded and is pointing to the management cluster. Procedure To gather the output for troubleshooting, enter the following command: USD oc adm must-gather \ --image=registry.redhat.io/multicluster-engine/must-gather-rhel9:v<mce_version> \ /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE \ hosted-cluster-name=HOSTEDCLUSTERNAME \ --dest-dir=NAME ; tar -cvzf NAME.tgz NAME where: You replace <mce_version> with the version of multicluster engine Operator that you are using; for example, 2.6 . The hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE parameter is optional. If you do not include it, the command runs as though the hosted cluster is in the default namespace, which is clusters . If you want to save the results of the command to a compressed file, specify the --dest-dir=NAME parameter and replace NAME with the name of the directory where you want to save the results. Additional resources Installing the hosted control planes command-line interface 14.2. Entering the must-gather command in a disconnected environment Complete the following steps to run the must-gather command in a disconnected environment. Procedure In a disconnected environment, mirror the Red Hat operator catalog images into their mirror registry. For more information, see Install on disconnected networks . 
Run the following command to extract logs, which reference the image from their mirror registry: REGISTRY=registry.example.com:5000 IMAGE=USDREGISTRY/multicluster-engine/must-gather-rhel8@sha256:ff9f37eb400dc1f7d07a9b6f2da9064992934b69847d17f59e385783c071b9d8 USD oc adm must-gather \ --image=USDIMAGE /usr/bin/gather \ hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE \ hosted-cluster-name=HOSTEDCLUSTERNAME \ --dest-dir=./data Additional resources Install on disconnected networks 14.3. Troubleshooting hosted clusters on OpenShift Virtualization When you troubleshoot a hosted cluster on OpenShift Virtualization, start with the top-level HostedCluster and NodePool resources and then work down the stack until you find the root cause. The following steps can help you discover the root cause of common issues. 14.3.1. HostedCluster resource is stuck in a partial state If a hosted control plane is not coming fully online because a HostedCluster resource is pending, identify the problem by checking prerequisites, resource conditions, and node and Operator status. Procedure Ensure that you meet all of the prerequisites for a hosted cluster on OpenShift Virtualization. View the conditions on the HostedCluster and NodePool resources for validation errors that prevent progress. By using the kubeconfig file of the hosted cluster, inspect the status of the hosted cluster: View the output of the oc get clusteroperators command to see which cluster Operators are pending. View the output of the oc get nodes command to ensure that worker nodes are ready. 14.3.2. No worker nodes are registered If a hosted control plane is not coming fully online because the hosted control plane has no worker nodes registered, identify the problem by checking the status of various parts of the hosted control plane. Procedure View the HostedCluster and NodePool conditions for failures that indicate what the problem might be. Enter the following command to view the KubeVirt worker node virtual machine (VM) status for the NodePool resource: USD oc get vm -n <namespace> If the VMs are stuck in the provisioning state, enter the following command to view the CDI import pods within the VM namespace for clues about why the importer pods have not completed: USD oc get pods -n <namespace> | grep "import" If the VMs are stuck in the starting state, enter the following command to view the status of the virt-launcher pods: USD oc get pods -n <namespace> -l kubevirt.io=virt-launcher If the virt-launcher pods are in a pending state, investigate why the pods are not being scheduled. For example, not enough resources might exist to run the virt-launcher pods. If the VMs are running but they are not registered as worker nodes, use the web console to gain VNC access to one of the affected VMs. The VNC output indicates whether the ignition configuration was applied. If a VM cannot access the hosted control plane ignition server on startup, the VM cannot be provisioned correctly. If the ignition configuration was applied but the VM is still not registering as a node, see Identifying the problem: Access the VM console logs to learn how to access the VM console logs during startup. Additional resources Identifying the problem: Access the VM console logs 14.3.3. Worker nodes are stuck in the NotReady state During cluster creation, nodes enter the NotReady state temporarily while the networking stack is rolled out. This part of the process is normal. However, if this part of the process takes longer than 15 minutes, an issue might have occurred. 
Procedure Identify the problem by investigating the node object and pods: Enter the following command to view the conditions on the node object and determine why the node is not ready: USD oc get nodes -o yaml Enter the following command to look for failing pods within the cluster: USD oc get pods -A --field-selector=status.phase!=Running,status,phase!=Succeeded 14.3.4. Ingress and console cluster operators are not coming online If a hosted control plane is not coming fully online because the Ingress and console cluster Operators are not online, check the wildcard DNS routes and load balancer. Procedure If the cluster uses the default Ingress behavior, enter the following command to ensure that wildcard DNS routes are enabled on the OpenShift Container Platform cluster that the virtual machines (VMs) are hosted on: USD oc patch ingresscontroller -n openshift-ingress-operator \ default --type=json -p \ '[{ "op": "add", "path": "/spec/routeAdmission", "value": {wildcardPolicy: "WildcardsAllowed"}}]' If you use a custom base domain for the hosted control plane, complete the following steps: Ensure that the load balancer is targeting the VM pods correctly. Ensure that the wildcard DNS entry is targeting the load balancer IP address. 14.3.5. Load balancer services for the hosted cluster are not available If a hosted control plane is not coming fully online because the load balancer services are not becoming available, check events, details, and the Kubernetes Cluster Configuration Manager (KCCM) pod. Procedure Look for events and details that are associated with the load balancer service within the hosted cluster. By default, load balancers for the hosted cluster are handled by the kubevirt-cloud-controller-manager within the hosted control plane namespace. Ensure that the KCCM pod is online and view its logs for errors or warnings. To identify the KCCM pod in the hosted control plane namespace, enter the following command: USD oc get pods -n <hosted_control_plane_namespace> \ -l app=cloud-controller-manager 14.3.6. Hosted cluster PVCs are not available If a hosted control plane is not coming fully online because the persistent volume claims (PVCs) for a hosted cluster are not available, check the PVC events and details, and component logs. Procedure Look for events and details that are associated with the PVC to understand which errors are occurring. If a PVC is failing to attach to a pod, view the logs for the kubevirt-csi-node daemonset component within the hosted cluster to further investigate the problem. To identify the kubevirt-csi-node pods for each node, enter the following command: USD oc get pods -n openshift-cluster-csi-drivers -o wide \ -l app=kubevirt-csi-driver If a PVC cannot bind to a persistent volume (PV), view the logs of the kubevirt-csi-controller component within the hosted control plane namespace. To identify the kubevirt-csi-controller pod within the hosted control plane namespace, enter the following command: USD oc get pods -n <hcp namespace> -l app=kubevirt-csi-driver 14.3.7. VM nodes are not correctly joining the cluster If a hosted control plane is not coming fully online because the VM nodes are not correctly joining the cluster, access the VM console logs. Procedure To access the VM console logs, complete the steps in How to get serial console logs for VMs part of OpenShift Virtualization Hosted Control Plane clusters . 14.3.8. 
RHCOS image mirroring fails For hosted control planes on OpenShift Virtualization in a disconnected environment, oc-mirror fails to automatically mirror the Red Hat Enterprise Linux CoreOS (RHCOS) image to the internal registry. When you create your first hosted cluster, the Kubevirt virtual machine does not boot, because the boot image is not available in the internal registry. To resolve this issue, manually mirror the RHCOS image to the internal registry. Procedure Get the internal registry name by running the following command: USD oc get imagecontentsourcepolicy -o json \ | jq -r '.items[].spec.repositoryDigestMirrors[0].mirrors[0]' Get a payload image by running the following command: USD oc get clusterversion version -ojsonpath='{.status.desired.image}' Extract the 0000_50_installer_coreos-bootimages.yaml file that contains boot images from your payload image on the hosted cluster. Replace <payload_image> with the name of your payload image. Run the following command: USD oc image extract \ --file /release-manifests/0000_50_installer_coreos-bootimages.yaml \ <payload_image> --confirm Get the RHCOS image by running the following command: USD cat 0000_50_installer_coreos-bootimages.yaml | yq -r .data.stream \ | jq -r '.architectures.x86_64.images.kubevirt."digest-ref"' Mirror the RHCOS image to your internal registry. Replace <rhcos_image> with your RHCOS image; for example, quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d9643ead36b1c026be664c9c65c11433c6cdf71bfd93ba229141d134a4a6dd94 . Replace <internal_registry> with the name of your internal registry; for example, virthost.ostest.test.metalkube.org:5000/localimages/ocp-v4.0-art-dev . Run the following command: USD oc image mirror <rhcos_image> <internal_registry> Create a YAML file named rhcos-boot-kubevirt.yaml that defines the ImageDigestMirrorSet object. See the following example configuration: apiVersion: config.openshift.io/v1 kind: ImageDigestMirrorSet metadata: name: rhcos-boot-kubevirt spec: repositoryDigestMirrors: - mirrors: - <rhcos_image_no_digest> 1 source: virthost.ostest.test.metalkube.org:5000/localimages/ocp-v4.0-art-dev 2 1 Specify your RHCOS image without its digest, for example, quay.io/openshift-release-dev/ocp-v4.0-art-dev . 2 Specify the name of your internal registry, for example, virthost.ostest.test.metalkube.org:5000/localimages/ocp-v4.0-art-dev . Apply the rhcos-boot-kubevirt.yaml file to create the ImageDigestMirrorSet object by running the following command: USD oc apply -f rhcos-boot-kubevirt.yaml 14.3.9. Return non-bare-metal clusters to the late binding pool If you are using late binding managed clusters without BareMetalHosts , you must complete additional manual steps to delete a late binding cluster and return the nodes back to the Discovery ISO. For late binding managed clusters without BareMetalHosts , removing cluster information does not automatically return all nodes to the Discovery ISO. Procedure To unbind the non-bare-metal nodes with late binding, complete the following steps: Remove the cluster information. For more information, see Removing a cluster from management . Clean the root disks. Reboot manually with the Discovery ISO. Additional resources Removing a cluster from management 14.4. Troubleshooting hosted clusters on bare metal The following information applies to troubleshooting hosted control planes on bare metal. 14.4.1. 
Nodes fail to be added to hosted control planes on bare metal When you scale up a hosted control planes cluster with nodes that were provisioned by using Assisted Installer, the host fails to pull the ignition with a URL that contains port 22642. That URL is invalid for hosted control planes and indicates that an issue exists with the cluster. Procedure To determine the issue, review the assisted-service logs: USD oc logs -n multicluster-engine <assisted_service_pod_name> 1 1 Specify the Assisted Service pod name. In the logs, find errors that resemble these examples: error="failed to get pull secret for update: invalid pull secret data in secret pull-secret" pull secret must contain auth for \"registry.redhat.io\" To fix this issue, see "Add the pull secret to the namespace" in the multicluster engine for Kubernetes Operator documentation. Note To use hosted control planes, you must have multicluster engine Operator installed, either as a standalone operator or as part of Red Hat Advanced Cluster Management. Because the operator has a close association with Red Hat Advanced Cluster Management, the documentation for the operator is published within that product's documentation. Even if you do not use Red Hat Advanced Cluster Management, the parts of its documentation that cover multicluster engine Operator are relevant to hosted control planes. Additional resources Add the pull secret to the namespace 14.5. Restarting hosted control plane components If you are an administrator for hosted control planes, you can use the hypershift.openshift.io/restart-date annotation to restart all control plane components for a particular HostedCluster resource. For example, you might need to restart control plane components for certificate rotation. Procedure To restart a control plane, annotate the HostedCluster resource by entering the following command: USD oc annotate hostedcluster \ -n <hosted_cluster_namespace> \ <hosted_cluster_name> \ hypershift.openshift.io/restart-date=USD(date --iso-8601=seconds) 1 1 The control plane is restarted whenever the value of the annotation changes. The date command serves as the source of a unique string. The annotation is treated as a string, not a timestamp. Verification After you restart a control plane, the following hosted control planes components are typically restarted: Note You might see some additional components restarting as a side effect of changes implemented by the other components. catalog-operator certified-operators-catalog cluster-api cluster-autoscaler cluster-policy-controller cluster-version-operator community-operators-catalog control-plane-operator hosted-cluster-config-operator ignition-server ingress-operator konnectivity-agent konnectivity-server kube-apiserver kube-controller-manager kube-scheduler machine-approver oauth-openshift olm-operator openshift-apiserver openshift-controller-manager openshift-oauth-apiserver packageserver redhat-marketplace-catalog redhat-operators-catalog 14.6. Pausing the reconciliation of a hosted cluster and hosted control plane If you are a cluster instance administrator, you can pause the reconciliation of a hosted cluster and hosted control plane. You might want to pause reconciliation when you back up and restore an etcd database or when you need to debug problems with a hosted cluster or hosted control plane. Procedure To pause reconciliation for a hosted cluster and hosted control plane, populate the pausedUntil field of the HostedCluster resource. 
To pause the reconciliation until a specific time, enter the following command: USD oc patch -n <hosted_cluster_namespace> \ hostedclusters/<hosted_cluster_name> \ -p '{"spec":{"pausedUntil":"<timestamp>"}}' \ --type=merge 1 1 Specify a timestamp in the RFC 3339 format, for example, 2024-03-03T03:28:48Z . The reconciliation is paused until the specified time has passed. To pause the reconciliation indefinitely, enter the following command: USD oc patch -n <hosted_cluster_namespace> \ hostedclusters/<hosted_cluster_name> \ -p '{"spec":{"pausedUntil":"true"}}' \ --type=merge The reconciliation is paused until you remove the field from the HostedCluster resource. When the pause reconciliation field is populated for the HostedCluster resource, the field is automatically added to the associated HostedControlPlane resource. To remove the pausedUntil field, enter the following patch command: USD oc patch -n <hosted_cluster_namespace> \ hostedclusters/<hosted_cluster_name> \ -p '{"spec":{"pausedUntil":null}}' \ --type=merge 14.7. Scaling down the data plane to zero If you are not using the hosted control plane, you can scale down the data plane to zero to save resources and cost. Note Ensure that you are prepared to scale down the data plane to zero, because the workloads on the worker nodes disappear after scaling down. Procedure Set the kubeconfig file to access the hosted cluster by running the following command: USD export KUBECONFIG=<install_directory>/auth/kubeconfig Get the name of the NodePool resource associated with your hosted cluster by running the following command: USD oc get nodepool --namespace <hosted_cluster_namespace> Optional: To prevent the pods from draining, add the nodeDrainTimeout field in the NodePool resource by running the following command: USD oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace> Example output apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: # ... name: nodepool-1 namespace: clusters # ... spec: arch: amd64 clusterName: clustername 1 management: autoRepair: false replace: rollingUpdate: maxSurge: 1 maxUnavailable: 0 strategy: RollingUpdate upgradeType: Replace nodeDrainTimeout: 0s 2 # ... 1 Defines the name of your hosted cluster. 2 Specifies the total amount of time that the controller spends to drain a node. By default, the nodeDrainTimeout: 0s setting blocks the node draining process. Note To allow the node draining process to continue for a certain period of time, you can set the value of the nodeDrainTimeout field accordingly, for example, nodeDrainTimeout: 1m . Scale down the NodePool resource associated with your hosted cluster by running the following command: USD oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> \ --replicas=0 Note After scaling down the data plane to zero, some pods in the control plane stay in the Pending status and the hosted control plane stays up and running. If necessary, you can scale up the NodePool resource. Optional: Scale up the NodePool resource associated with your hosted cluster by running the following command: USD oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=1 After rescaling the NodePool resource, wait a couple of minutes for the NodePool resource to become available in a Ready state.
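For example, rather than polling manually while the NodePool resource settles, you can watch it until the reported node counts reach the expected values; this is a sketch and the resource names are placeholders:

oc get nodepool <nodepool_name> --namespace <hosted_cluster_namespace> -w

The -w flag streams updates to the listing until you interrupt the command.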
Verification Verify that the value for the nodeDrainTimeout field is greater than 0s by running the following command: USD oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -ojsonpath='{.spec.nodeDrainTimeout}' Additional resources Must-gather for a hosted cluster
|
[
"oc adm must-gather --image=registry.redhat.io/multicluster-engine/must-gather-rhel9:v<mce_version> /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=NAME ; tar -cvzf NAME.tgz NAME",
"REGISTRY=registry.example.com:5000 IMAGE=USDREGISTRY/multicluster-engine/must-gather-rhel8@sha256:ff9f37eb400dc1f7d07a9b6f2da9064992934b69847d17f59e385783c071b9d8 oc adm must-gather --image=USDIMAGE /usr/bin/gather hosted-cluster-namespace=HOSTEDCLUSTERNAMESPACE hosted-cluster-name=HOSTEDCLUSTERNAME --dest-dir=./data",
"oc get vm -n <namespace>",
"oc get pods -n <namespace> | grep \"import\"",
"oc get pods -n <namespace> -l kubevirt.io=virt-launcher",
"oc get nodes -o yaml",
"oc get pods -A --field-selector=status.phase!=Running,status,phase!=Succeeded",
"oc patch ingresscontroller -n openshift-ingress-operator default --type=json -p '[{ \"op\": \"add\", \"path\": \"/spec/routeAdmission\", \"value\": {wildcardPolicy: \"WildcardsAllowed\"}}]'",
"oc get pods -n <hosted_control_plane_namespace> -l app=cloud-controller-manager",
"oc get pods -n openshift-cluster-csi-drivers -o wide -l app=kubevirt-csi-driver",
"oc get pods -n <hcp namespace> -l app=kubevirt-csi-driver",
"oc get imagecontentsourcepolicy -o json | jq -r '.items[].spec.repositoryDigestMirrors[0].mirrors[0]'",
"oc get clusterversion version -ojsonpath='{.status.desired.image}'",
"oc image extract --file /release-manifests/0000_50_installer_coreos-bootimages.yaml <payload_image> --confirm",
"cat 0000_50_installer_coreos-bootimages.yaml | yq -r .data.stream | jq -r '.architectures.x86_64.images.kubevirt.\"digest-ref\"'",
"oc image mirror <rhcos_image> <internal_registry>",
"apiVersion: config.openshift.io/v1 kind: ImageDigestMirrorSet metadata: name: rhcos-boot-kubevirt spec: repositoryDigestMirrors: - mirrors: - <rhcos_image_no_digest> 1 source: virthost.ostest.test.metalkube.org:5000/localimages/ocp-v4.0-art-dev 2",
"oc apply -f rhcos-boot-kubevirt.yaml",
"oc logs -n multicluster-engine <assisted_service_pod_name> 1",
"error=\"failed to get pull secret for update: invalid pull secret data in secret pull-secret\"",
"pull secret must contain auth for \\\"registry.redhat.io\\\"",
"oc annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> hypershift.openshift.io/restart-date=USD(date --iso-8601=seconds) 1",
"oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"<timestamp>\"}}' --type=merge 1",
"oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":\"true\"}}' --type=merge",
"oc patch -n <hosted_cluster_namespace> hostedclusters/<hosted_cluster_name> -p '{\"spec\":{\"pausedUntil\":null}}' --type=merge",
"export KUBECONFIG=<install_directory>/auth/kubeconfig",
"oc get nodepool --namespace <hosted_cluster_namespace>",
"oc edit nodepool <nodepool_name> --namespace <hosted_cluster_namespace>",
"apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: arch: amd64 clusterName: clustername 1 management: autoRepair: false replace: rollingUpdate: maxSurge: 1 maxUnavailable: 0 strategy: RollingUpdate upgradeType: Replace nodeDrainTimeout: 0s 2",
"oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=0",
"oc scale nodepool/<nodepool_name> --namespace <hosted_cluster_namespace> --replicas=1",
"oc get nodepool -n <hosted_cluster_namespace> <nodepool_name> -ojsonpath='{.spec.nodeDrainTimeout}'"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/hosted_control_planes/troubleshooting-hosted-control-planes
|
Installing on Azure Stack Hub
|
Installing on Azure Stack Hub OpenShift Container Platform 4.17 Installing OpenShift Container Platform on Azure Stack Hub Red Hat OpenShift Documentation Team
|
[
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{\"auths\": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{\"auths\": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1",
"az cloud set -n AzureStackCloud",
"az cloud update --profile 2019-03-01-hybrid",
"az login",
"az account list --refresh",
"[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az account set -s <subscription_id> 1",
"az account show",
"{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }",
"az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3",
"Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }",
"mkdir <installation_directory>",
"compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1",
"platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4",
"apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 6 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 7 baseDomainResourceGroupName: resource_group 8 region: azure_stack_local_region 9 resourceGroupName: existing_resource_group 10 outboundType: Loadbalancer cloudName: AzureStackCloud 11 pullSecret: '{\"auths\": ...}' 12 fips: false 13 additionalTrustBundle: | 14 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 15",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5",
"export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml",
"rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}",
"spec: trustedCA: name: user-ca-bundle",
"export INFRA_ID=<infra_id> 1",
"export RESOURCE_GROUP=<resource_group> 1",
"openshift-install version",
"release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region}",
"apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: \"true\" data: disabled: \"true\"",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig",
"? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift",
"ls USDHOME/clusterconfig/openshift/",
"99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.17.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}",
"az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS",
"export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd",
"az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}",
"az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"",
"az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/azurestack/01_vnet.json[]",
"export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters storageAccount=\"USD{CLUSTER_NAME}sa\" \\ 3 --parameters architecture=\"<architecture>\" 4",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/azurestack/02_storage.json[]",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters baseName=\"USD{INFRA_ID}\" 1",
"export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`",
"export PRIVATE_IP=`az network lb frontend-ip show -g \"USDRESOURCE_GROUP\" --lb-name \"USD{INFRA_ID}-internal\" -n internal-lb-ip --query \"privateIpAddress\" -o tsv`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z \"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" -n api-int -a USD{PRIVATE_IP} --ttl 60",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/azurestack/03_infra.json[]",
"bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`",
"export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"export CA=\"data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\\n')\"",
"export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url \"USDBOOTSTRAP_URL\" --arg cert \"USDCA\" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`",
"az deployment group create --verbose -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/azurestack/04_bootstrap.json[]",
"export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/azurestack/05_masters.json[]",
"./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2",
"az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip",
"export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`",
"az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3",
"link:https://raw.githubusercontent.com/openshift/installer/release-4.17/upi/azurestack/06_workers.json[]",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3",
"oc -n openshift-ingress get service router-default",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20",
"export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300",
"oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes",
"oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: azure: osDisk: diskSizeGB:",
"compute: platform: azure: osDisk: diskType:",
"compute: platform: azure: type:",
"controlPlane: platform: azure: osDisk: diskSizeGB:",
"controlPlane: platform: azure: osDisk: diskType:",
"controlPlane: platform: azure: type:",
"platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: azure: defaultMachinePlatform: osDisk: diskType:",
"platform: azure: defaultMachinePlatform: type:",
"platform: azure: armEndpoint:",
"platform: azure: baseDomainResourceGroupName:",
"platform: azure: region:",
"platform: azure: resourceGroupName:",
"platform: azure: outboundType:",
"platform: azure: cloudName:",
"clusterOSImage:",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/installing_on_azure_stack_hub/index
|
13.6. Defining Automatic Group Membership for Users and Hosts
|
13.6. Defining Automatic Group Membership for Users and Hosts 13.6.1. How Automatic Group Membership Works in IdM 13.6.1.1. What Automatic Group Membership Is Using automatic group membership, you can assign users and hosts to groups automatically based on their attributes. For example, you can: Divide employees' user entries into groups based on the employees' manager, location, or any other attribute. Divide hosts based on their class, location, or any other attribute. Add all users or all hosts to a single global group. 13.6.1.2. Benefits of Automatic Group Membership Reduced overhead of managing group membership manually With automatic group membership, the administrator no longer assigns users and hosts to groups manually. Improved consistency in user and host management With automatic group membership, users and hosts are assigned to groups based on strictly defined and automatically evaluated criteria. Easier management of group-based settings Various settings are defined for groups and then applied to individual group members, for example sudo rules, automount , or access control. When using automatic group membership, users and hosts are automatically added to specified groups, which makes managing group-based settings easier. 13.6.1.3. Automember Rules When configuring automatic group membership, the administrator defines automember rules . An automember rule applies to a specific user or host group. It includes conditions that the user or host must meet to be included or excluded from the group: Inclusive conditions When a user or host entry meets an inclusive condition, it will be included in the group. Exclusive conditions When a user or host entry meets an exclusive condition, it will not be included in the group. The conditions are specified as regular expressions in the Perl-compatible regular expressions (PCRE) format. For more information on PCRE, see the pcresyntax (3) man page. IdM evaluates exclusive conditions before inclusive conditions. In case of a conflict, exclusive conditions take precedence over inclusive conditions. 13.6.2. Adding an Automember Rule To add an automember rule using: The IdM web UI, see the section called "Web UI: Add an Automember Rule" The command line, see the section called "Command Line: Add an Automember Rule" After you add an automember rule: All entries created in the future will become members of the specified group. If an entry meets conditions specified in multiple automember rules, it will be added to all the corresponding groups. Existing entries will not become members of the specified group. See Section 13.6.3, "Applying Automember Rules to Existing Users and Hosts" for more information. Web UI: Add an Automember Rule Select Identity Automember User group rules or Host group rules . Click Add . In the Automember rule field, select the group to which the rule will apply. Click Add and Edit . Define one or more inclusive and exclusive conditions. See Section 13.6.1.3, "Automember Rules" for details. In the Inclusive or Exclusive sections, click Add . In the Attribute field, select the required attribute. In the Expression field, define the regular expression. Click Add . For example, the following condition targets all users with any value ( .* ) in their user login attribute ( uid ). Figure 13.5. Adding Automember Rule Conditions Command Line: Add an Automember Rule Use the ipa automember-add command to add an automember rule. When prompted, specify: Automember rule , which matches the target group name. 
Grouping Type , which specifies whether the rule targets a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . For example, to add an automember rule for a user group named user_group : Define one or more inclusive and exclusive conditions. See Section 13.6.1.3, "Automember Rules" for details. To add a condition, use the ipa automember-add-condition command. When prompted, specify: Automember rule , which matches the target group name. Attribute Key , which specifies the entry attribute to which the filter will apply. For example, manager for users. Grouping Type , which specifies whether the rule targets a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . Inclusive regex and Exclusive regex , which specify one or more conditions as regular expressions. If you only want to specify one condition, press Enter when prompted for the other. For example, the following condition targets all users with any value ( .* ) in their user login attribute ( uid ). To remove a condition, use the ipa automember-remove-condition command. Example 13.5. Command Line: Creating an Automember Rule to Add All Entries to a Single Group By creating an inclusive condition for an attribute that all user or host entries contain, such as cn or fqdn , you can ensure that all users or hosts created in the future will be added to a single group. Create the group, such as a host group named all_hosts . See Section 13.2, "Adding and Removing User or Host Groups" . Add an automember rule for the new host group. For example: Add an inclusive condition that targets all hosts. In the following example, the inclusive condition targets hosts that have any value ( .* ) in the fqdn attribute: All hosts added in the future will automatically become members of the all_hosts group. Example 13.6. Command Line: Creating an Automember Rule for Synchronized AD Users Windows users synchronized from Active Directory (AD) share the ntUser object class. By creating an automember condition that targets all users with ntUser in their objectclass attribute, you can ensure that all synchronized AD users created in the future will be included in a common group for AD users. Create a user group for the AD users, such as ad_users . See Section 13.2, "Adding and Removing User or Host Groups" . Add an automember rule for the new user group. For example: Add an inclusive condition to filter the AD users. In the following example, the inclusive condition targets all users that have the ntUser value in the objectclass attribute: All AD users added in the future will automatically become members of the ad_users user group. 13.6.3. Applying Automember Rules to Existing Users and Hosts Automember rules apply automatically to user and hosts entries created after the rules were added. They are not applied retrospectively to entries that existed before the rules were added. To apply automember rules to entries that existed before you added the rules, manually rebuild automatic membership. Rebuilding automatic membership re-evaluates all existing automember rules and applies them either to all entries or to specific entries. Web UI: Rebuild Automatic Membership for Existing Entries To rebuild automatic membership for all users or all hosts: Select Identity Users or Hosts . Click Actions Rebuild auto membership . Figure 13.6. 
Rebuilding Automatic Membership for All Users or Hosts To rebuild automatic membership for a single user or host only: Select Identity Users or Hosts , and click on the required user login or host name. Click Actions Rebuild auto membership . Figure 13.7. Rebuilding Automatic Membership for a Single User or Host Command Line: Rebuild Automatic Memberships for Existing Entries To rebuild automatic membership for all users, use the ipa automember-rebuild --type=group command: To rebuild automatic membership for all hosts, use the ipa automember-rebuild --type=hostgroup command. To rebuild automatic membership for a specified user or users, use the ipa automember-rebuild --users= user command: To rebuild automatic membership for a specified host or hosts, use the ipa automember-rebuild --hosts= example.com command. 13.6.4. Configuring a Default Automember Group When a default automember group is configured, user or host entries that do not match any automember rule are automatically added to the default group. Use the ipa automember-default-group-set command to configure a default automember group. When prompted, specify: Default (fallback) Group , which specifies the target group name. Grouping Type , which specifies whether the target is a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . For example: To verify that the group is set correctly, use the ipa automember-default-group-show command. The command displays the current default automember group. For example: To remove the current default automember group, use the ipa automember-default-group-remove command.
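As a brief, illustrative sketch of the host-side counterparts described above (the host group name default_host_group and the host name client1.example.com are placeholders, not values taken from this guide), the commands follow the same interactive pattern as the user-group examples listed below:
ipa automember-rebuild --type=hostgroup
ipa automember-rebuild --hosts=client1.example.com
ipa automember-default-group-set
Default (fallback) Group: default_host_group
Grouping Type: hostgroup
Each command finishes with a short confirmation summary in the same format shown for the user-group examples.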
|
[
"ipa automember-add Automember Rule: user_group Grouping Type: group -------------------------------- Added automember rule \"user_group\" -------------------------------- Automember Rule: user_group",
"ipa automember-add-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ---------------------------------- Added condition(s) to \"user_group\" ---------------------------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-add Automember Rule: all_hosts Grouping Type: hostgroup ------------------------------------- Added automember rule \"all_hosts\" ------------------------------------- Automember Rule: all_hosts",
"ipa automember-add-condition Automember Rule: all_hosts Attribute Key: fqdn Grouping Type: hostgroup [Inclusive Regex]: .* [Exclusive Regex]: --------------------------------- Added condition(s) to \"all_hosts\" --------------------------------- Automember Rule: all_hosts Inclusive Regex: fqdn=.* ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-add Automember Rule: ad_users Grouping Type: group ------------------------------------- Added automember rule \"ad_users\" ------------------------------------- Automember Rule: ad_users",
"ipa automember-add-condition Automember Rule: ad_users Attribute Key: objectclass Grouping Type: group [Inclusive Regex]: ntUser [Exclusive Regex]: ------------------------------------- Added condition(s) to \"ad_users\" ------------------------------------- Automember Rule: ad_users Inclusive Regex: objectclass=ntUser ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-rebuild --type=group -------------------------------------------------------- Automember rebuild task finished. Processed (9) entries. --------------------------------------------------------",
"ipa automember-rebuild --users= user1 --users= user2 -------------------------------------------------------- Automember rebuild task finished. Processed (2) entries. --------------------------------------------------------",
"ipa automember-default-group-set Default (fallback) Group: default_user_group Grouping Type: group --------------------------------------------------- Set default (fallback) group for automember \"default_user_group\" --------------------------------------------------- Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"ipa automember-default-group-show Grouping Type: group Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/automember
|
Chapter 6. Prometheus [monitoring.coreos.com/v1]
|
Chapter 6. Prometheus [monitoring.coreos.com/v1] Description Prometheus defines a Prometheus deployment. Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of the Prometheus cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status status object Most recent observed status of the Prometheus cluster. Read-only. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 6.1.1. .spec Description Specification of the desired behavior of the Prometheus cluster. More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Property Type Description additionalAlertManagerConfigs object AdditionalAlertManagerConfigs allows specifying a key of a Secret containing additional Prometheus AlertManager configurations. AlertManager configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alertmanager_config . As AlertManager configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade. additionalAlertRelabelConfigs object AdditionalAlertRelabelConfigs allows specifying a key of a Secret containing additional Prometheus alert relabel configurations. Alert relabel configurations specified are appended to the configurations generated by the Prometheus Operator. Alert relabel configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs . As alert relabel configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel configs are going to break Prometheus after the upgrade. additionalArgs array AdditionalArgs allows setting additional arguments for the Prometheus container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. 
The arguments are passed as-is to the Prometheus container which may cause issues if they are invalid or not supported by the given Prometheus version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. additionalScrapeConfigs object AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config . As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade. affinity object If specified, the pod's scheduling constraints. alerting object Define details regarding alerting. allowOverlappingBlocks boolean AllowOverlappingBlocks enables vertical compaction and vertical query merge in Prometheus. This is still experimental in Prometheus so it may change in any upcoming release. apiserverConfig object APIServerConfig allows specifying a host and auth methods to access apiserver. If left empty, Prometheus is assumed to run inside of the cluster and will discover API servers automatically and use the pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/. arbitraryFSAccessThroughSMs object ArbitraryFSAccessThroughSMs configures whether configuration based on a service monitor can access arbitrary files on the file system of the Prometheus container e.g. bearer token files. baseImage string Base image to use for a Prometheus deployment. Deprecated: use 'image' instead configMaps array (string) ConfigMaps is a list of ConfigMaps in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. Each ConfigMap is added to the StatefulSet definition as a volume named configmap-<configmap-name> . The ConfigMaps are mounted into /etc/prometheus/configmaps/<configmap-name> in the 'prometheus' container. containers array Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to a Prometheus pod or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: prometheus , config-reloader , and thanos-sidecar . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. containers[] object A single application container that you want to run within a pod. disableCompaction boolean Disable prometheus compaction. enableAdminAPI boolean Enable access to prometheus web admin API. Defaults to the value of false . WARNING: Enabling the admin APIs enables mutating endpoints, to delete data, shutdown Prometheus, and more. 
Enabling this should be done with care and the user is advised to add additional authentication authorization via a proxy to ensure only clients authorized to perform these actions can do so. For more information see https://prometheus.io/docs/prometheus/latest/querying/api/#tsdb-admin-apis enableFeatures array (string) Enable access to Prometheus disabled features. By default, no features are enabled. Enabling disabled features is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. For more information see https://prometheus.io/docs/prometheus/latest/disabled_features/ enableRemoteWriteReceiver boolean Enable Prometheus to be used as a receiver for the Prometheus remote write protocol. Defaults to the value of false . WARNING: This is not considered an efficient way of ingesting samples. Use it with caution for specific low-volume use cases. It is not suitable for replacing the ingestion via scraping and turning Prometheus into a push-based metrics collection system. For more information see https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver Only valid in Prometheus versions 2.33.0 and newer. enforcedBodySizeLimit string EnforcedBodySizeLimit defines the maximum size of uncompressed response body that will be accepted by Prometheus. Targets responding with a body larger than this many bytes will cause the scrape to fail. Example: 100MB. If defined, the limit will apply to all service/pod monitors and probes. This is an experimental feature, this behaviour could change or be removed in the future. Only valid in Prometheus versions 2.28.0 and newer. enforcedLabelLimit integer Per-scrape limit on number of labels that will be accepted for a sample. If more than this number of labels are present post metric-relabeling, the entire scrape will be treated as failed. 0 means no limit. Only valid in Prometheus versions 2.27.0 and newer. enforcedLabelNameLengthLimit integer Per-scrape limit on length of labels name that will be accepted for a sample. If a label name is longer than this number post metric-relabeling, the entire scrape will be treated as failed. 0 means no limit. Only valid in Prometheus versions 2.27.0 and newer. enforcedLabelValueLengthLimit integer Per-scrape limit on length of labels value that will be accepted for a sample. If a label value is longer than this number post metric-relabeling, the entire scrape will be treated as failed. 0 means no limit. Only valid in Prometheus versions 2.27.0 and newer. enforcedNamespaceLabel string EnforcedNamespaceLabel If set, a label will be added to 1. all user-metrics (created by ServiceMonitor , PodMonitor and Probe objects) and 2. in all PrometheusRule objects (except the ones excluded in prometheusRulesExcludedFromEnforce ) to * alerting & recording rules and * the metrics used in their expressions ( expr ). Label name is this field's value. Label value is the namespace of the created object (mentioned above). enforcedSampleLimit integer EnforcedSampleLimit defines global limit on number of scraped samples that will be accepted. This overrides any SampleLimit set per ServiceMonitor or/and PodMonitor. It is meant to be used by admins to enforce the SampleLimit to keep overall number of samples/series under the desired limit. Note that if SampleLimit is lower that value will be taken instead. enforcedTargetLimit integer EnforcedTargetLimit defines a global limit on the number of scraped targets. 
This overrides any TargetLimit set per ServiceMonitor or/and PodMonitor. It is meant to be used by admins to enforce the TargetLimit to keep the overall number of targets under the desired limit. Note that if TargetLimit is lower, that value will be taken instead, except if either value is zero, in which case the non-zero value will be used. If both values are zero, no limit is enforced. evaluationInterval string Interval between consecutive evaluations. Default: 30s excludedFromEnforcement array List of references to PodMonitor, ServiceMonitor, Probe and PrometheusRule objects to be excluded from enforcing a namespace label of origin. Applies only if enforcedNamespaceLabel set to true. excludedFromEnforcement[] object ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. exemplars object Exemplars related settings that are runtime reloadable. It requires to enable the exemplar storage feature to be effective. externalLabels object (string) The labels to add to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager). externalUrl string The external URL the Prometheus instances will be available under. This is necessary to generate correct URLs. This is necessary if Prometheus is not served from root of a DNS name. hostAliases array Pods' hostAliases configuration hostAliases[] object HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. hostNetwork boolean Use the host's network namespace if true. Make sure to understand the security implications if you want to enable it. When hostNetwork is enabled, this will set dnsPolicy to ClusterFirstWithHostNet automatically. ignoreNamespaceSelectors boolean IgnoreNamespaceSelectors if set to true will ignore NamespaceSelector settings from all PodMonitor, ServiceMonitor and Probe objects. They will only discover endpoints within the namespace of the PodMonitor, ServiceMonitor and Probe objects. Defaults to false. image string Image if specified has precedence over baseImage, tag and sha combinations. Specifying the version is still necessary to ensure the Prometheus Operator knows what version of Prometheus is being configured. imagePullSecrets array An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod imagePullSecrets[] object LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. initContainers array InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Prometheus configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify an operator generated init containers if they share the same name and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. initContainers[] object A single application container that you want to run within a pod. 
listenLocal boolean ListenLocal makes the Prometheus server listen on loopback, so that it does not bind against the Pod IP. logFormat string Log format for Prometheus to be configured with. logLevel string Log level for Prometheus to be configured with. minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) This is an alpha field and requires enabling StatefulSetMinReadySeconds feature gate. nodeSelector object (string) Define which Nodes the Pods are scheduled on. overrideHonorLabels boolean When true, Prometheus resolves label conflicts by renaming the labels in the scraped data to "exported_<label value>" for all targets created from service and pod monitors. Otherwise the HonorLabels field of the service or pod monitor applies. overrideHonorTimestamps boolean When true, Prometheus ignores the timestamps for all the targets created from service and pod monitors. Otherwise the HonorTimestamps field of the service or pod monitor applies. paused boolean When a Prometheus deployment is paused, no actions except for deletion will be performed on the underlying objects. podMetadata object PodMetadata configures Labels and Annotations which are propagated to the prometheus pods. podMonitorNamespaceSelector object Namespace's labels to match for PodMonitor discovery. If nil, only check own namespace. podMonitorSelector object Experimental PodMonitors to be selected for target discovery. Deprecated: if neither this nor serviceMonitorSelector are specified, configuration is unmanaged. portName string Port name used for the pods and governing service. This defaults to web priorityClassName string Priority class assigned to the Pods probeNamespaceSelector object Experimental Namespaces to be selected for Probe discovery. If nil, only check own namespace. probeSelector object Experimental Probes to be selected for target discovery. prometheusExternalLabelName string Name of Prometheus external label used to denote Prometheus instance name. Defaults to the value of prometheus . External label will not be added when value is set to empty string ( "" ). prometheusRulesExcludedFromEnforce array PrometheusRulesExcludedFromEnforce - list of prometheus rules to be excluded from enforcing of adding namespace labels. Works only if enforcedNamespaceLabel set to true. Make sure both ruleNamespace and ruleName are set for each pair. Deprecated: use excludedFromEnforcement instead. prometheusRulesExcludedFromEnforce[] object PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. query object QuerySpec defines the query command line flags when starting Prometheus. queryLogFile string QueryLogFile specifies the file to which PromQL queries are logged. If the filename has an empty path, e.g. 'query.log', prometheus-operator will mount the file into an emptyDir volume at /var/log/prometheus . If a full path is provided, e.g. /var/log/prometheus/query.log, you must mount a volume in the specified directory and it must be writable. This is because the prometheus container runs with a read-only root filesystem for security reasons. Alternatively, the location can be set to a stdout location such as /dev/stdout to log query information to the default Prometheus log stream. 
This is only available in versions of Prometheus >= 2.16.0. For more details, see the Prometheus docs ( https://prometheus.io/docs/guides/query-log/ ) remoteRead array remoteRead is the list of remote read configurations. remoteRead[] object RemoteReadSpec defines the configuration for Prometheus to read back samples from a remote endpoint. remoteWrite array remoteWrite is the list of remote write configurations. remoteWrite[] object RemoteWriteSpec defines the configuration to write samples from Prometheus to a remote endpoint. replicaExternalLabelName string Name of Prometheus external label used to denote replica name. Defaults to the value of prometheus_replica . External label will not be added when value is set to empty string ( "" ). replicas integer Number of replicas of each shard to deploy for a Prometheus deployment. Number of replicas multiplied by shards is the total number of Pods created. resources object Define resources requests and limits for single Pods. retention string Time duration Prometheus shall retain data for. Default is '24h' if retentionSize is not set, and must match the regular expression [0-9]+(ms|s|m|h|d|w|y) (milliseconds seconds minutes hours days weeks years). retentionSize string Maximum amount of disk space used by blocks. routePrefix string The route prefix Prometheus registers HTTP handlers for. This is useful, if using ExternalURL and a proxy is rewriting HTTP routes of a request, and the actual ExternalURL is still true, but the server serves requests under a different route prefix. For example for use with kubectl proxy . ruleNamespaceSelector object Namespaces to be selected for PrometheusRules discovery. If unspecified, only the same namespace as the Prometheus object is in is used. ruleSelector object A selector to select which PrometheusRules to mount for loading alerting/recording rules from. Until (excluding) Prometheus Operator v0.24.0 Prometheus Operator will migrate any legacy rule ConfigMaps to PrometheusRule custom resources selected by RuleSelector. Make sure it does not match any config maps that you do not want to be migrated. rules object /--rules.*/ command-line arguments. scrapeInterval string Interval between consecutive scrapes. Default: 30s scrapeTimeout string Number of seconds to wait for target to respond before erroring. secrets array (string) Secrets is a list of Secrets in the same namespace as the Prometheus object, which shall be mounted into the Prometheus Pods. Each Secret is added to the StatefulSet definition as a volume named secret-<secret-name> . The Secrets are mounted into /etc/prometheus/secrets/<secret-name> in the 'prometheus' container. securityContext object SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. serviceAccountName string ServiceAccountName is the name of the ServiceAccount to use to run the Prometheus Pods. serviceMonitorNamespaceSelector object Namespace's labels to match for ServiceMonitor discovery. If nil, only check own namespace. serviceMonitorSelector object ServiceMonitors to be selected for target discovery. Deprecated: if neither this nor podMonitorSelector are specified, configuration is unmanaged. sha string SHA of Prometheus container image to be deployed. Defaults to the value of version . Similar to a tag, but the SHA explicitly deploys an immutable container image. Version and Tag are ignored if SHA is set. Deprecated: use 'image' instead. The image digest can be specified as part of the image URL. 
shards integer EXPERIMENTAL: Number of shards to distribute targets onto. Number of replicas multiplied by shards is the total number of Pods created. Note that scaling down shards will not reshard data onto remaining instances, it must be manually moved. Increasing shards will not reshard data either but it will continue to be available from the same instances. To query globally use Thanos sidecar and Thanos querier or remote write data to a central location. Sharding is done on the content of the address target meta-label. storage object Storage spec to specify how storage shall be used. tag string Tag of Prometheus container image to be deployed. Defaults to the value of version . Version is ignored if Tag is set. Deprecated: use 'image' instead. The image tag can be specified as part of the image URL. thanos object Thanos configuration allows configuring various aspects of a Prometheus server in a Thanos environment. This section is experimental, it may change significantly without deprecation notice in any release. This is experimental and may change significantly without backward compatibility in any release. tolerations array If specified, the pod's tolerations. tolerations[] object The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. topologySpreadConstraints array If specified, the pod's topology spread constraints. topologySpreadConstraints[] object TopologySpreadConstraint specifies how to spread matching pods among the given topology. tsdb object Defines the runtime reloadable configuration of the timeseries database (TSDB). version string Version of Prometheus to be deployed. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the prometheus container, that are generated as a result of StorageSpec objects. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. volumes array Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. volumes[] object Volume represents a named volume in a pod that may be accessed by any container in the pod. walCompression boolean Enable compression of the write-ahead log using Snappy. This flag is only available in versions of Prometheus >= 2.11.0. web object Defines the web command line flags when starting Prometheus. 6.1.2. .spec.additionalAlertManagerConfigs Description AdditionalAlertManagerConfigs allows specifying a key of a Secret containing additional Prometheus AlertManager configurations. AlertManager configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alertmanager_config . As AlertManager configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible AlertManager configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. 
name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.3. .spec.additionalAlertRelabelConfigs Description AdditionalAlertRelabelConfigs allows specifying a key of a Secret containing additional Prometheus alert relabel configurations. Alert relabel configurations specified are appended to the configurations generated by the Prometheus Operator. Alert relabel configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#alert_relabel_configs . As alert relabel configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible alert relabel configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.4. .spec.additionalArgs Description AdditionalArgs allows setting additional arguments for the Prometheus container. It is intended for e.g. activating hidden flags which are not supported by the dedicated configuration options yet. The arguments are passed as-is to the Prometheus container which may cause issues if they are invalid or not supported by the given Prometheus version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument the reconciliation will fail and an error will be logged. Type array 6.1.5. .spec.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile) 6.1.6. .spec.additionalScrapeConfigs Description AdditionalScrapeConfigs allows specifying a key of a Secret containing additional Prometheus scrape configurations. Scrape configurations specified are appended to the configurations generated by the Prometheus Operator. Job configurations specified must have the form as specified in the official Prometheus documentation: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config . As scrape configs are appended, the user is responsible to make sure it is valid. Note that using this feature may expose the possibility to break upgrades of Prometheus. It is advised to review Prometheus release notes to ensure that no incompatible scrape configs are going to break Prometheus after the upgrade. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.7. 
.spec.affinity Description If specified, the pod's scheduling constraints. Type object Property Type Description nodeAffinity object Describes node affinity scheduling rules for the pod. podAffinity object Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). podAntiAffinity object Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). 6.1.8. .spec.affinity.nodeAffinity Description Describes node affinity scheduling rules for the pod. Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). requiredDuringSchedulingIgnoredDuringExecution object If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. 6.1.9. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. Type array 6.1.10. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). Type object Required preference weight Property Type Description preference object A node selector term, associated with the corresponding weight. weight integer Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. 6.1.11. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference Description A node selector term, associated with the corresponding weight. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. 
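The individual requirement fields are described in the subsections that follow. Assembled, a preferred node affinity rule on the Prometheus spec usually takes a shape like this sketch; the node label key is an assumption chosen for illustration:

spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                                # must be in the range 1-100
        preference:
          matchExpressions:
          - key: node-role.kubernetes.io/infra    # assumed node label key
            operator: Exists                      # Exists requires an empty values list

A required rule uses the same matchExpressions structure, placed under requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms instead.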
matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 6.1.12. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions Description A list of node selector requirements by node's labels. Type array 6.1.13. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 6.1.14. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields Description A list of node selector requirements by node's fields. Type array 6.1.15. .spec.affinity.nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution[].preference.matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 6.1.16. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. Type object Required nodeSelectorTerms Property Type Description nodeSelectorTerms array Required. A list of node selector terms. The terms are ORed. nodeSelectorTerms[] object A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. 6.1.17. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms Description Required. A list of node selector terms. The terms are ORed. Type array 6.1.18. 
.spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[] Description A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. Type object Property Type Description matchExpressions array A list of node selector requirements by node's labels. matchExpressions[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchFields array A list of node selector requirements by node's fields. matchFields[] object A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 6.1.19. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions Description A list of node selector requirements by node's labels. Type array 6.1.20. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchExpressions[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 6.1.21. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields Description A list of node selector requirements by node's fields. Type array 6.1.22. .spec.affinity.nodeAffinity.requiredDuringSchedulingIgnoredDuringExecution.nodeSelectorTerms[].matchFields[] Description A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string The label key that the selector applies to. operator string Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. values array (string) An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 6.1.23. .spec.affinity.podAffinity Description Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 6.1.24. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 6.1.25. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 6.1.26. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 6.1.27. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.28. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.29. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.30. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.31. 
.spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.32. .spec.affinity.podAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.33. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 6.1.34. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 6.1.35. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
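The selector requirement properties continue below. Put together, a required pod affinity term typically looks like the following sketch; the app label and its value are placeholders for labels carried by the pods you want Prometheus co-located with:

spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: example-cache                 # assumed label on the target pods
        topologyKey: kubernetes.io/hostname    # co-locate on the same node

Because topologyKey is required and must not be empty, every term needs one.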
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.36. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.37. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.38. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.39. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.40. .spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.41. .spec.affinity.podAntiAffinity Description Describes pod anti-affinity scheduling rules (e.g. 
avoid putting this pod in the same node, zone, etc. as some other pod(s)). Type object Property Type Description preferredDuringSchedulingIgnoredDuringExecution array The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. preferredDuringSchedulingIgnoredDuringExecution[] object The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) requiredDuringSchedulingIgnoredDuringExecution array If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. requiredDuringSchedulingIgnoredDuringExecution[] object Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running 6.1.42. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution Description The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. Type array 6.1.43. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[] Description The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) Type object Required podAffinityTerm weight Property Type Description podAffinityTerm object Required. A pod affinity term, associated with the corresponding weight. weight integer weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 6.1.44. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm Description Required. A pod affinity term, associated with the corresponding weight. Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. 
namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 6.1.45. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.46. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.47. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.48. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
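The remaining selector properties are listed below. A common use of pod anti-affinity here is to spread Prometheus replicas across nodes; in the following sketch the pod label is an assumption and should match whatever labels your Prometheus pods actually carry:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: prometheus   # assumed pod label
          topologyKey: kubernetes.io/hostname

Moving the same pod affinity term under requiredDuringSchedulingIgnoredDuringExecution turns the soft preference into a hard scheduling requirement.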
matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.49. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.50. .spec.affinity.podAntiAffinity.preferredDuringSchedulingIgnoredDuringExecution[].podAffinityTerm.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.51. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution Description If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. Type array 6.1.52. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[] Description Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running Type object Required topologyKey Property Type Description labelSelector object A label query over a set of resources, in this case pods. namespaceSelector object A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. namespaces array (string) namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". topologyKey string This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. 6.1.53. 
.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector Description A label query over a set of resources, in this case pods. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.54. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.55. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.56. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector Description A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.57. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.58. .spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution[].namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.59. .spec.alerting Description Define details regarding alerting. Type object Required alertmanagers Property Type Description alertmanagers array AlertmanagerEndpoints Prometheus should fire alerts against. alertmanagers[] object AlertmanagerEndpoints defines a selection of a single Endpoints object containing alertmanager IPs to fire alerts against. 6.1.60. .spec.alerting.alertmanagers Description AlertmanagerEndpoints Prometheus should fire alerts against. Type array 6.1.61. .spec.alerting.alertmanagers[] Description AlertmanagerEndpoints defines a selection of a single Endpoints object containing alertmanager IPs to fire alerts against. Type object Required name namespace port Property Type Description apiVersion string Version of the Alertmanager API that Prometheus uses to send alerts. It can be "v1" or "v2". authorization object Authorization section for this alertmanager endpoint bearerTokenFile string BearerTokenFile to read from filesystem to use when authenticating to Alertmanager. name string Name of Endpoints object in Namespace. namespace string Namespace of Endpoints object. pathPrefix string Prefix for the HTTP path alerts are pushed to. port integer-or-string Port the Alertmanager API is exposed on. scheme string Scheme to use when firing alerts. timeout string Timeout is a per-target Alertmanager timeout when pushing alerts. tlsConfig object TLS Config to use for alertmanager connection. 6.1.62. .spec.alerting.alertmanagers[].authorization Description Authorization section for this alertmanager endpoint Type object Property Type Description credentials object The secret's key that contains the credentials of the request type string Set the authentication type. Defaults to Bearer, Basic will cause an error 6.1.63. .spec.alerting.alertmanagers[].authorization.credentials Description The secret's key that contains the credentials of the request Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.64. .spec.alerting.alertmanagers[].tlsConfig Description TLS Config to use for alertmanager connection. Type object Property Type Description ca object Struct containing the CA cert to use for the targets. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Struct containing the client cert file for the targets. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 6.1.65. .spec.alerting.alertmanagers[].tlsConfig.ca Description Struct containing the CA cert to use for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.66. 
.spec.alerting.alertmanagers[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.67. .spec.alerting.alertmanagers[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.68. .spec.alerting.alertmanagers[].tlsConfig.cert Description Struct containing the client cert file for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.69. .spec.alerting.alertmanagers[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.70. .spec.alerting.alertmanagers[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.71. .spec.alerting.alertmanagers[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.72. .spec.apiserverConfig Description APIServerConfig allows specifying a host and auth methods to access apiserver. If left empty, Prometheus is assumed to run inside of the cluster and will discover API servers automatically and use the pod's CA certificate and bearer token file at /var/run/secrets/kubernetes.io/serviceaccount/. Type object Required host Property Type Description authorization object Authorization section for accessing apiserver basicAuth object BasicAuth allow an endpoint to authenticate over basic authentication bearerToken string Bearer token for accessing apiserver. bearerTokenFile string File to read bearer token for accessing apiserver. host string Host of apiserver. A valid string consisting of a hostname or IP followed by an optional port number tlsConfig object TLS Config to use for accessing apiserver. 6.1.73. 
.spec.apiserverConfig.authorization Description Authorization section for accessing apiserver Type object Property Type Description credentials object The secret's key that contains the credentials of the request credentialsFile string File to read a secret from, mutually exclusive with Credentials (from SafeAuthorization) type string Set the authentication type. Defaults to Bearer, Basic will cause an error 6.1.74. .spec.apiserverConfig.authorization.credentials Description The secret's key that contains the credentials of the request Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.75. .spec.apiserverConfig.basicAuth Description BasicAuth allow an endpoint to authenticate over basic authentication Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 6.1.76. .spec.apiserverConfig.basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.77. .spec.apiserverConfig.basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.78. .spec.apiserverConfig.tlsConfig Description TLS Config to use for accessing apiserver. Type object Property Type Description ca object Struct containing the CA cert to use for the targets. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Struct containing the client cert file for the targets. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 6.1.79. .spec.apiserverConfig.tlsConfig.ca Description Struct containing the CA cert to use for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.80. .spec.apiserverConfig.tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. 
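The selector's name and optional properties follow. As a sketch of how the apiserverConfig pieces fit together — the host, Secret, and ConfigMap names are assumptions, not values that exist by default — a configuration using bearer-token authorization and a custom CA might look like:

spec:
  apiserverConfig:
    host: api.example.com:6443      # placeholder apiserver host (hostname or IP, optional port)
    authorization:
      type: Bearer                  # the default; Basic causes an error
      credentials:
        name: apiserver-token       # assumed Secret name
        key: token                  # assumed key holding the bearer token
    tlsConfig:
      ca:
        configMap:
          name: apiserver-ca        # assumed ConfigMap name
          key: ca.crt               # assumed key holding the CA certificate

When apiserverConfig is left empty, Prometheus is assumed to run in-cluster and discovers the API servers automatically, as noted in the section description.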
name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.81. .spec.apiserverConfig.tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.82. .spec.apiserverConfig.tlsConfig.cert Description Struct containing the client cert file for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.83. .spec.apiserverConfig.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.84. .spec.apiserverConfig.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.85. .spec.apiserverConfig.tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.86. .spec.arbitraryFSAccessThroughSMs Description ArbitraryFSAccessThroughSMs configures whether configuration based on a service monitor can access arbitrary files on the file system of the Prometheus container e.g. bearer token files. Type object Property Type Description deny boolean 6.1.87. .spec.containers Description Containers allows injecting additional containers or modifying operator generated containers. This can be used to allow adding an authentication proxy to a Prometheus pod or to change the behavior of an operator generated container. Containers described here modify an operator generated container if they share the same name and modifications are done via a strategic merge patch. The current container names are: prometheus , config-reloader , and thanos-sidecar . Overriding containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 6.1.88. .spec.containers[] Description A single application container that you want to run within a pod. 
Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated.
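The remaining container properties continue below. In this CRD the field is most often used to patch one of the operator-generated containers by name (prometheus, config-reloader, or thanos-sidecar, as listed above); the resource values in this sketch are illustrative only:

spec:
  containers:
  - name: config-reloader     # same name as an operator-generated container, so this is a strategic merge patch
    resources:
      requests:
        cpu: 100m
        memory: 50Mi

As the field description warns, overriding operator-generated containers is outside the scope of what the maintainers support and may break without notice.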
ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. 
workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 6.1.89. .spec.containers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 6.1.90. .spec.containers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 6.1.91. .spec.containers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 6.1.92. .spec.containers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.93. .spec.containers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 6.1.94. .spec.containers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 6.1.95. 
.spec.containers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.96. .spec.containers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 6.1.97. .spec.containers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 6.1.98. .spec.containers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 6.1.99. .spec.containers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 6.1.100. .spec.containers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 6.1.101. .spec.containers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. 
Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 6.1.102. .spec.containers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.103. .spec.containers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.104. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.105. .spec.containers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.106. .spec.containers[].lifecycle.postStart.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.107. .spec.containers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. 
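As an illustration of the postStart and preStop handlers described in the preceding sections, a container entry under spec.containers might carry hooks such as the following sketch; the container name, image, command, path, and port are assumptions, not values required by the API.

containers:
- name: auth-proxy                      # hypothetical sidecar
  image: registry.example.com/auth-proxy:1.0
  lifecycle:
    postStart:
      exec:
        # Executed directly, not through a shell, as noted in the exec description.
        command: ["/bin/touch", "/var/run/started"]
    preStop:
      httpGet:
        path: /shutdown                 # hypothetical endpoint
        port: 8081
        scheme: HTTP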
More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 6.1.108. .spec.containers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.109. .spec.containers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.110. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.111. .spec.containers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.112. .spec.containers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.113. .spec.containers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 6.1.114. .spec.containers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.115. .spec.containers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 6.1.116. .spec.containers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.117. .spec.containers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.118. 
.spec.containers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.119. .spec.containers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.120. .spec.containers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 6.1.121. .spec.containers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 6.1.122. .spec.containers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. 
Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 6.1.123. .spec.containers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.124. .spec.containers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 6.1.125. .spec.containers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.126. .spec.containers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.127. .spec.containers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.128. .spec.containers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.129. .spec.containers[].resources Description Compute Resources required by this container. Cannot be updated. 
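The probe fields described above, together with the resources field, accept the standard Kubernetes container settings; a hedged sketch for an injected container follows, with the port, path, and all numeric values chosen only for illustration.

containers:
- name: auth-proxy                      # hypothetical sidecar
  image: registry.example.com/auth-proxy:1.0
  livenessProbe:
    httpGet:
      path: /healthz                    # hypothetical endpoint
      port: 8081
    initialDelaySeconds: 10
    periodSeconds: 10
    failureThreshold: 3
  readinessProbe:
    tcpSocket:
      port: 8081
    timeoutSeconds: 1
  resources:
    requests:
      cpu: 50m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi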
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.130. .spec.containers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. 
If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 6.1.131. .spec.containers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 6.1.132. .spec.containers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 6.1.133. .spec.containers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 6.1.134. .spec.containers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 
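For illustration, the container-level securityContext fields described in the preceding sections might be set on an injected container as in the sketch below; the UID, the dropped capabilities, and the seccomp profile choice are assumptions, not defaults of the API.

containers:
- name: auth-proxy                      # hypothetical sidecar
  image: registry.example.com/auth-proxy:1.0
  securityContext:
    allowPrivilegeEscalation: false
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 65534                    # hypothetical non-root UID
    capabilities:
      drop:
      - ALL
    seccompProfile:
      type: RuntimeDefault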
Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 6.1.135. .spec.containers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). 
This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 6.1.136. .spec.containers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.137. .spec.containers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 6.1.138. .spec.containers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.139. .spec.containers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.140. .spec.containers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.141. .spec.containers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.142. .spec.containers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 6.1.143. .spec.containers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 6.1.144. 
.spec.containers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 6.1.145. .spec.containers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references USD(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 6.1.146. .spec.excludedFromEnforcement Description List of references to PodMonitor, ServiceMonitor, Probe and PrometheusRule objects to be excluded from enforcing a namespace label of origin. Applies only if enforcedNamespaceLabel set to true. Type array 6.1.147. .spec.excludedFromEnforcement[] Description ObjectReference references a PodMonitor, ServiceMonitor, Probe or PrometheusRule object. Type object Required namespace resource Property Type Description group string Group of the referent. When not specified, it defaults to monitoring.coreos.com name string Name of the referent. When not set, all resources are matched. namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resource string Resource of the referent. 6.1.148. .spec.exemplars Description Exemplars related settings that are runtime reloadable. It requires to enable the exemplar storage feature to be effective. Type object Property Type Description maxSize integer Maximum number of exemplars stored in memory for all series. If not set, Prometheus uses its default value. A value of zero or less than zero disables the storage. 6.1.149. .spec.hostAliases Description Pods' hostAliases configuration Type array 6.1.150. .spec.hostAliases[] Description HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. Type object Required hostnames ip Property Type Description hostnames array (string) Hostnames for the above IP address. ip string IP address of the host file entry. 6.1.151. .spec.imagePullSecrets Description An optional list of references to secrets in the same namespace to use for pulling prometheus and alertmanager images from registries see http://kubernetes.io/docs/user-guide/images#specifying-imagepullsecrets-on-a-pod Type array 6.1.152. .spec.imagePullSecrets[] Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.153. 
.spec.initContainers Description InitContainers allows adding initContainers to the pod definition. Those can be used to e.g. fetch secrets for injection into the Prometheus configuration from external sources. Any errors during the execution of an initContainer will lead to a restart of the Pod. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ InitContainers described here modify an operator generated init containers if they share the same name and modifications are done via a strategic merge patch. The current init container name is: init-config-reloader . Overriding init containers is entirely outside the scope of what the maintainers will support and by doing so, you accept that this behaviour may break at any time without notice. Type array 6.1.154. .spec.initContainers[] Description A single application container that you want to run within a pod. Type object Required name Property Type Description args array (string) Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell command array (string) Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references USD(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell env array List of environment variables to set in the container. Cannot be updated. env[] object EnvVar represents an environment variable present in a Container. envFrom array List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. envFrom[] object EnvFromSource represents the source of a set of ConfigMaps image string Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. imagePullPolicy string Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. 
More info: https://kubernetes.io/docs/concepts/containers/images#updating-images lifecycle object Actions that the management system should take in response to container lifecycle events. Cannot be updated. livenessProbe object Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes name string Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. ports array List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. ports[] object ContainerPort represents a network port in a single container. readinessProbe object Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes resources object Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ securityContext object SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ startupProbe object StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes stdin boolean Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. stdinOnce boolean Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false terminationMessagePath string Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. 
terminationMessagePolicy string Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. tty boolean Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. volumeDevices array volumeDevices is the list of block devices to be used by the container. volumeDevices[] object volumeDevice describes a mapping of a raw block device within a container. volumeMounts array Pod volumes to mount into the container's filesystem. Cannot be updated. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. workingDir string Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. 6.1.155. .spec.initContainers[].env Description List of environment variables to set in the container. Cannot be updated. Type array 6.1.156. .spec.initContainers[].env[] Description EnvVar represents an environment variable present in a Container. Type object Required name Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references USD(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double are reduced to a single USD, which allows for escaping the USD(VAR_NAME) syntax: i.e. "(VAR_NAME)" will produce the string literal "USD(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom object Source for the environment variable's value. Cannot be used if value is not empty. 6.1.157. .spec.initContainers[].env[].valueFrom Description Source for the environment variable's value. Cannot be used if value is not empty. Type object Property Type Description configMapKeyRef object Selects a key of a ConfigMap. fieldRef object Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. secretKeyRef object Selects a key of a secret in the pod's namespace 6.1.158. .spec.initContainers[].env[].valueFrom.configMapKeyRef Description Selects a key of a ConfigMap. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.159. 
.spec.initContainers[].env[].valueFrom.fieldRef Description Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'] , metadata.annotations['<KEY>'] , spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 6.1.160. .spec.initContainers[].env[].valueFrom.resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 6.1.161. .spec.initContainers[].env[].valueFrom.secretKeyRef Description Selects a key of a secret in the pod's namespace Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.162. .spec.initContainers[].envFrom Description List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. Type array 6.1.163. .spec.initContainers[].envFrom[] Description EnvFromSource represents the source of a set of ConfigMaps Type object Property Type Description configMapRef object The ConfigMap to select from prefix string An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. secretRef object The Secret to select from 6.1.164. .spec.initContainers[].envFrom[].configMapRef Description The ConfigMap to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap must be defined 6.1.165. .spec.initContainers[].envFrom[].secretRef Description The Secret to select from Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret must be defined 6.1.166. .spec.initContainers[].lifecycle Description Actions that the management system should take in response to container lifecycle events. Cannot be updated. Type object Property Type Description postStart object PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. 
Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks preStop object PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks 6.1.167. .spec.initContainers[].lifecycle.postStart Description PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 6.1.168. .spec.initContainers[].lifecycle.postStart.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.169. .spec.initContainers[].lifecycle.postStart.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.170. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.171. .spec.initContainers[].lifecycle.postStart.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.172. .spec.initContainers[].lifecycle.postStart.tcpSocket Description Deprecated. 
TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.173. .spec.initContainers[].lifecycle.preStop Description PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks Type object Property Type Description exec object Exec specifies the action to take. httpGet object HTTPGet specifies the http request to perform. tcpSocket object Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. 6.1.174. .spec.initContainers[].lifecycle.preStop.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.175. .spec.initContainers[].lifecycle.preStop.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.176. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.177. .spec.initContainers[].lifecycle.preStop.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.178. .spec.initContainers[].lifecycle.preStop.tcpSocket Description Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. 
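To illustrate the lifecycle hook fields above: they belong to the shared Kubernetes Container schema, but plain init containers must run to completion, so the API server normally rejects lifecycle hooks and probes on them. The following hedged sketch therefore attaches the hooks to a hypothetical container under spec.containers; the image, command, and drain endpoint are placeholders:

spec:
  containers:
  - name: sidecar-proxy                    # hypothetical injected container
    image: registry.example.com/proxy:latest
    lifecycle:
      postStart:
        exec:
          # exec'd directly; /bin/sh is called explicitly to get shell behavior
          command: ["/bin/sh", "-c", "echo started > /tmp/ready"]
      preStop:
        httpGet:
          path: /quitquitquit              # hypothetical drain endpoint
          port: 8080
          scheme: HTTP
          httpHeaders:
          - name: X-Drain-Reason
            value: prestop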
There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.179. .spec.initContainers[].livenessProbe Description Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 6.1.180. .spec.initContainers[].livenessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.181. .spec.initContainers[].livenessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. 
service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 6.1.182. .spec.initContainers[].livenessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.183. .spec.initContainers[].livenessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.184. .spec.initContainers[].livenessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.185. .spec.initContainers[].livenessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.186. .spec.initContainers[].ports Description List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255 . Cannot be updated. Type array 6.1.187. .spec.initContainers[].ports[] Description ContainerPort represents a network port in a single container. Type object Required containerPort Property Type Description containerPort integer Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. hostIP string What host IP to bind the external port to. hostPort integer Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. name string If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. protocol string Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". 6.1.188. .spec.initContainers[].readinessProbe Description Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. 
failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 6.1.189. .spec.initContainers[].readinessProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.190. .spec.initContainers[].readinessProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 6.1.191. .spec.initContainers[].readinessProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. 
httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.192. .spec.initContainers[].readinessProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.193. .spec.initContainers[].readinessProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.194. .spec.initContainers[].readinessProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.195. .spec.initContainers[].resources Description Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.196. .spec.initContainers[].securityContext Description SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ Type object Property Type Description allowPrivilegeEscalation boolean AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. capabilities object The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. privileged boolean Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. procMount string procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. readOnlyRootFilesystem boolean Whether this container has a read-only root filesystem. Default is false. 
Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 6.1.197. .spec.initContainers[].securityContext.capabilities Description The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description add array (string) Added capabilities drop array (string) Removed capabilities 6.1.198. .spec.initContainers[].securityContext.seLinuxOptions Description The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 6.1.199. .spec.initContainers[].securityContext.seccompProfile Description The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. 
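As a sketch of the securityContext fields above, an init container could be locked down as follows. The values are illustrative only and may be supplied or overridden by the cluster's security context constraints; the container name and image are hypothetical:

spec:
  initContainers:
  - name: init-config                      # hypothetical init container
    image: registry.example.com/tools:latest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      capabilities:
        drop: ["ALL"]                      # drop every default capability
      seccompProfile:
        type: RuntimeDefault               # use the container runtime's default profile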
Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 6.1.200. .spec.initContainers[].securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 6.1.201. .spec.initContainers[].startupProbe Description StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes Type object Property Type Description exec object Exec specifies the action to take. failureThreshold integer Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. grpc object GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. httpGet object HTTPGet specifies the http request to perform. initialDelaySeconds integer Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes periodSeconds integer How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. successThreshold integer Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. tcpSocket object TCPSocket specifies an action involving a TCP port. terminationGracePeriodSeconds integer Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. timeoutSeconds integer Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes 6.1.202. .spec.initContainers[].startupProbe.exec Description Exec specifies the action to take. Type object Property Type Description command array (string) Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 6.1.203. .spec.initContainers[].startupProbe.grpc Description GRPC specifies an action involving a GRPC port. This is a beta field and requires enabling GRPCContainerProbe feature gate. Type object Required port Property Type Description port integer Port number of the gRPC service. Number must be in the range 1 to 65535. service string Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md ). If this is not specified, the default behavior is defined by gRPC. 6.1.204. .spec.initContainers[].startupProbe.httpGet Description HTTPGet specifies the http request to perform. Type object Required port Property Type Description host string Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. httpHeaders array Custom headers to set in the request. HTTP allows repeated headers. httpHeaders[] object HTTPHeader describes a custom header to be used in HTTP probes path string Path to access on the HTTP server. port integer-or-string Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. scheme string Scheme to use for connecting to the host. Defaults to HTTP. 6.1.205. .spec.initContainers[].startupProbe.httpGet.httpHeaders Description Custom headers to set in the request. HTTP allows repeated headers. Type array 6.1.206. 
.spec.initContainers[].startupProbe.httpGet.httpHeaders[] Description HTTPHeader describes a custom header to be used in HTTP probes Type object Required name value Property Type Description name string The header field name value string The header field value 6.1.207. .spec.initContainers[].startupProbe.tcpSocket Description TCPSocket specifies an action involving a TCP port. Type object Required port Property Type Description host string Optional: Host name to connect to, defaults to the pod IP. port integer-or-string Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 6.1.208. .spec.initContainers[].volumeDevices Description volumeDevices is the list of block devices to be used by the container. Type array 6.1.209. .spec.initContainers[].volumeDevices[] Description volumeDevice describes a mapping of a raw block device within a container. Type object Required devicePath name Property Type Description devicePath string devicePath is the path inside of the container that the device will be mapped to. name string name must match the name of a persistentVolumeClaim in the pod 6.1.210. .spec.initContainers[].volumeMounts Description Pod volumes to mount into the container's filesystem. Cannot be updated. Type array 6.1.211. .spec.initContainers[].volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 6.1.212. .spec.podMetadata Description PodMetadata configures Labels and Annotations which are propagated to the prometheus pods. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 6.1.213.
.spec.podMonitorNamespaceSelector Description Namespace's labels to match for PodMonitor discovery. If nil, only check own namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.214. .spec.podMonitorNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.215. .spec.podMonitorNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.216. .spec.podMonitorSelector Description Experimental PodMonitors to be selected for target discovery. Deprecated: if neither this nor serviceMonitorSelector are specified, configuration is unmanaged. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.217. .spec.podMonitorSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.218. .spec.podMonitorSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.219. .spec.probeNamespaceSelector Description Experimental Namespaces to be selected for Probe discovery. If nil, only check own namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. 
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.220. .spec.probeNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.221. .spec.probeNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.222. .spec.probeSelector Description Experimental Probes to be selected for target discovery. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.223. .spec.probeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.224. .spec.probeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.225. .spec.prometheusRulesExcludedFromEnforce Description PrometheusRulesExcludedFromEnforce - list of prometheus rules to be excluded from enforcing of adding namespace labels. Works only if enforcedNamespaceLabel set to true. Make sure both ruleNamespace and ruleName are set for each pair. Deprecated: use excludedFromEnforcement instead. Type array 6.1.226. .spec.prometheusRulesExcludedFromEnforce[] Description PrometheusRuleExcludeConfig enables users to configure excluded PrometheusRule names and their namespaces to be ignored while enforcing namespace label for alerts and metrics. 
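The selector fields in the preceding subsections use standard Kubernetes label selectors. In the following sketch the label keys and values are hypothetical; PodMonitors and Probes carrying the prometheus: example label are discovered in any namespace labeled with one of the listed team values:

spec:
  podMonitorSelector:
    matchLabels:
      prometheus: example
  podMonitorNamespaceSelector:
    matchExpressions:
    - key: team                            # hypothetical namespace label
      operator: In
      values: ["frontend", "backend"]
  probeSelector:
    matchLabels:
      prometheus: example

Leaving a namespace selector unset (nil) restricts discovery to the Prometheus object's own namespace, while an empty selector ({}) matches every namespace.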
Type object Required ruleName ruleNamespace Property Type Description ruleName string RuleName - name of excluded rule ruleNamespace string RuleNamespace - namespace of excluded rule 6.1.227. .spec.query Description QuerySpec defines the query command line flags when starting Prometheus. Type object Property Type Description lookbackDelta string The delta difference allowed for retrieving metrics during expression evaluations. maxConcurrency integer Number of concurrent queries that can be run at once. maxSamples integer Maximum number of samples a single query can load into memory. Note that queries will fail if they would load more samples than this into memory, so this also limits the number of samples a query can return. timeout string Maximum time a query may take before being aborted. 6.1.228. .spec.remoteRead Description remoteRead is the list of remote read configurations. Type array 6.1.229. .spec.remoteRead[] Description RemoteReadSpec defines the configuration for Prometheus to read back samples from a remote endpoint. Type object Required url Property Type Description authorization object Authorization section for remote read basicAuth object BasicAuth for the URL. bearerToken string Bearer token for remote read. bearerTokenFile string File to read bearer token for remote read. headers object (string) Custom HTTP headers to be sent along with each remote read request. Be aware that headers that are set by Prometheus itself can't be overwritten. Only valid in Prometheus versions 2.26.0 and newer. name string The name of the remote read queue, it must be unique if specified. The name is used in metrics and logging in order to differentiate read configurations. Only valid in Prometheus versions 2.15.0 and newer. oauth2 object OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. proxyUrl string Optional ProxyURL. readRecent boolean Whether reads should be made for queries for time ranges that the local storage should have complete data for. remoteTimeout string Timeout for requests to the remote read endpoint. requiredMatchers object (string) An optional list of equality matchers which have to be present in a selector to query the remote read endpoint. tlsConfig object TLS Config to use for remote read. url string The URL of the endpoint to query from. 6.1.230. .spec.remoteRead[].authorization Description Authorization section for remote read Type object Property Type Description credentials object The secret's key that contains the credentials of the request credentialsFile string File to read a secret from, mutually exclusive with Credentials (from SafeAuthorization) type string Set the authentication type. Defaults to Bearer, Basic will cause an error 6.1.231. .spec.remoteRead[].authorization.credentials Description The secret's key that contains the credentials of the request Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.232. .spec.remoteRead[].basicAuth Description BasicAuth for the URL. Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication.
username object The secret in the service monitor namespace that contains the username for authentication. 6.1.233. .spec.remoteRead[].basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.234. .spec.remoteRead[].basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.235. .spec.remoteRead[].oauth2 Description OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 6.1.236. .spec.remoteRead[].oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.237. .spec.remoteRead[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.238. .spec.remoteRead[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.239. .spec.remoteRead[].oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.240. .spec.remoteRead[].tlsConfig Description TLS Config to use for remote read. 
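For illustration, a single remote read entry combining the fields above might look like the following sketch; the endpoint URL and the Secret holding the credentials are hypothetical:

spec:
  remoteRead:
  - url: https://metrics.example.com/api/v1/read    # hypothetical remote endpoint
    name: example-read                               # requires Prometheus >= 2.15.0
    readRecent: false
    remoteTimeout: 30s
    basicAuth:
      username:
        name: remote-read-credentials                # Secret in the Prometheus object's namespace
        key: username
      password:
        name: remote-read-credentials
        key: password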
Type object Property Type Description ca object Struct containing the CA cert to use for the targets. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Struct containing the client cert file for the targets. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 6.1.241. .spec.remoteRead[].tlsConfig.ca Description Struct containing the CA cert to use for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.242. .spec.remoteRead[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.243. .spec.remoteRead[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.244. .spec.remoteRead[].tlsConfig.cert Description Struct containing the client cert file for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.245. .spec.remoteRead[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.246. .spec.remoteRead[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.247. .spec.remoteRead[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.248. 
.spec.remoteWrite Description remoteWrite is the list of remote write configurations. Type array 6.1.249. .spec.remoteWrite[] Description RemoteWriteSpec defines the configuration to write samples from Prometheus to a remote endpoint. Type object Required url Property Type Description authorization object Authorization section for remote write basicAuth object BasicAuth for the URL. bearerToken string Bearer token for remote write. bearerTokenFile string File to read bearer token for remote write. headers object (string) Custom HTTP headers to be sent along with each remote write request. Be aware that headers that are set by Prometheus itself can't be overwritten. Only valid in Prometheus versions 2.25.0 and newer. metadataConfig object MetadataConfig configures the sending of series metadata to the remote storage. name string The name of the remote write queue, it must be unique if specified. The name is used in metrics and logging in order to differentiate queues. Only valid in Prometheus versions 2.15.0 and newer. oauth2 object OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. proxyUrl string Optional ProxyURL. queueConfig object QueueConfig allows tuning of the remote write queue parameters. remoteTimeout string Timeout for requests to the remote write endpoint. sendExemplars boolean Enables sending of exemplars over remote write. Note that exemplar-storage itself must be enabled using the enableFeature option for exemplars to be scraped in the first place. Only valid in Prometheus versions 2.27.0 and newer. sigv4 object Sigv4 allows to configures AWS's Signature Verification 4 tlsConfig object TLS Config to use for remote write. url string The URL of the endpoint to send samples to. writeRelabelConfigs array The list of remote write relabel configurations. writeRelabelConfigs[] object RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs 6.1.250. .spec.remoteWrite[].authorization Description Authorization section for remote write Type object Property Type Description credentials object The secret's key that contains the credentials of the request credentialsFile string File to read a secret from, mutually exclusive with Credentials (from SafeAuthorization) type string Set the authentication type. Defaults to Bearer, Basic will cause an error 6.1.251. .spec.remoteWrite[].authorization.credentials Description The secret's key that contains the credentials of the request Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.252. .spec.remoteWrite[].basicAuth Description BasicAuth for the URL. Type object Property Type Description password object The secret in the service monitor namespace that contains the password for authentication. username object The secret in the service monitor namespace that contains the username for authentication. 6.1.253. .spec.remoteWrite[].basicAuth.password Description The secret in the service monitor namespace that contains the password for authentication. 
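As a hedged sketch of a remote write entry using the authorization and header fields above; the receiver URL, tenant header, and Secret are hypothetical:

spec:
  remoteWrite:
  - url: https://metrics.example.com/api/v1/write    # hypothetical receiver
    name: example-write
    remoteTimeout: 30s
    headers:
      X-Scope-OrgID: example-tenant                  # custom headers require Prometheus >= 2.25.0
    authorization:
      type: Bearer
      credentials:
        name: remote-write-token                     # Secret holding the bearer token
        key: token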
Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.254. .spec.remoteWrite[].basicAuth.username Description The secret in the service monitor namespace that contains the username for authentication. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.255. .spec.remoteWrite[].metadataConfig Description MetadataConfig configures the sending of series metadata to the remote storage. Type object Property Type Description send boolean Whether metric metadata is sent to the remote storage or not. sendInterval string How frequently metric metadata is sent to the remote storage. 6.1.256. .spec.remoteWrite[].oauth2 Description OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. Type object Required clientId clientSecret tokenUrl Property Type Description clientId object The secret or configmap containing the OAuth2 client id clientSecret object The secret containing the OAuth2 client secret endpointParams object (string) Parameters to append to the token URL scopes array (string) OAuth2 scopes used for the token request tokenUrl string The URL to fetch the token from 6.1.257. .spec.remoteWrite[].oauth2.clientId Description The secret or configmap containing the OAuth2 client id Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.258. .spec.remoteWrite[].oauth2.clientId.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.259. .spec.remoteWrite[].oauth2.clientId.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.260. .spec.remoteWrite[].oauth2.clientSecret Description The secret containing the OAuth2 client secret Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.261. 
.spec.remoteWrite[].queueConfig Description QueueConfig allows tuning of the remote write queue parameters. Type object Property Type Description batchSendDeadline string BatchSendDeadline is the maximum time a sample will wait in buffer. capacity integer Capacity is the number of samples to buffer per shard before we start dropping them. maxBackoff string MaxBackoff is the maximum retry delay. maxRetries integer MaxRetries is the maximum number of times to retry a batch on recoverable errors. maxSamplesPerSend integer MaxSamplesPerSend is the maximum number of samples per send. maxShards integer MaxShards is the maximum number of shards, i.e. amount of concurrency. minBackoff string MinBackoff is the initial retry delay. Gets doubled for every retry. minShards integer MinShards is the minimum number of shards, i.e. amount of concurrency. retryOnRateLimit boolean Retry upon receiving a 429 status code from the remote-write storage. This is an experimental feature and might change in the future. 6.1.262. .spec.remoteWrite[].sigv4 Description Sigv4 configures AWS Signature Version 4 authentication. Type object Property Type Description accessKey object AccessKey is the AWS API key. If blank, the environment variable AWS_ACCESS_KEY_ID is used. profile string Profile is the named AWS profile used to authenticate. region string Region is the AWS region. If blank, the region from the default credentials chain is used. roleArn string RoleArn is the AWS role ARN used to authenticate. secretKey object SecretKey is the AWS API secret. If blank, the environment variable AWS_SECRET_ACCESS_KEY is used. 6.1.263. .spec.remoteWrite[].sigv4.accessKey Description AccessKey is the AWS API key. If blank, the environment variable AWS_ACCESS_KEY_ID is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.264. .spec.remoteWrite[].sigv4.secretKey Description SecretKey is the AWS API secret. If blank, the environment variable AWS_SECRET_ACCESS_KEY is used. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.265. .spec.remoteWrite[].tlsConfig Description TLS Config to use for remote write. Type object Property Type Description ca object Struct containing the CA cert to use for the targets. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Struct containing the client cert file for the targets. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 6.1.266. .spec.remoteWrite[].tlsConfig.ca Description Struct containing the CA cert to use for the targets.
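As a hedged sketch tying together the queueConfig, sigv4, and tlsConfig tables above, the following remoteWrite entry shows how these blocks nest. The endpoint URL, AWS region, Secret and ConfigMap names, and the numeric tuning values are assumptions for illustration only.

```yaml
spec:
  remoteWrite:
    - url: https://example-remote-write.us-east-1.amazonaws.com/api/v1/remote_write  # assumed endpoint
      sigv4:
        region: us-east-1            # assumed region
        accessKey:
          name: aws-credentials      # assumed Secret holding the AWS API key
          key: access-key
        secretKey:
          name: aws-credentials
          key: secret-key
      queueConfig:
        capacity: 10000              # samples buffered per shard before dropping
        maxShards: 50
        maxSamplesPerSend: 2000
        batchSendDeadline: 5s
        minBackoff: 30ms
        maxBackoff: 5s
      tlsConfig:
        ca:
          configMap:
            name: remote-write-ca    # assumed ConfigMap with the CA certificate
            key: ca.crt
```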
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.267. .spec.remoteWrite[].tlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.268. .spec.remoteWrite[].tlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.269. .spec.remoteWrite[].tlsConfig.cert Description Struct containing the client cert file for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.270. .spec.remoteWrite[].tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.271. .spec.remoteWrite[].tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.272. .spec.remoteWrite[].tlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.273. .spec.remoteWrite[].writeRelabelConfigs Description The list of remote write relabel configurations. Type array 6.1.274. .spec.remoteWrite[].writeRelabelConfigs[] Description RelabelConfig allows dynamic rewriting of the label set, being applied to samples before ingestion. It defines <metric_relabel_configs> -section of Prometheus configuration. More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs Type object Property Type Description action string Action to perform based on regex matching. Default is 'replace'. uppercase and lowercase actions require Prometheus >= 2.36. modulus integer Modulus to take of the hash of the source label values. 
regex string Regular expression against which the extracted value is matched. Default is '(.*)' replacement string Replacement value against which a regex replace is performed if the regular expression matches. Regex capture groups are available. Default is '$1' separator string Separator placed between concatenated source label values. Default is ';'. sourceLabels array (string) The source labels select values from existing labels. Their content is concatenated using the configured separator and matched against the configured regular expression for the replace, keep, and drop actions. targetLabel string Label to which the resulting value is written in a replace action. It is mandatory for replace actions. Regex capture groups are available. 6.1.275. .spec.resources Description Define resources requests and limits for single Pods. Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.276. .spec.ruleNamespaceSelector Description Namespaces to be selected for PrometheusRules discovery. If unspecified, only the same namespace as the Prometheus object is in is used. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.277. .spec.ruleNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.278. .spec.ruleNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.279. .spec.ruleSelector Description A selector to select which PrometheusRules to mount for loading alerting/recording rules from. Until (excluding) Prometheus Operator v0.24.0 Prometheus Operator will migrate any legacy rule ConfigMaps to PrometheusRule custom resources selected by RuleSelector. Make sure it does not match any config maps that you do not want to be migrated. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed.
matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.280. .spec.ruleSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.281. .spec.ruleSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.282. .spec.rules Description /--rules.*/ command-line arguments. Type object Property Type Description alert object /--rules.alert.*/ command-line arguments 6.1.283. .spec.rules.alert Description /--rules.alert.*/ command-line arguments Type object Property Type Description forGracePeriod string Minimum duration between alert and restored 'for' state. This is maintained only for alerts with configured 'for' time greater than grace period. forOutageTolerance string Max time to tolerate prometheus outage for restoring 'for' state of alert. resendDelay string Minimum amount of time to wait before resending an alert to Alertmanager. 6.1.284. .spec.securityContext Description SecurityContext holds pod-level security attributes and common container settings. This defaults to the default PodSecurityContext. Type object Property Type Description fsGroup integer A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. fsGroupChangePolicy string fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows. runAsGroup integer The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. runAsNonRoot boolean Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. runAsUser integer The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seLinuxOptions object The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. seccompProfile object The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. supplementalGroups array (integer) A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. sysctls array Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. sysctls[] object Sysctl defines a kernel parameter to be set windowsOptions object The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. 6.1.285. .spec.securityContext.seLinuxOptions Description The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. Type object Property Type Description level string Level is SELinux level label that applies to the container. role string Role is a SELinux role label that applies to the container. type string Type is a SELinux type label that applies to the container. user string User is a SELinux user label that applies to the container. 6.1.286. .spec.securityContext.seccompProfile Description The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. Type object Required type Property Type Description localhostProfile string localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". type string type indicates which kind of seccomp profile will be applied. 
Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. 6.1.287. .spec.securityContext.sysctls Description Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. Type array 6.1.288. .spec.securityContext.sysctls[] Description Sysctl defines a kernel parameter to be set Type object Required name value Property Type Description name string Name of a property to set value string Value of a property to set 6.1.289. .spec.securityContext.windowsOptions Description The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. Type object Property Type Description gmsaCredentialSpec string GMSACredentialSpec is where the GMSA admission webhook ( https://github.com/kubernetes-sigs/windows-gmsa ) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. gmsaCredentialSpecName string GMSACredentialSpecName is the name of the GMSA credential spec to use. hostProcess boolean HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. runAsUserName string The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 6.1.290. .spec.serviceMonitorNamespaceSelector Description Namespace's labels to match for ServiceMonitor discovery. If nil, only check own namespace. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.291. .spec.serviceMonitorNamespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.292. .spec.serviceMonitorNamespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. 
operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.293. .spec.serviceMonitorSelector Description ServiceMonitors to be selected for target discovery. Deprecated: if neither this nor podMonitorSelector are specified, configuration is unmanaged. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.294. .spec.serviceMonitorSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.295. .spec.serviceMonitorSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.296. .spec.storage Description Storage spec to specify how storage shall be used. Type object Property Type Description disableMountSubPath boolean Deprecated: subPath usage will be disabled by default in a future release, this option will become unnecessary. DisableMountSubPath allows to remove any subPath usage in volume mounts. emptyDir object EmptyDirVolumeSource to be used by the Prometheus StatefulSets. If specified, used in place of any volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir ephemeral object EphemeralVolumeSource to be used by the Prometheus StatefulSets. This is a beta field in k8s 1.21, for lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes volumeClaimTemplate object A PVC spec to be used by the Prometheus StatefulSets. 6.1.297. .spec.storage.emptyDir Description EmptyDirVolumeSource to be used by the Prometheus StatefulSets. If specified, used in place of any volumeClaimTemplate. More info: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir 6.1.298. .spec.storage.ephemeral Description EphemeralVolumeSource to be used by the Prometheus StatefulSets. This is a beta field in k8s 1.21, for lower versions, starting with k8s 1.19, it requires enabling the GenericEphemeralVolume feature gate. More info: https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/#generic-ephemeral-volumes Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 6.1.299. .spec.storage.ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 6.1.300.
.spec.storage.ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 6.1.301. .spec.storage.ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 6.1.302. 
.spec.storage.ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.303. .spec.storage.ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.304. .spec.storage.ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.305. 
.spec.storage.ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.306. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.307. .spec.storage.ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.308. .spec.storage.volumeClaimTemplate Description A PVC spec to be used by the Prometheus StatefulSets. Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata object EmbeddedMetadata contains metadata relevant to an EmbeddedResource. spec object Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims status object Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims 6.1.309. .spec.storage.volumeClaimTemplate.metadata Description EmbeddedMetadata contains metadata relevant to an EmbeddedResource. Type object Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. 
More info: http://kubernetes.io/docs/user-guide/labels name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names 6.1.310. .spec.storage.volumeClaimTemplate.spec Description Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 6.1.311. 
.spec.storage.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.312. .spec.storage.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.313. .spec.storage.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.314. 
.spec.storage.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.315. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.316. .spec.storage.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.317. .spec.storage.volumeClaimTemplate.status Description Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResources integer-or-string allocatedResources is the storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity integer-or-string capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. conditions[] object PersistentVolumeClaimCondition contails details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. resizeStatus string resizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. 6.1.318. 
.spec.storage.volumeClaimTemplate.status.conditions Description conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array 6.1.319. .spec.storage.volumeClaimTemplate.status.conditions[] Description PersistentVolumeClaimCondition contains details about the state of a PVC Type object Required status type Property Type Description lastProbeTime string lastProbeTime is the time we probed the condition. lastTransitionTime string lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, short, machine-understandable string that gives the reason for the condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string PersistentVolumeClaimConditionType is a valid value of PersistentVolumeClaimCondition.Type 6.1.320. .spec.thanos Description Thanos configuration allows configuring various aspects of a Prometheus server in a Thanos environment. This section is experimental, it may change significantly without deprecation notice in any release. This is experimental and may change significantly without backward compatibility in any release. Type object Property Type Description additionalArgs array AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container which may cause issues if they are invalid or not supported by the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument the reconciliation will fail and an error will be logged. additionalArgs[] object Argument as part of the AdditionalArgs list. baseImage string Thanos base image if other than default. Deprecated: use 'image' instead. grpcListenLocal boolean If true, the Thanos sidecar listens on the loopback interface for the gRPC endpoints. It has no effect if listenLocal is true. grpcServerTlsConfig object GRPCServerTLSConfig configures the TLS parameters for the gRPC server providing the StoreAPI. Note: Currently only the CAFile, CertFile, and KeyFile fields are supported. Maps to the '--grpc-server-tls-*' CLI args. httpListenLocal boolean If true, the Thanos sidecar listens on the loopback interface for the HTTP endpoints. It has no effect if listenLocal is true. image string Image if specified has precedence over baseImage, tag and sha combinations. Specifying the version is still necessary to ensure the Prometheus Operator knows what version of Thanos is being configured. listenLocal boolean If true, the Thanos sidecar listens on the loopback interface for the HTTP and gRPC endpoints. It takes precedence over grpcListenLocal and httpListenLocal. Deprecated: use grpcListenLocal and httpListenLocal instead. logFormat string LogFormat for Thanos sidecar to be configured with. logLevel string LogLevel for Thanos sidecar to be configured with. minTime string MinTime for Thanos sidecar to be configured with. Option can be a constant time in RFC3339 format or time duration relative to current time, such as -1d or 2h45m. Valid duration units are ms, s, m, h, d, w, y. objectStorageConfig object ObjectStorageConfig configures object storage in Thanos. Alternative to ObjectStorageConfigFile, and lower order priority.
objectStorageConfigFile string ObjectStorageConfigFile specifies the path of the object storage configuration file. When used alongside ObjectStorageConfig, ObjectStorageConfigFile takes precedence. readyTimeout string ReadyTimeout is the maximum time Thanos sidecar will wait for Prometheus to start, for example 10m. resources object Resources defines the resource requirements for the Thanos sidecar. If not provided, no requests/limits will be set. sha string SHA of Thanos container image to be deployed. Defaults to the value of version. Similar to a tag, but the SHA explicitly deploys an immutable container image. Version and Tag are ignored if SHA is set. Deprecated: use 'image' instead. The image digest can be specified as part of the image URL. tag string Tag of Thanos sidecar container image to be deployed. Defaults to the value of version. Version is ignored if Tag is set. Deprecated: use 'image' instead. The image tag can be specified as part of the image URL. tracingConfig object TracingConfig configures tracing in Thanos. This is an experimental feature, it may change in any upcoming release in a breaking way. tracingConfigFile string TracingConfigFile specifies the path of the tracing configuration file. When used alongside TracingConfig, TracingConfigFile takes precedence. version string Version describes the version of Thanos to use. volumeMounts array VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the thanos-sidecar container. volumeMounts[] object VolumeMount describes a mounting of a Volume within a container. 6.1.321. .spec.thanos.additionalArgs Description AdditionalArgs allows setting additional arguments for the Thanos container. The arguments are passed as-is to the Thanos container which may cause issues if they are invalid or not supported by the given Thanos version. In case of an argument conflict (e.g. an argument which is already set by the operator itself) or when providing an invalid argument the reconciliation will fail and an error will be logged. Type array 6.1.322. .spec.thanos.additionalArgs[] Description Argument as part of the AdditionalArgs list. Type object Required name Property Type Description name string Name of the argument, e.g. "scrape.discovery-reload-interval". value string Argument value, e.g. 30s. Can be empty for name-only arguments (e.g. --storage.tsdb.no-lockfile) 6.1.323. .spec.thanos.grpcServerTlsConfig Description GRPCServerTLSConfig configures the TLS parameters for the gRPC server providing the StoreAPI. Note: Currently only the CAFile, CertFile, and KeyFile fields are supported. Maps to the '--grpc-server-tls-*' CLI args. Type object Property Type Description ca object Struct containing the CA cert to use for the targets. caFile string Path to the CA cert in the Prometheus container to use for the targets. cert object Struct containing the client cert file for the targets. certFile string Path to the client cert file in the Prometheus container for the targets. insecureSkipVerify boolean Disable target certificate validation. keyFile string Path to the client key file in the Prometheus container for the targets. keySecret object Secret containing the client key file for the targets. serverName string Used to verify the hostname for the targets. 6.1.324. .spec.thanos.grpcServerTlsConfig.ca Description Struct containing the CA cert to use for the targets.
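To show how the Thanos fields described above fit together, here is a hedged sketch of a sidecar configuration. The Thanos version, the thanos-objstore Secret and its thanos.yaml key, the extra argument, and the TLS file paths are assumptions for illustration and must match your environment and Thanos release.

```yaml
spec:
  thanos:
    version: v0.28.0                     # assumed Thanos version
    objectStorageConfig:
      name: thanos-objstore              # assumed Secret containing a Thanos bucket configuration
      key: thanos.yaml
    additionalArgs:
      # Passed to the sidecar as-is; the flag must be valid for the Thanos version in use.
      - name: shipper.upload-compacted
    grpcServerTlsConfig:
      certFile: /etc/thanos/tls/tls.crt  # assumed paths of certificates mounted into the container
      keyFile: /etc/thanos/tls/tls.key
      caFile: /etc/thanos/tls/ca.crt
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
```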
Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.325. .spec.thanos.grpcServerTlsConfig.ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.326. .spec.thanos.grpcServerTlsConfig.ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.327. .spec.thanos.grpcServerTlsConfig.cert Description Struct containing the client cert file for the targets. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.328. .spec.thanos.grpcServerTlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.329. .spec.thanos.grpcServerTlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.330. .spec.thanos.grpcServerTlsConfig.keySecret Description Secret containing the client key file for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.331. .spec.thanos.objectStorageConfig Description ObjectStorageConfig configures object storage in Thanos. Alternative to ObjectStorageConfigFile, and lower order priority. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.332. .spec.thanos.resources Description Resources defines the resource requirements for the Thanos sidecar. 
If not provided, no requests/limits will be set Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.333. .spec.thanos.tracingConfig Description TracingConfig configures tracing in Thanos. This is an experimental feature, it may change in any upcoming release in a breaking way. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.334. .spec.thanos.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the thanos-sidecar container. Type array 6.1.335. .spec.thanos.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 6.1.336. .spec.tolerations Description If specified, the pod's tolerations. Type array 6.1.337. .spec.tolerations[] Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 6.1.338. .spec.topologySpreadConstraints Description If specified, the pod's topology spread constraints. Type array 6.1.339. .spec.topologySpreadConstraints[] Description TopologySpreadConstraint specifies how to spread matching pods among the given topology. Type object Required maxSkew topologyKey whenUnsatisfiable Property Type Description labelSelector object LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. matchLabelKeys array (string) MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. maxSkew integer MaxSkew describes the degree to which pods may be unevenly distributed. When whenUnsatisfiable=DoNotSchedule , it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When whenUnsatisfiable=ScheduleAnyway , it is used to give higher precedence to topologies that satisfy it. It's a required field. Default value is 1 and 0 is not allowed. minDomains integer MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats "global minimum" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so "global minimum" is treated as 0. 
In this situation, a new pod with the same labelSelector cannot be scheduled, because the computed skew will be 3 (3 - 0) if the new pod is scheduled to any of the three zones, which would violate MaxSkew. This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default). nodeAffinityPolicy string NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. If this value is nil, the behavior is equivalent to the Honor policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. nodeTaintsPolicy string NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. If this value is nil, the behavior is equivalent to the Ignore policy. This is an alpha-level feature enabled by the NodeInclusionPolicyInPodTopologySpread feature flag. topologyKey string TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each <key, value> as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. whenUnsatisfiable string WhenUnsatisfiable indicates how to deal with a pod if it doesn't satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won't make it more imbalanced. It's a required field. 6.1.340. .spec.topologySpreadConstraints[].labelSelector Description LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.341. .spec.topologySpreadConstraints[].labelSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.342. .spec.topologySpreadConstraints[].labelSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.343. .spec.tsdb Description Defines the runtime reloadable configuration of the timeseries database (TSDB). Type object Property Type Description outOfOrderTimeWindow string Configures how old an out-of-order/out-of-bounds sample can be w.r.t. the TSDB max time. An out-of-order/out-of-bounds sample is ingested into the TSDB as long as the timestamp of the sample is >= (TSDB.MaxTime - outOfOrderTimeWindow). Out of order ingestion is an experimental feature and requires Prometheus >= v2.39.0. 6.1.344. .spec.volumeMounts Description VolumeMounts allows configuration of additional VolumeMounts on the output StatefulSet definition. VolumeMounts specified will be appended to other VolumeMounts in the prometheus container, that are generated as a result of StorageSpec objects. Type array 6.1.345. .spec.volumeMounts[] Description VolumeMount describes a mounting of a Volume within a container. Type object Required mountPath name Property Type Description mountPath string Path within the container at which the volume should be mounted. Must not contain ':'. mountPropagation string mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. name string This must match the Name of a Volume. readOnly boolean Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. subPath string Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). subPathExpr string Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. 6.1.346. .spec.volumes Description Volumes allows configuration of additional volumes on the output StatefulSet definition. Volumes specified will be appended to other volumes that are generated as a result of StorageSpec objects. Type array 6.1.347. .spec.volumes[] Description Volume represents a named volume in a pod that may be accessed by any container in the pod.
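The scheduling and storage fields documented in the sections above (tolerations, topology spread constraints, the TSDB out-of-order window, and additional volumes and volume mounts) are all plain fields on the Prometheus spec. A minimal illustrative sketch, assuming a hypothetical taint key, a hypothetical pod label, and a hypothetical ConfigMap named extra-config, might look like this:

spec:
  tolerations:
    - key: dedicated                          # hypothetical taint key
      operator: Equal
      value: monitoring
      effect: NoSchedule
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app.kubernetes.io/name: prometheus  # hypothetical pod label
  tsdb:
    outOfOrderTimeWindow: 30m                 # requires Prometheus >= v2.39.0
  volumes:
    - name: extra-config
      configMap:
        name: extra-config                    # hypothetical ConfigMap
  volumeMounts:
    - name: extra-config                      # must match the volume name above
      mountPath: /etc/prometheus/extra        # hypothetical mount path
      readOnly: true

As described above, the volumeMounts entry is appended to the mounts of the prometheus container, and the volume is appended to the volumes generated for the output StatefulSet.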
Type object Required name Property Type Description awsElasticBlockStore object awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk object azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile object azureFile represents an Azure File Service mount on the host and bind mount to the pod. cephfs object cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder object cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md configMap object configMap represents a configMap that should populate this volume csi object csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). downwardAPI object downwardAPI represents downward API about the pod that should populate this volume emptyDir object emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir ephemeral object ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. fc object fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume object flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker object flocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running gcePersistentDisk object gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk gitRepo object gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. glusterfs object glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath object hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. 
This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. iscsi object iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md name string name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names nfs object nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs persistentVolumeClaim object persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims photonPersistentDisk object photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume object portworxVolume represents a portworx volume attached and mounted on kubelets host machine projected object projected items for all in one resources secrets, configmaps, and downward API quobyte object quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd object rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO object scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. secret object secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret storageos object storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. vsphereVolume object vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 6.1.348. .spec.volumes[].awsElasticBlockStore Description awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore Type object Required volumeID Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). readOnly boolean readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore volumeID string volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore 6.1.349. 
.spec.volumes[].azureDisk Description azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. Type object Required diskName diskURI Property Type Description cachingMode string cachingMode is the Host Caching mode: None, Read Only, Read Write. diskName string diskName is the Name of the data disk in the blob storage diskURI string diskURI is the URI of data disk in the blob storage fsType string fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. kind string kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 6.1.350. .spec.volumes[].azureFile Description azureFile represents an Azure File Service mount on the host and bind mount to the pod. Type object Required secretName shareName Property Type Description readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretName string secretName is the name of secret that contains Azure Storage Account Name and Key shareName string shareName is the azure share Name 6.1.351. .spec.volumes[].cephfs Description cephFS represents a Ceph FS mount on the host that shares a pod's lifetime Type object Required monitors Property Type Description monitors array (string) monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it path string path is Optional: Used as the mounted root, rather than the full Ceph tree, default is / readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretFile string secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it secretRef object secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it user string user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it 6.1.352. .spec.volumes[].cephfs.secretRef Description secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.353. .spec.volumes[].cinder Description cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md Type object Required volumeID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. 
More info: https://examples.k8s.io/mysql-cinder-pd/README.md readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md secretRef object secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. volumeID string volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md 6.1.354. .spec.volumes[].cinder.secretRef Description secretRef is optional: points to a secret object containing parameters used to connect to OpenStack. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.355. .spec.volumes[].configMap Description configMap represents a configMap that should populate this volume Type object Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 6.1.356. .spec.volumes[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 6.1.357. .spec.volumes[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. 
path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 6.1.358. .spec.volumes[].csi Description csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). Type object Required driver Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef object nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. 6.1.359. .spec.volumes[].csi.nodePublishSecretRef Description nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.360. .spec.volumes[].downwardAPI Description downwardAPI represents downward API about the pod that should populate this volume Type object Property Type Description defaultMode integer Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array Items is a list of downward API volume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 6.1.361. .spec.volumes[].downwardAPI.items Description Items is a list of downward API volume file Type array 6.1.362. .spec.volumes[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits.
If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 6.1.363. .spec.volumes[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 6.1.364. .spec.volumes[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 6.1.365. .spec.volumes[].emptyDir Description emptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir Type object Property Type Description medium string medium represents what type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir sizeLimit integer-or-string sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir 6.1.366. .spec.volumes[].ephemeral Description ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time. 
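As a minimal illustrative sketch of such a generic ephemeral volume (the volume name, label, and StorageClass are hypothetical; the volumeClaimTemplate fields are detailed in the sections that follow), an entry under spec.volumes might look like this:

volumes:
  - name: scratch                             # hypothetical volume name
    ephemeral:
      volumeClaimTemplate:
        metadata:
          labels:
            app.kubernetes.io/part-of: prometheus   # labels copied to the generated PVC
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: standard          # hypothetical StorageClass
          resources:
            requests:
              storage: 10Gi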
Type object Property Type Description volumeClaimTemplate object Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. 6.1.367. .spec.volumes[].ephemeral.volumeClaimTemplate Description Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. Type object Required spec Property Type Description metadata object May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. spec object The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. 6.1.368. .spec.volumes[].ephemeral.volumeClaimTemplate.metadata Description May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. Type object 6.1.369. .spec.volumes[].ephemeral.volumeClaimTemplate.spec Description The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have.
More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. resources object resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources selector object selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. 6.1.370. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSource Description dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.371. 
.spec.volumes[].ephemeral.volumeClaimTemplate.spec.dataSourceRef Description dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 6.1.372. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.resources Description resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources Type object Property Type Description limits integer-or-string Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests integer-or-string Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 6.1.373. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector Description selector is a label query over volumes to consider for binding. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.374. .spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.375. 
.spec.volumes[].ephemeral.volumeClaimTemplate.spec.selector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.376. .spec.volumes[].fc Description fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine lun integer lun is Optional: FC target lun number readOnly boolean readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. targetWWNs array (string) targetWWNs is Optional: FC target worldwide names (WWNs) wwids array (string) wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. 6.1.377. .spec.volumes[].flexVolume Description flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. Type object Required driver Property Type Description driver string driver is the name of the driver to use for this volume. fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. options object (string) options is Optional: this field holds extra command options if any. readOnly boolean readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. 6.1.378. .spec.volumes[].flexVolume.secretRef Description secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.379. .spec.volumes[].flocker Description flocker represents a Flocker volume attached to a kubelet's host machine. 
This depends on the Flocker control service being running Type object Property Type Description datasetName string datasetName is Name of the dataset stored as metadata name on the dataset for Flocker should be considered as deprecated datasetUUID string datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset 6.1.380. .spec.volumes[].gcePersistentDisk Description gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk Type object Required pdName Property Type Description fsType string fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine partition integer partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk pdName string pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk 6.1.381. .spec.volumes[].gitRepo Description gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container. Type object Required repository Property Type Description directory string directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. repository string repository is the URL revision string revision is the commit hash for the specified revision. 6.1.382. .spec.volumes[].glusterfs Description glusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md Type object Required endpoints path Property Type Description endpoints string endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod path string path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod readOnly boolean readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod 6.1.383. .spec.volumes[].hostPath Description hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. 
Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write. Type object Required path Property Type Description path string path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath type string type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath 6.1.384. .spec.volumes[].iscsi Description iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md Type object Required iqn lun targetPortal Property Type Description chapAuthDiscovery boolean chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication chapAuthSession boolean chapAuthSession defines whether support iSCSI Session CHAP authentication fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine initiatorName string initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface <target portal>:<volume name> will be created for the connection. iqn string iqn is the target iSCSI Qualified Name. iscsiInterface string iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). lun integer lun represents iSCSI Target Lun number. portals array (string) portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. secretRef object secretRef is the CHAP Secret for iSCSI target and initiator authentication targetPortal string targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). 6.1.385. .spec.volumes[].iscsi.secretRef Description secretRef is the CHAP Secret for iSCSI target and initiator authentication Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.386. .spec.volumes[].nfs Description nfs represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs Type object Required path server Property Type Description path string path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs readOnly boolean readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs server string server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs 6.1.387. 
.spec.volumes[].persistentVolumeClaim Description persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims Type object Required claimName Property Type Description claimName string claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims readOnly boolean readOnly Will force the ReadOnly setting in VolumeMounts. Default false. 6.1.388. .spec.volumes[].photonPersistentDisk Description photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine Type object Required pdID Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. pdID string pdID is the ID that identifies Photon Controller persistent disk 6.1.389. .spec.volumes[].portworxVolume Description portworxVolume represents a portworx volume attached and mounted on kubelets host machine Type object Required volumeID Property Type Description fsType string fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. volumeID string volumeID uniquely identifies a Portworx volume 6.1.390. .spec.volumes[].projected Description projected items for all in one resources secrets, configmaps, and downward API Type object Property Type Description defaultMode integer defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. sources array sources is the list of volume projections sources[] object Projection that may be projected along with other supported volume types 6.1.391. .spec.volumes[].projected.sources Description sources is the list of volume projections Type array 6.1.392. .spec.volumes[].projected.sources[] Description Projection that may be projected along with other supported volume types Type object Property Type Description configMap object configMap information about the configMap data to project downwardAPI object downwardAPI information about the downwardAPI data to project secret object secret information about the secret data to project serviceAccountToken object serviceAccountToken is information about the serviceAccountToken data to project 6.1.393. .spec.volumes[].projected.sources[].configMap Description configMap information about the configMap data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. 
If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional specify whether the ConfigMap or its keys must be defined 6.1.394. .spec.volumes[].projected.sources[].configMap.items Description items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 6.1.395. .spec.volumes[].projected.sources[].configMap.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 6.1.396. .spec.volumes[].projected.sources[].downwardAPI Description downwardAPI information about the downwardAPI data to project Type object Property Type Description items array Items is a list of DownwardAPIVolume file items[] object DownwardAPIVolumeFile represents information to create the file containing the pod field 6.1.397. .spec.volumes[].projected.sources[].downwardAPI.items Description Items is a list of DownwardAPIVolume file Type array 6.1.398. .spec.volumes[].projected.sources[].downwardAPI.items[] Description DownwardAPIVolumeFile represents information to create the file containing the pod field Type object Required path Property Type Description fieldRef object Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. mode integer Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string Required: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..' resourceFieldRef object Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. 6.1.399. 
.spec.volumes[].projected.sources[].downwardAPI.items[].fieldRef Description Required: Selects a field of the pod: only annotations, labels, name and namespace are supported. Type object Required fieldPath Property Type Description apiVersion string Version of the schema the FieldPath is written in terms of, defaults to "v1". fieldPath string Path of the field to select in the specified API version. 6.1.400. .spec.volumes[].projected.sources[].downwardAPI.items[].resourceFieldRef Description Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported. Type object Required resource Property Type Description containerName string Container name: required for volumes, optional for env vars divisor integer-or-string Specifies the output format of the exposed resources, defaults to "1" resource string Required: resource to select 6.1.401. .spec.volumes[].projected.sources[].secret Description secret information about the secret data to project Type object Property Type Description items array items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. items[] object Maps a string key to a path within a volume. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean optional field specify whether the Secret or its key must be defined 6.1.402. .spec.volumes[].projected.sources[].secret.items Description items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 6.1.403. .spec.volumes[].projected.sources[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 6.1.404. 
.spec.volumes[].projected.sources[].serviceAccountToken Description serviceAccountToken is information about the serviceAccountToken data to project Type object Required path Property Type Description audience string audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. expirationSeconds integer expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. path string path is the path relative to the mount point of the file to project the token into. 6.1.405. .spec.volumes[].quobyte Description quobyte represents a Quobyte mount on the host that shares a pod's lifetime Type object Required registry volume Property Type Description group string group to map volume access to Default is no group readOnly boolean readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. registry string registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes tenant string tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin user string user to map volume access to Defaults to serivceaccount user volume string volume is a string that references an already created Quobyte volume by name. 6.1.406. .spec.volumes[].rbd Description rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md Type object Required image monitors Property Type Description fsType string fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine image string image is the rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it keyring string keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it monitors array (string) monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it pool string pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it readOnly boolean readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it secretRef object secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it user string user is the rados user name. Default is admin. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it 6.1.407. .spec.volumes[].rbd.secretRef Description secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.408. .spec.volumes[].scaleIO Description scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. Type object Required gateway secretRef system Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". gateway string gateway is the host address of the ScaleIO API Gateway. protectionDomain string protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. readOnly boolean readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. sslEnabled boolean sslEnabled Flag enable/disable SSL communication with Gateway, default false storageMode string storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. storagePool string storagePool is the ScaleIO Storage Pool associated with the protection domain. system string system is the name of the storage system as configured in ScaleIO. volumeName string volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. 6.1.409. .spec.volumes[].scaleIO.secretRef Description secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.410. .spec.volumes[].secret Description secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret Type object Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. 
items[] object Maps a string key to a path within a volume. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 6.1.411. .spec.volumes[].secret.items Description items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. Type array 6.1.412. .spec.volumes[].secret.items[] Description Maps a string key to a path within a volume. Type object Required key path Property Type Description key string key is the key to project. mode integer mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. path string path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. 6.1.413. .spec.volumes[].storageos Description storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. Type object Property Type Description fsType string fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. readOnly boolean readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. secretRef object secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. volumeName string volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. volumeNamespace string volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. 6.1.414. .spec.volumes[].storageos.secretRef Description secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. Type object Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? 6.1.415. .spec.volumes[].vsphereVolume Description vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine Type object Required volumePath Property Type Description fsType string fsType is filesystem type to mount. 
Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. storagePolicyID string storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. storagePolicyName string storagePolicyName is the storage Policy Based Management (SPBM) profile name. volumePath string volumePath is the path that identifies vSphere volume vmdk 6.1.416. .spec.web Description Defines the web command line flags when starting Prometheus. Type object Property Type Description httpConfig object Defines HTTP parameters for web server. pageTitle string The prometheus web page title tlsConfig object Defines the TLS parameters for HTTPS. 6.1.417. .spec.web.httpConfig Description Defines HTTP parameters for web server. Type object Property Type Description headers object List of headers that can be added to HTTP responses. http2 boolean Enable HTTP/2 support. Note that HTTP/2 is only supported with TLS. When TLSConfig is not configured, HTTP/2 will be disabled. Whenever the value of the field changes, a rolling update will be triggered. 6.1.418. .spec.web.httpConfig.headers Description List of headers that can be added to HTTP responses. Type object Property Type Description contentSecurityPolicy string Set the Content-Security-Policy header to HTTP responses. Unset if blank. strictTransportSecurity string Set the Strict-Transport-Security header to HTTP responses. Unset if blank. Please make sure that you use this with care as this header might force browsers to load Prometheus and the other applications hosted on the same domain and subdomains over HTTPS. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security xContentTypeOptions string Set the X-Content-Type-Options header to HTTP responses. Unset if blank. Accepted value is nosniff. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Content-Type-Options xFrameOptions string Set the X-Frame-Options header to HTTP responses. Unset if blank. Accepted values are deny and sameorigin. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options xXSSProtection string Set the X-XSS-Protection header to all responses. Unset if blank. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-XSS-Protection 6.1.419. .spec.web.tlsConfig Description Defines the TLS parameters for HTTPS. Type object Required cert keySecret Property Type Description cert object Contains the TLS certificate for the server. cipherSuites array (string) List of supported cipher suites for TLS versions up to TLS 1.2. If empty, Go default cipher suites are used. Available cipher suites are documented in the go documentation: https://golang.org/pkg/crypto/tls/#pkg-constants clientAuthType string Server policy for client authentication. Maps to ClientAuth Policies. For more detail on clientAuth options: https://golang.org/pkg/crypto/tls/#ClientAuthType client_ca object Contains the CA certificate for client certificate authentication to the server. curvePreferences array (string) Elliptic curves that will be used in an ECDHE handshake, in preference order. Available curves are documented in the go documentation: https://golang.org/pkg/crypto/tls/#CurveID keySecret object Secret containing the TLS key for the server. maxVersion string Maximum TLS version that is acceptable. Defaults to TLS13. minVersion string Minimum TLS version that is acceptable. Defaults to TLS12. 
preferServerCipherSuites boolean Controls whether the server selects the client's most preferred cipher suite, or the server's most preferred cipher suite. If true then the server's preference, as expressed in the order of elements in cipherSuites, is used. 6.1.420. .spec.web.tlsConfig.cert Description Contains the TLS certificate for the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.421. .spec.web.tlsConfig.cert.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.422. .spec.web.tlsConfig.cert.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.423. .spec.web.tlsConfig.client_ca Description Contains the CA certificate for client certificate authentication to the server. Type object Property Type Description configMap object ConfigMap containing data to use for the targets. secret object Secret containing data to use for the targets. 6.1.424. .spec.web.tlsConfig.client_ca.configMap Description ConfigMap containing data to use for the targets. Type object Required key Property Type Description key string The key to select. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the ConfigMap or its key must be defined 6.1.425. .spec.web.tlsConfig.client_ca.secret Description Secret containing data to use for the targets. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.426. .spec.web.tlsConfig.keySecret Description Secret containing the TLS key for the server. Type object Required key Property Type Description key string The key of the secret to select from. Must be a valid secret key. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? optional boolean Specify whether the Secret or its key must be defined 6.1.427. .status Description Most recent observed status of the Prometheus cluster. Read-only. 
More info: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#spec-and-status Type object Required availableReplicas paused replicas unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this Prometheus deployment. conditions array The current state of the Prometheus deployment. conditions[] object PrometheusCondition represents the state of the resources associated with the Prometheus resource. paused boolean Represents whether any actions on the underlying managed objects are being performed. Only delete actions will be performed. replicas integer Total number of non-terminated pods targeted by this Prometheus deployment (their labels match the selector). shardStatuses array The list has one entry per shard. Each entry provides a summary of the shard status. shardStatuses[] object unavailableReplicas integer Total number of unavailable pods targeted by this Prometheus deployment. updatedReplicas integer Total number of non-terminated pods targeted by this Prometheus deployment that have the desired version spec. 6.1.428. .status.conditions Description The current state of the Prometheus deployment. Type array 6.1.429. .status.conditions[] Description PrometheusCondition represents the state of the resources associated with the Prometheus resource. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string Human-readable message indicating details for the condition's last transition. observedGeneration integer ObservedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string Reason for the condition's last transition. status string status of the condition. type string Type of the condition being reported. 6.1.430. .status.shardStatuses Description The list has one entry per shard. Each entry provides a summary of the shard status. Type array 6.1.431. .status.shardStatuses[] Description Type object Required availableReplicas replicas shardID unavailableReplicas updatedReplicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this shard. replicas integer Total number of pods targeted by this shard. shardID string Identifier of the shard. unavailableReplicas integer Total number of unavailable pods targeted by this shard. updatedReplicas integer Total number of non-terminated pods targeted by this shard that have the desired spec. 6.2. 
API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/prometheuses GET : list objects of kind Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses DELETE : delete collection of Prometheus GET : list objects of kind Prometheus POST : create Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name} DELETE : delete Prometheus GET : read the specified Prometheus PATCH : partially update the specified Prometheus PUT : replace the specified Prometheus /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/status GET : read status of the specified Prometheus PATCH : partially update status of the specified Prometheus PUT : replace status of the specified Prometheus 6.2.1. /apis/monitoring.coreos.com/v1/prometheuses Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind Prometheus Table 6.2. HTTP responses HTTP code Reponse body 200 - OK PrometheusList schema 401 - Unauthorized Empty 6.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses Table 6.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Prometheus Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Prometheus Table 6.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.8. HTTP responses HTTP code Reponse body 200 - OK PrometheusList schema 401 - Unauthorized Empty HTTP method POST Description create Prometheus Table 6.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.10. Body parameters Parameter Type Description body Prometheus schema Table 6.11. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 201 - Created Prometheus schema 202 - Accepted Prometheus schema 401 - Unauthorized Empty 6.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the Prometheus namespace string object name and auth scope, such as for teams and projects Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete Prometheus Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. 
orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Prometheus Table 6.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.18. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Prometheus Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body Patch schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Prometheus Table 6.22. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.23. Body parameters Parameter Type Description body Prometheus schema Table 6.24. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 201 - Created Prometheus schema 401 - Unauthorized Empty 6.2.4. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses/{name}/status Table 6.25. Global path parameters Parameter Type Description name string name of the Prometheus namespace string object name and auth scope, such as for teams and projects Table 6.26. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Prometheus Table 6.27. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.28. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Prometheus Table 6.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.30. Body parameters Parameter Type Description body Patch schema Table 6.31. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Prometheus Table 6.32. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.33. Body parameters Parameter Type Description body Prometheus schema Table 6.34. HTTP responses HTTP code Reponse body 200 - OK Prometheus schema 201 - Created Prometheus schema 401 - Unauthorized Empty
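The field-by-field tables above can be hard to read in isolation, so here is a minimal, hypothetical Prometheus manifest of the kind you could send to the create endpoint (POST /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheuses) or apply with the oc client. It exercises a few of the fields documented above (.spec.web and a .spec.volumes[] secret volume). The metadata, Secret names, and key names are placeholders rather than product defaults, and a real deployment would set many more fields than are shown here.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: example            # placeholder name
  namespace: monitoring    # placeholder namespace
spec:
  web:
    pageTitle: "Example Prometheus"
    httpConfig:
      headers:
        xFrameOptions: deny            # accepted values per the schema: deny, sameorigin
        xContentTypeOptions: nosniff   # accepted value per the schema: nosniff
    tlsConfig:                         # cert and keySecret are required when tlsConfig is set
      keySecret:
        name: prometheus-web-tls       # hypothetical Secret holding the TLS key
        key: tls.key
      cert:
        secret:
          name: prometheus-web-tls     # same hypothetical Secret holding the certificate
          key: tls.crt
      minVersion: TLS12
  volumes:
  - name: extra-credentials            # secret volume as described in .spec.volumes[].secret
    secret:
      secretName: extra-credentials    # hypothetical Secret in the Prometheus namespace
      defaultMode: 420                 # decimal equivalent of octal 0644
      items:
      - key: token
        path: token.txt
```

A volume declared this way only becomes useful once it is mounted; in practice it is usually paired with a matching spec.volumeMounts entry so that the Prometheus container actually mounts it.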
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring_apis/prometheus-monitoring-coreos-com-v1
|
Appendix A. Applying Custom Configuration to Red Hat Satellite
|
Appendix A. Applying Custom Configuration to Red Hat Satellite When you install and configure Satellite for the first time using satellite-installer, you can specify that the DNS and DHCP configuration files are not to be managed by Puppet by using the installer flags --foreman-proxy-dns-managed=false and --foreman-proxy-dhcp-managed=false. If these flags are not specified during the initial installer run, rerunning the installer, for example to perform an upgrade, overwrites all manual changes. If changes are overwritten, you must run the restore procedure to restore the manual changes. For more information, see Restoring Manual Changes Overwritten by a Puppet Run. To view all installer flags available for custom configuration, run satellite-installer --scenario satellite --full-help. Some Puppet classes are not exposed to the Satellite installer. To manage them manually and prevent the installer from overwriting their values, specify the configuration values by adding entries to the configuration file /etc/foreman-installer/custom-hiera.yaml. This configuration file is in YAML format and consists of one entry per line in the format <puppet class>::<parameter name>: <value>. Configuration values specified in this file persist across installer reruns. Common examples include: For Apache, to set the ServerTokens directive to only return the Product name: To turn off the Apache server signature entirely: The Puppet modules for the Satellite installer are stored under /usr/share/foreman-installer/modules. Check the .pp files (for example, moduleName/manifests/example.pp) to look up the classes, parameters, and values. Alternatively, use the grep command to do keyword searches. Setting some values may have unintended consequences that affect the performance or functionality of Red Hat Satellite. Consider the impact of the changes before you apply them, and test the changes in a non-production environment first. If you do not have a non-production Satellite environment, run the Satellite installer with the --noop and --verbose options. If your changes cause problems, remove the offending lines from custom-hiera.yaml and rerun the Satellite installer. If you have any specific questions about whether a particular value is safe to alter, contact Red Hat support.
|
[
"apache::server_tokens: Prod",
"apache::server_signature: Off"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/installing_satellite_server_in_a_connected_network_environment/applying-custom-configuration_satellite
|
Chapter 7. Deleting a system from Insights inventory
|
Chapter 7. Deleting a system from Insights inventory You can delete a system from the Red Hat Hybrid Cloud Console inventory so that the system is no longer visible in the Red Hat Insights for Red Hat Enterprise Linux Inventory or advisor service Systems list. The Insights client will be unregistered on the system and no longer report data to Red Hat Insights for Red Hat Enterprise Linux. To delete a system, complete the steps in the procedure below that is most relevant to your use case. Procedure 1: Delete using the Insights client Enter the following command on the system command line: Procedure 2: Delete from the Red Hat Satellite 6 UI Log in to the Satellite web UI. Navigate to Insights > Inventory. Select the system profile to be unregistered. Click Actions > Unregister . Procedure 3: Delete using the Red Hat Insights API Use this option only when the actual system is destroyed or reinstalled. If you use the DELETE API without unregistering the client, hosts will reappear the next time the client uploads data. Get the list of system profiles from inventory. If the json_pp command does not exist on the system, install the perl-JSON-PP package. Get the ID of the system from the hosts.json file and confirm system details; for example, "id" : "f59716a6-5d64-4901-b65f-788b1aee25cc". Delete the system profile using the following command:
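If the jq tool is available, the system ID can be pulled out of the same hosts payload in a single step. This is only a hedged sketch: the results and fqdn field names, and the example hostname, are assumptions about the payload shape rather than part of the documented procedure:

# Extract the inventory ID for one host by its FQDN (field names assumed)
curl -sk --user PORTALUSERNAME https://console.redhat.com/api/inventory/v1/hosts | jq -r '.results[] | select(.fqdn=="system1.example.com") | .id'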
|
[
"insights-client --unregister",
"curl -k --user PORTALUSERNAME https://console.redhat.com/api/inventory/v1/hosts | json_pp > hosts.json",
"yum install perl-JSON-PP",
"curl -k --user PORTALUSERNAME https://console.redhat.com/api/inventory/v1/hosts/f59716a6-5d64-4901-b65f-788b1aee25cc",
"curl -k --user PORTALUSERNAME -X \"DELETE\" https://console.redhat.com/api/inventory/v1/hosts/f59716a6-5d64-4901-b65f-788b1aee25cc"
] |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_rhel_configuration_issues_using_the_red_hat_insights_advisor_service_with_fedramp/assembly-adv-assess-deleting-system-inventory
|
Deploying and Managing Streams for Apache Kafka on OpenShift
|
Deploying and Managing Streams for Apache Kafka on OpenShift Red Hat Streams for Apache Kafka 2.7 Deploy and manage Streams for Apache Kafka 2.7 on OpenShift Container Platform
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/index
|
Chapter 2. Understanding authentication
|
Chapter 2. Understanding authentication For users to interact with OpenShift Container Platform, they must first authenticate to the cluster. The authentication layer identifies the user associated with requests to the OpenShift Container Platform API. The authorization layer then uses information about the requesting user to determine if the request is allowed. As an administrator, you can configure authentication for OpenShift Container Platform. 2.1. Users A user in OpenShift Container Platform is an entity that can make requests to the OpenShift Container Platform API. An OpenShift Container Platform User object represents an actor which can be granted permissions in the system by adding roles to them or to their groups. Typically, this represents the account of a developer or administrator that is interacting with OpenShift Container Platform. Several types of users can exist: User type Description Regular users This is the way most interactive OpenShift Container Platform users are represented. Regular users are created automatically in the system upon first login or can be created via the API. Regular users are represented with the User object. Examples: joe alice System users Many of these are created automatically when the infrastructure is defined, mainly for the purpose of enabling the infrastructure to interact with the API securely. They include a cluster administrator (with access to everything), a per-node user, users for use by routers and registries, and various others. Finally, there is an anonymous system user that is used by default for unauthenticated requests. Examples: system:admin system:openshift-registry system:node:node1.example.com Service accounts These are special system users associated with projects; some are created automatically when the project is first created, while project administrators can create more for the purpose of defining access to the contents of each project. Service accounts are represented with the ServiceAccount object. Examples: system:serviceaccount:default:deployer system:serviceaccount:foo:builder Each user must authenticate in some way to access OpenShift Container Platform. API requests with no authentication or invalid authentication are authenticated as requests by the anonymous system user. After authentication, policy determines what the user is authorized to do. 2.2. Groups A user can be assigned to one or more groups , each of which represent a certain set of users. Groups are useful when managing authorization policies to grant permissions to multiple users at once, for example allowing access to objects within a project, versus granting them to users individually. In addition to explicitly defined groups, there are also system groups, or virtual groups , that are automatically provisioned by the cluster. The following default virtual groups are most important: Virtual group Description system:authenticated Automatically associated with all authenticated users. system:authenticated:oauth Automatically associated with all users authenticated with an OAuth access token. system:unauthenticated Automatically associated with all unauthenticated users. 2.3. API authentication Requests to the OpenShift Container Platform API are authenticated using the following methods: OAuth access tokens Obtained from the OpenShift Container Platform OAuth server using the <namespace_route> /oauth/authorize and <namespace_route> /oauth/token endpoints. Sent as an Authorization: Bearer... header. 
Sent as a websocket subprotocol header in the form base64url.bearer.authorization.k8s.io.<base64url-encoded-token> for websocket requests. X.509 client certificates Requires an HTTPS connection to the API server. Verified by the API server against a trusted certificate authority bundle. The API server creates and distributes certificates to controllers to authenticate themselves. Any request with an invalid access token or an invalid certificate is rejected by the authentication layer with a 401 error. If no access token or certificate is presented, the authentication layer assigns the system:anonymous virtual user and the system:unauthenticated virtual group to the request. This allows the authorization layer to determine which requests, if any, an anonymous user is allowed to make. 2.3.1. OpenShift Container Platform OAuth server The OpenShift Container Platform master includes a built-in OAuth server. Users obtain OAuth access tokens to authenticate themselves to the API. When a person requests a new OAuth token, the OAuth server uses the configured identity provider to determine the identity of the person making the request. It then determines what user that identity maps to, creates an access token for that user, and returns the token for use. 2.3.1.1. OAuth token requests Every request for an OAuth token must specify the OAuth client that will receive and use the token. The following OAuth clients are automatically created when starting the OpenShift Container Platform API: OAuth client Usage openshift-browser-client Requests tokens at <namespace_route>/oauth/token/request with a user-agent that can handle interactive logins. [1] openshift-challenging-client Requests tokens with a user-agent that can handle WWW-Authenticate challenges. <namespace_route> refers to the namespace route. This is found by running the following command: USD oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host All requests for OAuth tokens involve a request to <namespace_route>/oauth/authorize . Most authentication integrations place an authenticating proxy in front of this endpoint, or configure OpenShift Container Platform to validate credentials against a backing identity provider. Requests to <namespace_route>/oauth/authorize can come from user-agents that cannot display interactive login pages, such as the CLI. Therefore, OpenShift Container Platform supports authenticating using a WWW-Authenticate challenge in addition to interactive login flows. If an authenticating proxy is placed in front of the <namespace_route>/oauth/authorize endpoint, it sends unauthenticated, non-browser user-agents WWW-Authenticate challenges rather than displaying an interactive login page or redirecting to an interactive login flow. Note To prevent cross-site request forgery (CSRF) attacks against browser clients, Basic authentication challenges are only sent if an X-CSRF-Token header is present on the request. Clients that expect to receive Basic WWW-Authenticate challenges must set this header to a non-empty value. If the authenticating proxy cannot support WWW-Authenticate challenges, or if OpenShift Container Platform is configured to use an identity provider that does not support WWW-Authenticate challenges, you must use a browser to manually obtain a token from <namespace_route>/oauth/token/request . 2.3.1.2. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user.
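As a hedged sketch of impersonation from the CLI, oc can pass the --as and --as-group flags on a request; the project and service account names below are placeholders, and the calling user needs RBAC permission for the impersonate verb:

# List pods as if the request came from a project's deployer service account (names assumed)
oc get pods -n myproject --as=system:serviceaccount:myproject:deployer

# Impersonate a regular user together with a group
oc get nodes --as=joe --as-group=system:authenticated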
For more information, see User impersonation in the Kubernetes documentation. 2.3.1.3. Authentication metrics for Prometheus OpenShift Container Platform captures the following Prometheus system metrics during authentication attempts: openshift_auth_basic_password_count counts the number of oc login user name and password attempts. openshift_auth_basic_password_count_result counts the number of oc login user name and password attempts by result, success or error . openshift_auth_form_password_count counts the number of web console login attempts. openshift_auth_form_password_count_result counts the number of web console login attempts by result, success or error . openshift_auth_password_total counts the total number of oc login and web console login attempts.
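As a rough sketch of reading one of these counters from outside the console, the metrics can be queried through the cluster's Thanos Querier route with the Prometheus HTTP API. The thanos-querier route name, the openshift-monitoring namespace, and the need for a role such as cluster-monitoring-view are assumptions beyond this chapter:

TOKEN=$(oc whoami -t)
HOST=$(oc get route thanos-querier -n openshift-monitoring -o jsonpath='{.spec.host}')
# Break down web console login attempts by result
curl -sk -H "Authorization: Bearer ${TOKEN}" "https://${HOST}/api/v1/query" --data-urlencode 'query=sum by (result) (openshift_auth_form_password_count_result)'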
|
[
"oc get route oauth-openshift -n openshift-authentication -o json | jq .spec.host"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/authentication_and_authorization/understanding-authentication
|
Chapter 7. Installing a cluster on AWS in a restricted network
|
Chapter 7. Installing a cluster on AWS in a restricted network In OpenShift Container Platform version 4.12, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC). 7.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 7.2. About installations in restricted networks In OpenShift Container Platform 4.12, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 7.2.1. 
Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 7.3. About using a custom VPC In OpenShift Container Platform 4.12, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 7.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Amazon VPC console wizard configurations and Work with VPCs and subnets in the AWS documentation for more information on creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone field in the install-config.yaml file. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. 
Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 7.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. 
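Before handing the subnets to the installation program, it can help to confirm the DNS attributes and subnet layout of the existing VPC. A hedged sketch using the AWS CLI follows; the VPC ID is a placeholder and these checks are not part of the documented procedure:

# Confirm that DNS support and DNS hostnames are enabled on the VPC
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames

# List the subnets, their availability zones, and CIDRs for the VPC
aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-0123456789abcdef0 --query 'Subnets[].{Id:SubnetId,AZ:AvailabilityZone,CIDR:CidrBlock}' --output table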
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 7.3.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 7.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 7.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 7.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 7.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. Have the imageContentSources values that were generated during mirror registry creation. Obtain the contents of the certificate for your mirror registry. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the subnets for the VPC to install the cluster in: subnets: - subnet-1 - subnet-2 - subnet-3 Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Make any other modifications to the install-config.yaml file that you require. You can find more information about the available parameters in the Installation configuration parameters section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 7.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 7.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . 
An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. 
worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings platform.aws.lbType Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 7.6.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 7.4. Optional AWS parameters Parameter Description Values compute.platform.aws.amiID The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. compute.platform.aws.iamRole A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. compute.platform.aws.rootVolume.iops The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . compute.platform.aws.rootVolume.size The size in GiB of the root volume. Integer, for example 500 . compute.platform.aws.rootVolume.type The type of the root volume. Valid AWS EBS volume type , such as io1 . compute.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . compute.platform.aws.type The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. compute.platform.aws.zones The availability zones where the installation program creates machines for the compute machine pool. 
If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . compute.aws.region The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. controlPlane.platform.aws.amiID The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. controlPlane.platform.aws.iamRole A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. controlPlane.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . controlPlane.platform.aws.type The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. controlPlane.platform.aws.zones The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . controlPlane.aws.region The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . platform.aws.amiID The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. platform.aws.hostedZone An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . platform.aws.serviceEndpoints.name The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name. platform.aws.serviceEndpoints.url The AWS service endpoint URL. The URL must use the https protocol and the host must trust the certificate. Valid AWS service endpoint URL. 
platform.aws.userTags A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. platform.aws.propagateUserTags A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . platform.aws.subnets If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. Valid subnet IDs. 7.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 7.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 7.6.3. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 12 14 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. 
The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 23 Provide the contents of the certificate file that you used for your mirror registry. 24 Provide the imageContentSources section from the output of the command to mirror the repository. 7.6.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 7.7. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. 
Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 7.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 7.9. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 7.10. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 7.11. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console. After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 7.12. Next steps Validate an installation. Customize your cluster. Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks. If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores. If necessary, you can opt out of remote health reporting.
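Before moving on to the next steps, you can run a quick post-installation sanity check from the host where you exported the kubeconfig. The commands below are a minimal sketch of such a check and are not part of the documented procedure; the exact output depends on your cluster.
oc get clusterversion
oc get nodes
oc get clusteroperators
All cluster Operators should report Available as True and every node should reach the Ready state before you start customizing the cluster.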
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"subnets: - subnet-1 - subnet-2 - subnet-3",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/installing-restricted-networks-aws-installer-provisioned
|
Chapter 4. Setting up Key Archival and Recovery
|
Chapter 4. Setting up Key Archival and Recovery For more information on Key Archival and Recovery, see the Archiving, Recovering, and Rotating Keys section in the Red Hat Certificate System Planning, Installation, and Deployment Guide. This chapter explains how to set up the Key Recovery Authority (KRA), previously known as Data Recovery Manager (DRM), to archive private keys and to recover archived keys for restoring encrypted data. Note This chapter only discusses archiving keys through client-side key generation. Server-side key generation and archival, whether initiated through the TPS or through the CA's End Entity portal, are not discussed here. For information on smart card key recovery, see Section 6.11, "Setting Up Server-side Key Generation". For information on server-side key generation provided at the CA's EE portal, see Section 5.2.2, "Generating CSRs Using Server-Side Key Generation". Note Gemalto SafeNet LunaSA only supports PKI private key extraction in its CKE - Key Export model, and only in non-FIPS mode. The LunaSA Cloning model and the CKE model in FIPS mode do not support PKI private key extraction. When the KRA is installed, it joins a security domain and is paired with the CA. At that time, it is configured to archive and recover private encryption keys. However, if the KRA certificates are issued by an external CA rather than one of the CAs within the security domain, the key archival and recovery process must be set up manually. For more information, see the Manually Setting up Key Archival section in the Red Hat Certificate System Planning, Installation, and Deployment Guide. Note In a cloned environment, it is necessary to set up key archival and recovery manually. For more information, see the Updating CA-KRA Connector Information After Cloning section in the Red Hat Certificate System Planning, Installation, and Deployment Guide. 4.1. Configuring Agent-Approved Key Recovery in the Console Note While the number of key recovery agents can be configured in the Console, the group to use can only be set directly in the CS.cfg file. The Console uses the Key Recovery Authority Agents Group by default. Open the KRA's console. For example: Click the Key Recovery Authority link in the left navigation tree. Enter the number of agents to use to approve key recovery in the Required Number of Agents field. Note For more information on how to configure agent-approved key recovery in the CS.cfg file, see the Configuring Agent-Approved Key Recovery in the Command Line section in the Red Hat Certificate System Planning, Installation, and Deployment Guide.
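For comparison, the command-line method referenced above edits the KRA's CS.cfg file directly. The following sketch assumes a default instance layout under /var/lib/pki/<instance_name> and the default agent group; verify the exact path and parameter values against the Planning, Installation, and Deployment Guide before applying them.
# Stop the KRA instance before editing its configuration
systemctl stop pki-tomcatd@<instance_name>.service
# In /var/lib/pki/<instance_name>/kra/conf/CS.cfg, set for example:
#   kra.noOfRequiredRecoveryAgents=2
#   kra.recoveryAgentGroup=Key Recovery Authority Agents
# Then restart the instance
systemctl start pki-tomcatd@<instance_name>.service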
|
[
"pkiconsole https://server.example.com:8443/kra"
] |
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/key_recovery_authority
|
Chapter 62. Servers and Services
|
Chapter 62. Servers and Services No clear indication of profile activation error in the Tuned service Errors in the Tuned service configuration or errors occurring when loading Tuned profiles are in some cases not shown in the output of the systemctl status tuned command. As a consequence, if errors occur that prevent Tuned from loading, Tuned sometimes enters a state with no profile activated. To view possible error messages, consult the output of the tuned-adm active command and check the contents of the /var/log/tuned/tuned.log file; a minimal example is shown at the end of this chapter. (BZ# 1385838 ) db_hotbackup -c should be used with caution The db_hotbackup command with the -c option must be run by the user that owns the database. If a different user runs the command and the log file reaches its maximum size, a new log file is created that is owned by the user that ran the command, which makes the database unusable for its owner. This note has been added to the db_hotbackup(1) manual page. (BZ# 1460077 ) Setting ListenStream= options in rpcbind.socket causes systemd-logind to fail and SSH connections to be delayed Setting the ListenStream= options in the rpcbind.socket unit file currently causes a failure of the systemd-logind service and a delay in SSH connections that import system users from a NIS database. To work around the problem, remove lines with the ListenStream= option from rpcbind.socket . (BZ#1425758) ReaR recovery process fails on non-UEFI systems with the grub2-efi-x64 package installed Installing the grub2-efi-x64 package, which contains the GRUB2 boot loader for UEFI systems, changes the file /boot/grub2/grubenv into a dead absolute symlink on systems which do not use UEFI firmware. When attempting to recover such a system using the ReaR (Relax and Recover) recovery tool, the process fails and the system is rendered unbootable. To work around this problem, do not install the grub2-efi-x64 package on systems where it is not required (systems without UEFI firmware). (BZ# 1498748 ) ISO images generated by ReaR with Linux TSM fail to work The password store has changed in the Linux TSM (Tivoli Storage Manager) client versions 8.1.2 and above. This means that ISO images generated by ReaR using TSM will not work, because the TSM node password and encryption key will not be included in the ISO file. To fix this problem, add the following line to the /etc/rear/local.conf or /etc/rear/site.conf configuration file: (BZ# 1534646 ) Unexpected problems with the dbus rebase The dbus package rebase with its configuration changes can cause unexpected problems. It is therefore recommended to avoid the following actions: updating only the dbus service, updating only parts of the system, or updating from a graphical session. Instead, reboot after executing the yum update command, because updating several major components, including dbus, without a reboot rarely works as expected. (BZ# 1550582 )
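For the Tuned issue above, the following commands are a minimal example of checking which profile, if any, is active and of inspecting the log for the underlying error; they only restate the diagnostics named in the release note.
tuned-adm active
tail -n 50 /var/log/tuned/tuned.log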
|
[
"COPY_AS_IS_TSM=( /etc/adsm /opt/tivoli/tsm/client /usr/local/ibm/gsk8* )"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/known_issues_servers_and_services
|
Installing Red Hat Developer Hub on Google Kubernetes Engine
|
Installing Red Hat Developer Hub on Google Kubernetes Engine Red Hat Developer Hub 1.4 Red Hat Customer Content Services
|
[
"gcloud container clusters get-credentials <cluster-name> \\ 1 --location=<cluster-location> 2",
"create namespace rhdh-operator",
"-n rhdh-operator create secret docker-registry rhdh-pull-secret --docker-server=registry.redhat.io --docker-username=<user_name> \\ 1 --docker-password=<password> \\ 2 --docker-email=<email> 3",
"cat <<EOF | kubectl -n rhdh-operator apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: redhat-catalog spec: sourceType: grpc image: registry.redhat.io/redhat/redhat-operator-index:v4.17 secrets: - \"rhdh-pull-secret\" displayName: Red Hat Operators EOF",
"cat <<EOF | kubectl apply -n rhdh-operator -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: rhdh-operator-group EOF",
"cat <<EOF | kubectl apply -n rhdh-operator -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: rhdh namespace: rhdh-operator spec: channel: fast installPlanApproval: Automatic name: rhdh source: redhat-catalog sourceNamespace: rhdh-operator startingCSV: rhdh-operator.v1.4.2 EOF",
"-n rhdh-operator get pods -w",
"-n rhdh-operator patch deployment rhdh.fast --patch '{\"spec\":{\"template\":{\"spec\":{\"imagePullSecrets\":[{\"name\":\"rhdh-pull-secret\"}]}}}}' --type=merge",
"-n rhdh-operator edit configmap backstage-default-config",
"db-statefulset.yaml: | apiVersion: apps/v1 kind: StatefulSet --- TRUNCATED --- spec: --- TRUNCATED --- restartPolicy: Always securityContext: # You can assign any random value as fsGroup fsGroup: 2000 serviceAccount: default serviceAccountName: default --- TRUNCATED ---",
"deployment.yaml: | apiVersion: apps/v1 kind: Deployment --- TRUNCATED --- spec: securityContext: # You can assign any random value as fsGroup fsGroup: 3000 automountServiceAccountToken: false --- TRUNCATED ---",
"service.yaml: | apiVersion: v1 kind: Service spec: # NodePort is required for the ALB to route to the Service type: NodePort --- TRUNCATED ---",
"apiVersion: v1 kind: ConfigMap metadata: name: app-config-rhdh data: \"app-config-rhdh.yaml\": | app: title: Red Hat Developer Hub baseUrl: https://<rhdh_domain_name> backend: auth: externalAccess: - type: legacy options: subject: legacy-default-config secret: \"USD{BACKEND_SECRET}\" baseUrl: https://<rhdh_domain_name> cors: origin: https://<rhdh_domain_name>",
"apiVersion: v1 kind: Secret metadata: name: my-rhdh-secrets stringData: # TODO: See https://backstage.io/docs/auth/service-to-service-auth/#setup BACKEND_SECRET: \"xxx\"",
"node-p'require(\"crypto\").randomBytes(24).toString(\"base64\")'",
"patch serviceaccount default -p '{\"imagePullSecrets\": [{\"name\": \"rhdh-pull-secret\"}]}' -n <your_namespace>",
"apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: # This is the name of your Developer Hub instance name: my-rhdh spec: application: imagePullSecrets: - \"rhdh-pull-secret\" route: enabled: false appConfig: configMaps: - name: \"app-config-rhdh\" extraEnvs: secrets: - name: \"my-rhdh-secrets\"",
"apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: <rhdh_certificate_name> spec: domains: - <rhdh_domain_name>",
"apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: <ingress_security_config> spec: sslPolicy: gke-ingress-ssl-policy-https redirectToHttps: enabled: true",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: # TODO: this the name of your Developer Hub Ingress name: my-rhdh annotations: # If the class annotation is not specified it defaults to \"gce\". kubernetes.io/ingress.class: \"gce\" kubernetes.io/ingress.global-static-ip-name: <ADDRESS_NAME> networking.gke.io/managed-certificates: <rhdh_certificate_name> networking.gke.io/v1beta1.FrontendConfig: <ingress_security_config> spec: ingressClassName: gce rules: # TODO: Set your application domain name. - host: <rhdh_domain_name> http: paths: - path: / pathType: Prefix backend: service: # TODO: my-rhdh is the name of your `Backstage` custom resource. # Adjust if you changed it! name: backstage-my-rhdh port: name: http-backend",
"helm repo add openshift-helm-charts https://charts.openshift.io/",
"-n <your-namespace> create secret docker-registry rhdh-pull-secret \\ 1 --docker-server=registry.redhat.io --docker-username=<user_name> \\ 2 --docker-password=<password> \\ 3 --docker-email=<email> 4",
"apiVersion: networking.gke.io/v1 kind: ManagedCertificate metadata: name: <rhdh_certificate_name> spec: domains: - <rhdh_domain_name>",
"apiVersion: networking.gke.io/v1beta1 kind: FrontendConfig metadata: name: <ingress_security_config> spec: sslPolicy: gke-ingress-ssl-policy-https redirectToHttps: enabled: true",
"global: host: <rhdh_domain_name> route: enabled: false upstream: service: type: NodePort ingress: enabled: true annotations: kubernetes.io/ingress.class: gce kubernetes.io/ingress.global-static-ip-name: <ADDRESS_NAME> networking.gke.io/managed-certificates: <rhdh_certificate_name> networking.gke.io/v1beta1.FrontendConfig: <ingress_security_config> className: gce backstage: image: pullSecrets: - rhdh-pull-secret podSecurityContext: fsGroup: 2000 postgresql: image: pullSecrets: - rhdh-pull-secret primary: podSecurityContext: enabled: true fsGroup: 3000 volumePermissions: enabled: true",
"helm -n <your_namespace> install -f values.yaml <your_deploy_name> openshift-helm-charts/redhat-developer-hub --version 1.4.2",
"get deploy <you_deploy_name>-developer-hub -n <your_namespace>",
"get service -n <your_namespace> get ingress -n <your_namespace>",
"helm -n <your_namespace> upgrade -f values.yaml <your_deploy_name> openshift-helm-charts/redhat-developer-hub --version <UPGRADE_CHART_VERSION>",
"helm -n <your_namespace> delete <your_deploy_name>"
] |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/installing_red_hat_developer_hub_on_google_kubernetes_engine/index
|
Chapter 65. resource
|
Chapter 65. resource This chapter describes the commands under the resource command. 65.1. resource member create Shares a resource to another tenant. Usage: Table 65.1. Positional arguments Value Summary resource_id Resource id to be shared. resource_type Resource type. member_id Project id to whom the resource is shared to. Table 65.2. Command arguments Value Summary -h, --help Show this help message and exit Table 65.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 65.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 65.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.2. resource member delete Delete a resource sharing relationship. Usage: Table 65.7. Positional arguments Value Summary resource Resource id to be shared. resource_type Resource type. member_id Project id to whom the resource is shared to. Table 65.8. Command arguments Value Summary -h, --help Show this help message and exit 65.3. resource member list List all members. Usage: Table 65.9. Positional arguments Value Summary resource_id Resource id to be shared. resource_type Resource type. Table 65.10. Command arguments Value Summary -h, --help Show this help message and exit --marker [MARKER] The last execution uuid of the page, displays list of executions after "marker". --limit [LIMIT] Maximum number of entries to return in a single result. --sort_keys [SORT_KEYS] Comma-separated list of sort keys to sort results by. Default: created_at. Example: mistral execution-list --sort_keys=id,description --sort_dirs [SORT_DIRS] Comma-separated list of sort directions. default: asc. Example: mistral execution-list --sort_keys=id,description --sort_dirs=asc,desc --filter FILTERS Filters. can be repeated. Table 65.11. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 65.12. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 65.13. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 65.14. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.4. resource member show Show specific member information. Usage: Table 65.15. 
Positional arguments Value Summary resource Resource id to be shared. resource_type Resource type. Table 65.16. Command arguments Value Summary -h, --help Show this help message and exit -m MEMBER_ID, --member-id MEMBER_ID Project id to whom the resource is shared to. no need to provide this param if you are the resource member. Table 65.17. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 65.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 65.19. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.20. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 65.5. resource member update Update resource sharing status. Usage: Table 65.21. Positional arguments Value Summary resource_id Resource id to be shared. resource_type Resource type. Table 65.22. Command arguments Value Summary -h, --help Show this help message and exit -m MEMBER_ID, --member-id MEMBER_ID Project id to whom the resource is shared to. no need to provide this param if you are the resource member. -s {pending,accepted,rejected}, --status {pending,accepted,rejected} Status of the sharing. Table 65.23. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 65.24. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 65.25. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 65.26. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
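As a usage illustration of these subcommands, the following sequence shares a Mistral workflow with another project, which then accepts the share. The workflow resource type and the placeholder IDs are assumptions made for the example and do not come from the reference tables above.
# Run as the resource owner: share the workflow with another project
openstack resource member create <workflow_id> workflow <member_project_id>
# Run as the member project: accept the shared resource
openstack resource member update <workflow_id> workflow --status accepted
# Either side: list the members of the resource
openstack resource member list <workflow_id> workflow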
|
[
"openstack resource member create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] resource_id resource_type member_id",
"openstack resource member delete [-h] resource resource_type member_id",
"openstack resource member list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--marker [MARKER]] [--limit [LIMIT]] [--sort_keys [SORT_KEYS]] [--sort_dirs [SORT_DIRS]] [--filter FILTERS] resource_id resource_type",
"openstack resource member show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-m MEMBER_ID] resource resource_type",
"openstack resource member update [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [-m MEMBER_ID] [-s {pending,accepted,rejected}] resource_id resource_type"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/resource
|
17.2. DistributedCallable API
|
17.2. DistributedCallable API The DistributedCallable interface is a subtype of the existing Callable interface from the java.util.concurrent package, and can be executed in a remote JVM and receive input from Red Hat JBoss Data Grid. The DistributedCallable interface is used to facilitate tasks that require access to JBoss Data Grid cache data. When using the DistributedCallable API to execute a task, the task's main algorithm remains unchanged; however, the input source changes. Users who have already implemented the Callable interface must implement DistributedCallable instead if access to the cache or the set of passed-in keys is required. Example 17.1. Using the DistributedCallable API
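A typical implementation of the interface shown in Example 17.1 looks like the sketch below. The String key and value types, the class name, and the key-counting logic are illustrative assumptions rather than part of the guide; the task is made serializable so that it can be migrated to remote JVMs.
import java.io.Serializable;
import java.util.Set;
import org.infinispan.Cache;
import org.infinispan.distexec.DistributedCallable;

// Counts how many of the keys assigned to this node are present in the cache.
public class KeyPresenceTask implements DistributedCallable<String, String, Integer>, Serializable {

   private Cache<String, String> cache;
   private Set<String> inputKeys;

   @Override
   public void setEnvironment(Cache<String, String> cache, Set<String> inputKeys) {
      // Invoked by the execution environment after the task is migrated
      // to a specific node; provides the cache and this node's input keys.
      this.cache = cache;
      this.inputKeys = inputKeys;
   }

   @Override
   public Integer call() throws Exception {
      int present = 0;
      for (String key : inputKeys) {
         if (cache.containsKey(key)) {
            present++;
         }
      }
      return present;
   }
}
Such a task is usually submitted through a distributed executor service built on top of the cache, for example org.infinispan.distexec.DefaultExecutorService, which distributes the input keys across the nodes that own them.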
|
[
"public interface DistributedCallable<K, V, T> extends Callable<T> { /** * Invoked by execution environment after DistributedCallable * has been migrated for execution to a specific Infinispan node. * * @param cache * cache whose keys are used as input data for this * DistributedCallable task * @param inputKeys * keys used as input for this DistributedCallable task */ public void setEnvironment(Cache<K, V> cache, Set<K> inputKeys); }"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/distributedcallable_api
|
32.3.6. Displaying Open Files
|
32.3.6. Displaying Open Files To display information about open files, type the files command at the interactive prompt. You can use files pid to display files opened by the selected process. Example 32.7. Displaying information about open files of the current context Type help files for more information on the command usage.
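To inspect a process other than the current context, pass its PID to the files command; the PID below is taken from the example output above and is only illustrative, and the resulting table has the same columns as the listing shown.
crash> files 5591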
|
[
"crash> files PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" ROOT: / CWD: /root FD FILE DENTRY INODE TYPE PATH 0 f734f640 eedc2c6c eecd6048 CHR /pts/0 1 efade5c0 eee14090 f00431d4 REG /proc/sysrq-trigger 2 f734f640 eedc2c6c eecd6048 CHR /pts/0 10 f734f640 eedc2c6c eecd6048 CHR /pts/0 255 f734f640 eedc2c6c eecd6048 CHR /pts/0"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-kdump-crash-files
|
function::usymdata
|
function::usymdata Name function::usymdata - Return the symbol and module offset of an address. Synopsis Arguments addr The address to translate. Description Returns the (function) symbol name associated with the given address in the current task if known, the offset from the start and the size of the symbol, plus the module name (between brackets). If symbol is unknown, but module is known, the offset inside the module, plus the size of the module is added. If any element is not known it will be omitted and if the symbol name is unknown it will return the hex string for the given address.
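As a minimal usage sketch, the following one-liner prints the translated symbol and module information for the user-space address at which a target process starts. The probe point, target binary, and use of the uaddr() address source are illustrative assumptions, and whether a symbol name or a hex string is printed depends on the symbol data available; the -d option adds that data for the named binary.
stap -d /bin/ls -e 'probe process("/bin/ls").begin { printf("%s\n", usymdata(uaddr())) }' -c "ls /"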
|
[
"usymdata:string(addr:long)"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-usymdata
|
Developing Kafka client applications
|
Developing Kafka client applications Red Hat Streams for Apache Kafka 2.5 Develop client applications to interact with Kafka using AMQ Streams
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/developing_kafka_client_applications/index
|
Chapter 21. DeploymentService
|
Chapter 21. DeploymentService 21.1. CountDeployments GET /v1/deploymentscount CountDeployments returns the number of deployments. 21.1.1. Description 21.1.2. Parameters 21.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 21.1.3. Return Type V1CountDeploymentsResponse 21.1.4. Content Type application/json 21.1.5. Responses Table 21.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountDeploymentsResponse 0 An unexpected error response. GooglerpcStatus 21.1.6. Samples 21.1.7. Common object reference 21.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 21.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 21.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 21.1.7.3. 
V1CountDeploymentsResponse Field Name Required Nullable Type Description Format count Integer int32 21.2. ListDeployments GET /v1/deployments ListDeployments returns the list of deployments. 21.2.1. Description 21.2.2. Parameters 21.2.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 21.2.3. Return Type V1ListDeploymentsResponse 21.2.4. Content Type application/json 21.2.5. Responses Table 21.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListDeploymentsResponse 0 An unexpected error response. GooglerpcStatus 21.2.6. Samples 21.2.7. Common object reference 21.2.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 21.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 21.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 21.2.7.3. StorageListDeployment Field Name Required Nullable Type Description Format id String hash String uint64 name String cluster String clusterId String namespace String created Date date-time priority String int64 21.2.7.4. V1ListDeploymentsResponse Field Name Required Nullable Type Description Format deployments List of StorageListDeployment 21.3. GetDeployment GET /v1/deployments/{id} GetDeployment returns a deployment given its ID. 21.3.1. Description 21.3.2. Parameters 21.3.2.1. Path Parameters Name Description Required Default Pattern id X null 21.3.3. Return Type StorageDeployment 21.3.4. Content Type application/json 21.3.5. Responses Table 21.3. HTTP Response Codes Code Message Datatype 200 A successful response. StorageDeployment 0 An unexpected error response. GooglerpcStatus 21.3.6. Samples 21.3.7. Common object reference 21.3.7.1. ContainerConfigEnvironmentConfig Field Name Required Nullable Type Description Format key String value String envVarSource EnvironmentConfigEnvVarSource UNSET, RAW, SECRET_KEY, CONFIG_MAP_KEY, FIELD, RESOURCE_FIELD, UNKNOWN, 21.3.7.2. EnvironmentConfigEnvVarSource Enum Values UNSET RAW SECRET_KEY CONFIG_MAP_KEY FIELD RESOURCE_FIELD UNKNOWN 21.3.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 21.3.7.4. PortConfigExposureInfo Field Name Required Nullable Type Description Format level PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, serviceName String serviceId String serviceClusterIp String servicePort Integer int32 nodePort Integer int32 externalIps List of string externalHostnames List of string 21.3.7.5. PortConfigExposureLevel Enum Values UNSET EXTERNAL NODE INTERNAL HOST ROUTE 21.3.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 21.3.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. 
However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 21.3.7.7. SeccompProfileProfileType Enum Values UNCONFINED RUNTIME_DEFAULT LOCALHOST 21.3.7.8. SecurityContextSELinux Field Name Required Nullable Type Description Format user String role String type String level String 21.3.7.9. SecurityContextSeccompProfile Field Name Required Nullable Type Description Format type SeccompProfileProfileType UNCONFINED, RUNTIME_DEFAULT, LOCALHOST, localhostProfile String 21.3.7.10. StorageContainer Field Name Required Nullable Type Description Format id String config StorageContainerConfig image StorageContainerImage securityContext StorageSecurityContext volumes List of StorageVolume ports List of StoragePortConfig secrets List of StorageEmbeddedSecret resources StorageResources name String livenessProbe StorageLivenessProbe readinessProbe StorageReadinessProbe 21.3.7.11. StorageContainerConfig Field Name Required Nullable Type Description Format env List of ContainerConfigEnvironmentConfig command List of string args List of string directory String user String uid String int64 appArmorProfile String 21.3.7.12. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 21.3.7.13. StorageDeployment Field Name Required Nullable Type Description Format id String name String hash String uint64 type String namespace String namespaceId String orchestratorComponent Boolean replicas String int64 labels Map of string podLabels Map of string labelSelector StorageLabelSelector created Date date-time clusterId String clusterName String containers List of StorageContainer annotations Map of string priority String int64 inactive Boolean imagePullSecrets List of string serviceAccount String serviceAccountPermissionLevel StoragePermissionLevel UNSET, NONE, DEFAULT, ELEVATED_IN_NAMESPACE, ELEVATED_CLUSTER_WIDE, CLUSTER_ADMIN, automountServiceAccountToken Boolean hostNetwork Boolean hostPid Boolean hostIpc Boolean runtimeClass String tolerations List of StorageToleration ports List of StoragePortConfig stateTimestamp String int64 riskScore Float float platformComponent Boolean 21.3.7.14. StorageEmbeddedSecret Field Name Required Nullable Type Description Format name String path String 21.3.7.15. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 21.3.7.16. StorageLabelSelector available tag: 3 Field Name Required Nullable Type Description Format matchLabels Map of string This is actually a oneof, but we can't make it one due to backwards compatibility constraints. 
requirements List of StorageLabelSelectorRequirement 21.3.7.17. StorageLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 21.3.7.18. StorageLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 21.3.7.19. StorageLivenessProbe Field Name Required Nullable Type Description Format defined Boolean 21.3.7.20. StoragePermissionLevel Enum Values UNSET NONE DEFAULT ELEVATED_IN_NAMESPACE ELEVATED_CLUSTER_WIDE CLUSTER_ADMIN 21.3.7.21. StoragePortConfig Field Name Required Nullable Type Description Format name String containerPort Integer int32 protocol String exposure PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, exposedPort Integer int32 exposureInfos List of PortConfigExposureInfo 21.3.7.22. StorageReadinessProbe Field Name Required Nullable Type Description Format defined Boolean 21.3.7.23. StorageResources Field Name Required Nullable Type Description Format cpuCoresRequest Float float cpuCoresLimit Float float memoryMbRequest Float float memoryMbLimit Float float 21.3.7.24. StorageSecurityContext Field Name Required Nullable Type Description Format privileged Boolean selinux SecurityContextSELinux dropCapabilities List of string addCapabilities List of string readOnlyRootFilesystem Boolean seccompProfile SecurityContextSeccompProfile allowPrivilegeEscalation Boolean 21.3.7.25. StorageTaintEffect Enum Values UNKNOWN_TAINT_EFFECT NO_SCHEDULE_TAINT_EFFECT PREFER_NO_SCHEDULE_TAINT_EFFECT NO_EXECUTE_TAINT_EFFECT 21.3.7.26. StorageToleration Field Name Required Nullable Type Description Format key String operator StorageTolerationOperator TOLERATION_OPERATION_UNKNOWN, TOLERATION_OPERATOR_EXISTS, TOLERATION_OPERATOR_EQUAL, value String taintEffect StorageTaintEffect UNKNOWN_TAINT_EFFECT, NO_SCHEDULE_TAINT_EFFECT, PREFER_NO_SCHEDULE_TAINT_EFFECT, NO_EXECUTE_TAINT_EFFECT, 21.3.7.27. StorageTolerationOperator Enum Values TOLERATION_OPERATION_UNKNOWN TOLERATION_OPERATOR_EXISTS TOLERATION_OPERATOR_EQUAL 21.3.7.28. StorageVolume Field Name Required Nullable Type Description Format name String source String destination String readOnly Boolean type String mountPropagation VolumeMountPropagation NONE, HOST_TO_CONTAINER, BIDIRECTIONAL, 21.3.7.29. VolumeMountPropagation Enum Values NONE HOST_TO_CONTAINER BIDIRECTIONAL 21.4. GetLabels GET /v1/deployments/metadata/labels GetLabels returns the labels used by deployments. 21.4.1. Description 21.4.2. Parameters 21.4.3. Return Type V1DeploymentLabelsResponse 21.4.4. Content Type application/json 21.4.5. Responses Table 21.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1DeploymentLabelsResponse 0 An unexpected error response. GooglerpcStatus 21.4.6. Samples 21.4.7. Common object reference 21.4.7.1. DeploymentLabelsResponseLabelValues Field Name Required Nullable Type Description Format values List of string 21.4.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 21.4.7.3. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. 
The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 21.4.7.3.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 21.4.7.4. V1DeploymentLabelsResponse Field Name Required Nullable Type Description Format labels Map of DeploymentLabelsResponseLabelValues values List of string 21.5. ListDeploymentsWithProcessInfo GET /v1/deploymentswithprocessinfo ListDeploymentsWithProcessInfo returns the list of deployments with process information. 21.5.1. Description 21.5.2. Parameters 21.5.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 21.5.3. Return Type V1ListDeploymentsWithProcessInfoResponse 21.5.4. Content Type application/json 21.5.5. Responses Table 21.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListDeploymentsWithProcessInfoResponse 0 An unexpected error response. GooglerpcStatus 21.5.6. Samples 21.5.7. Common object reference 21.5.7.1. ContainerNameAndBaselineStatusBaselineStatus NOT_GENERATED: In current implementation, this is a temporary condition. Enum Values INVALID NOT_GENERATED UNLOCKED LOCKED 21.5.7.2. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 21.5.7.3. 
ListDeploymentsWithProcessInfoResponseDeploymentWithProcessInfo Field Name Required Nullable Type Description Format deployment StorageListDeployment baselineStatuses List of StorageContainerNameAndBaselineStatus 21.5.7.4. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 21.5.7.4.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 21.5.7.5. StorageContainerNameAndBaselineStatus ContainerNameAndBaselineStatus represents a cached result of process evaluation on a specific container name. Field Name Required Nullable Type Description Format containerName String baselineStatus ContainerNameAndBaselineStatusBaselineStatus INVALID, NOT_GENERATED, UNLOCKED, LOCKED, anomalousProcessesExecuted Boolean 21.5.7.6. StorageListDeployment Field Name Required Nullable Type Description Format id String hash String uint64 name String cluster String clusterId String namespace String created Date date-time priority String int64 21.5.7.7. 
V1ListDeploymentsWithProcessInfoResponse Field Name Required Nullable Type Description Format deployments List of ListDeploymentsWithProcessInfoResponseDeploymentWithProcessInfo 21.6. GetDeploymentWithRisk GET /v1/deploymentswithrisk/{id} GetDeploymentWithRisk returns a deployment and its risk given its ID. 21.6.1. Description 21.6.2. Parameters 21.6.2.1. Path Parameters Name Description Required Default Pattern id X null 21.6.3. Return Type V1GetDeploymentWithRiskResponse 21.6.4. Content Type application/json 21.6.5. Responses Table 21.6. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetDeploymentWithRiskResponse 0 An unexpected error response. GooglerpcStatus 21.6.6. Samples 21.6.7. Common object reference 21.6.7.1. ContainerConfigEnvironmentConfig Field Name Required Nullable Type Description Format key String value String envVarSource EnvironmentConfigEnvVarSource UNSET, RAW, SECRET_KEY, CONFIG_MAP_KEY, FIELD, RESOURCE_FIELD, UNKNOWN, 21.6.7.2. EnvironmentConfigEnvVarSource Enum Values UNSET RAW SECRET_KEY CONFIG_MAP_KEY FIELD RESOURCE_FIELD UNKNOWN 21.6.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 21.6.7.4. PortConfigExposureInfo Field Name Required Nullable Type Description Format level PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, serviceName String serviceId String serviceClusterIp String servicePort Integer int32 nodePort Integer int32 externalIps List of string externalHostnames List of string 21.6.7.5. PortConfigExposureLevel Enum Values UNSET EXTERNAL NODE INTERNAL HOST ROUTE 21.6.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 21.6.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. 
* An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 21.6.7.7. ResultFactor Field Name Required Nullable Type Description Format message String url String 21.6.7.8. SeccompProfileProfileType Enum Values UNCONFINED RUNTIME_DEFAULT LOCALHOST 21.6.7.9. SecurityContextSELinux Field Name Required Nullable Type Description Format user String role String type String level String 21.6.7.10. SecurityContextSeccompProfile Field Name Required Nullable Type Description Format type SeccompProfileProfileType UNCONFINED, RUNTIME_DEFAULT, LOCALHOST, localhostProfile String 21.6.7.11. StorageContainer Field Name Required Nullable Type Description Format id String config StorageContainerConfig image StorageContainerImage securityContext StorageSecurityContext volumes List of StorageVolume ports List of StoragePortConfig secrets List of StorageEmbeddedSecret resources StorageResources name String livenessProbe StorageLivenessProbe readinessProbe StorageReadinessProbe 21.6.7.12. StorageContainerConfig Field Name Required Nullable Type Description Format env List of ContainerConfigEnvironmentConfig command List of string args List of string directory String user String uid String int64 appArmorProfile String 21.6.7.13. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 21.6.7.14. StorageDeployment Field Name Required Nullable Type Description Format id String name String hash String uint64 type String namespace String namespaceId String orchestratorComponent Boolean replicas String int64 labels Map of string podLabels Map of string labelSelector StorageLabelSelector created Date date-time clusterId String clusterName String containers List of StorageContainer annotations Map of string priority String int64 inactive Boolean imagePullSecrets List of string serviceAccount String serviceAccountPermissionLevel StoragePermissionLevel UNSET, NONE, DEFAULT, ELEVATED_IN_NAMESPACE, ELEVATED_CLUSTER_WIDE, CLUSTER_ADMIN, automountServiceAccountToken Boolean hostNetwork Boolean hostPid Boolean hostIpc Boolean runtimeClass String tolerations List of StorageToleration ports List of StoragePortConfig stateTimestamp String int64 riskScore Float float platformComponent Boolean 21.6.7.15. StorageEmbeddedSecret Field Name Required Nullable Type Description Format name String path String 21.6.7.16. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 21.6.7.17. StorageLabelSelector available tag: 3 Field Name Required Nullable Type Description Format matchLabels Map of string This is actually a oneof, but we can't make it one due to backwards compatibility constraints. requirements List of StorageLabelSelectorRequirement 21.6.7.18. 
StorageLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 21.6.7.19. StorageLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 21.6.7.20. StorageLivenessProbe Field Name Required Nullable Type Description Format defined Boolean 21.6.7.21. StoragePermissionLevel Enum Values UNSET NONE DEFAULT ELEVATED_IN_NAMESPACE ELEVATED_CLUSTER_WIDE CLUSTER_ADMIN 21.6.7.22. StoragePortConfig Field Name Required Nullable Type Description Format name String containerPort Integer int32 protocol String exposure PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, exposedPort Integer int32 exposureInfos List of PortConfigExposureInfo 21.6.7.23. StorageReadinessProbe Field Name Required Nullable Type Description Format defined Boolean 21.6.7.24. StorageResources Field Name Required Nullable Type Description Format cpuCoresRequest Float float cpuCoresLimit Float float memoryMbRequest Float float memoryMbLimit Float float 21.6.7.25. StorageRisk Field Name Required Nullable Type Description Format id String subject StorageRiskSubject score Float float results List of StorageRiskResult 21.6.7.26. StorageRiskResult Field Name Required Nullable Type Description Format name String factors List of ResultFactor score Float float 21.6.7.27. StorageRiskSubject Field Name Required Nullable Type Description Format id String namespace String clusterId String type StorageRiskSubjectType UNKNOWN, DEPLOYMENT, NAMESPACE, CLUSTER, NODE, NODE_COMPONENT, IMAGE, IMAGE_COMPONENT, SERVICEACCOUNT, 21.6.7.28. StorageRiskSubjectType Enum Values UNKNOWN DEPLOYMENT NAMESPACE CLUSTER NODE NODE_COMPONENT IMAGE IMAGE_COMPONENT SERVICEACCOUNT 21.6.7.29. StorageSecurityContext Field Name Required Nullable Type Description Format privileged Boolean selinux SecurityContextSELinux dropCapabilities List of string addCapabilities List of string readOnlyRootFilesystem Boolean seccompProfile SecurityContextSeccompProfile allowPrivilegeEscalation Boolean 21.6.7.30. StorageTaintEffect Enum Values UNKNOWN_TAINT_EFFECT NO_SCHEDULE_TAINT_EFFECT PREFER_NO_SCHEDULE_TAINT_EFFECT NO_EXECUTE_TAINT_EFFECT 21.6.7.31. StorageToleration Field Name Required Nullable Type Description Format key String operator StorageTolerationOperator TOLERATION_OPERATION_UNKNOWN, TOLERATION_OPERATOR_EXISTS, TOLERATION_OPERATOR_EQUAL, value String taintEffect StorageTaintEffect UNKNOWN_TAINT_EFFECT, NO_SCHEDULE_TAINT_EFFECT, PREFER_NO_SCHEDULE_TAINT_EFFECT, NO_EXECUTE_TAINT_EFFECT, 21.6.7.32. StorageTolerationOperator Enum Values TOLERATION_OPERATION_UNKNOWN TOLERATION_OPERATOR_EXISTS TOLERATION_OPERATOR_EQUAL 21.6.7.33. StorageVolume Field Name Required Nullable Type Description Format name String source String destination String readOnly Boolean type String mountPropagation VolumeMountPropagation NONE, HOST_TO_CONTAINER, BIDIRECTIONAL, 21.6.7.34. V1GetDeploymentWithRiskResponse Field Name Required Nullable Type Description Format deployment StorageDeployment risk StorageRisk 21.6.7.35. VolumeMountPropagation Enum Values NONE HOST_TO_CONTAINER BIDIRECTIONAL 21.7. ExportDeployments GET /v1/export/deployments 21.7.1. Description 21.7.2. Parameters 21.7.2.1. Query Parameters Name Description Required Default Pattern timeout - null query - null 21.7.3. Return Type Stream_result_of_v1ExportDeploymentResponse 21.7.4. Content Type application/json 21.7.5. Responses Table 21.7. 
HTTP Response Codes Code Message Datatype 200 A successful response.(streaming responses) Stream_result_of_v1ExportDeploymentResponse 0 An unexpected error response. GooglerpcStatus 21.7.6. Samples 21.7.7. Common object reference 21.7.7.1. ContainerConfigEnvironmentConfig Field Name Required Nullable Type Description Format key String value String envVarSource EnvironmentConfigEnvVarSource UNSET, RAW, SECRET_KEY, CONFIG_MAP_KEY, FIELD, RESOURCE_FIELD, UNKNOWN, 21.7.7.2. EnvironmentConfigEnvVarSource Enum Values UNSET RAW SECRET_KEY CONFIG_MAP_KEY FIELD RESOURCE_FIELD UNKNOWN 21.7.7.3. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 21.7.7.4. PortConfigExposureInfo Field Name Required Nullable Type Description Format level PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, serviceName String serviceId String serviceClusterIp String servicePort Integer int32 nodePort Integer int32 externalIps List of string externalHostnames List of string 21.7.7.5. PortConfigExposureLevel Enum Values UNSET EXTERNAL NODE INTERNAL HOST ROUTE 21.7.7.6. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 21.7.7.6.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 21.7.7.7. SeccompProfileProfileType Enum Values UNCONFINED RUNTIME_DEFAULT LOCALHOST 21.7.7.8. SecurityContextSELinux Field Name Required Nullable Type Description Format user String role String type String level String 21.7.7.9. SecurityContextSeccompProfile Field Name Required Nullable Type Description Format type SeccompProfileProfileType UNCONFINED, RUNTIME_DEFAULT, LOCALHOST, localhostProfile String 21.7.7.10. StorageContainer Field Name Required Nullable Type Description Format id String config StorageContainerConfig image StorageContainerImage securityContext StorageSecurityContext volumes List of StorageVolume ports List of StoragePortConfig secrets List of StorageEmbeddedSecret resources StorageResources name String livenessProbe StorageLivenessProbe readinessProbe StorageReadinessProbe 21.7.7.11. StorageContainerConfig Field Name Required Nullable Type Description Format env List of ContainerConfigEnvironmentConfig command List of string args List of string directory String user String uid String int64 appArmorProfile String 21.7.7.12. StorageContainerImage Field Name Required Nullable Type Description Format id String name StorageImageName notPullable Boolean isClusterLocal Boolean 21.7.7.13. StorageDeployment Field Name Required Nullable Type Description Format id String name String hash String uint64 type String namespace String namespaceId String orchestratorComponent Boolean replicas String int64 labels Map of string podLabels Map of string labelSelector StorageLabelSelector created Date date-time clusterId String clusterName String containers List of StorageContainer annotations Map of string priority String int64 inactive Boolean imagePullSecrets List of string serviceAccount String serviceAccountPermissionLevel StoragePermissionLevel UNSET, NONE, DEFAULT, ELEVATED_IN_NAMESPACE, ELEVATED_CLUSTER_WIDE, CLUSTER_ADMIN, automountServiceAccountToken Boolean hostNetwork Boolean hostPid Boolean hostIpc Boolean runtimeClass String tolerations List of StorageToleration ports List of StoragePortConfig stateTimestamp String int64 riskScore Float float platformComponent Boolean 21.7.7.14. StorageEmbeddedSecret Field Name Required Nullable Type Description Format name String path String 21.7.7.15. StorageImageName Field Name Required Nullable Type Description Format registry String remote String tag String fullName String 21.7.7.16. StorageLabelSelector available tag: 3 Field Name Required Nullable Type Description Format matchLabels Map of string This is actually a oneof, but we can't make it one due to backwards compatibility constraints. requirements List of StorageLabelSelectorRequirement 21.7.7.17. StorageLabelSelectorOperator Enum Values UNKNOWN IN NOT_IN EXISTS NOT_EXISTS 21.7.7.18. StorageLabelSelectorRequirement Field Name Required Nullable Type Description Format key String op StorageLabelSelectorOperator UNKNOWN, IN, NOT_IN, EXISTS, NOT_EXISTS, values List of string 21.7.7.19. StorageLivenessProbe Field Name Required Nullable Type Description Format defined Boolean 21.7.7.20. StoragePermissionLevel Enum Values UNSET NONE DEFAULT ELEVATED_IN_NAMESPACE ELEVATED_CLUSTER_WIDE CLUSTER_ADMIN 21.7.7.21. 
StoragePortConfig Field Name Required Nullable Type Description Format name String containerPort Integer int32 protocol String exposure PortConfigExposureLevel UNSET, EXTERNAL, NODE, INTERNAL, HOST, ROUTE, exposedPort Integer int32 exposureInfos List of PortConfigExposureInfo 21.7.7.22. StorageReadinessProbe Field Name Required Nullable Type Description Format defined Boolean 21.7.7.23. StorageResources Field Name Required Nullable Type Description Format cpuCoresRequest Float float cpuCoresLimit Float float memoryMbRequest Float float memoryMbLimit Float float 21.7.7.24. StorageSecurityContext Field Name Required Nullable Type Description Format privileged Boolean selinux SecurityContextSELinux dropCapabilities List of string addCapabilities List of string readOnlyRootFilesystem Boolean seccompProfile SecurityContextSeccompProfile allowPrivilegeEscalation Boolean 21.7.7.25. StorageTaintEffect Enum Values UNKNOWN_TAINT_EFFECT NO_SCHEDULE_TAINT_EFFECT PREFER_NO_SCHEDULE_TAINT_EFFECT NO_EXECUTE_TAINT_EFFECT 21.7.7.26. StorageToleration Field Name Required Nullable Type Description Format key String operator StorageTolerationOperator TOLERATION_OPERATION_UNKNOWN, TOLERATION_OPERATOR_EXISTS, TOLERATION_OPERATOR_EQUAL, value String taintEffect StorageTaintEffect UNKNOWN_TAINT_EFFECT, NO_SCHEDULE_TAINT_EFFECT, PREFER_NO_SCHEDULE_TAINT_EFFECT, NO_EXECUTE_TAINT_EFFECT, 21.7.7.27. StorageTolerationOperator Enum Values TOLERATION_OPERATION_UNKNOWN TOLERATION_OPERATOR_EXISTS TOLERATION_OPERATOR_EQUAL 21.7.7.28. StorageVolume Field Name Required Nullable Type Description Format name String source String destination String readOnly Boolean type String mountPropagation VolumeMountPropagation NONE, HOST_TO_CONTAINER, BIDIRECTIONAL, 21.7.7.29. StreamResultOfV1ExportDeploymentResponse Field Name Required Nullable Type Description Format result V1ExportDeploymentResponse error GooglerpcStatus 21.7.7.30. V1ExportDeploymentResponse Field Name Required Nullable Type Description Format deployment StorageDeployment 21.7.7.31. VolumeMountPropagation Enum Values NONE HOST_TO_CONTAINER BIDIRECTIONAL
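For orientation, the following is a minimal sketch of how the endpoints in this chapter can be called with curl. The ROX_API_TOKEN and ROX_CENTRAL_ADDRESS environment variables, the use of a bearer token, and the -k flag (only for test setups with self-signed certificates) are assumptions about your environment, not part of the API definitions above.
# List the labels used by deployments (GetLabels).
$ curl -k -H "Authorization: Bearer $ROX_API_TOKEN" "https://$ROX_CENTRAL_ADDRESS/v1/deployments/metadata/labels"
# Return a single deployment together with its risk (GetDeploymentWithRisk); <deployment_id> is a placeholder.
$ curl -k -H "Authorization: Bearer $ROX_API_TOKEN" "https://$ROX_CENTRAL_ADDRESS/v1/deploymentswithrisk/<deployment_id>"
# Stream all deployments (ExportDeployments); each streamed result wraps a v1ExportDeploymentResponse.
$ curl -k -H "Authorization: Bearer $ROX_API_TOKEN" "https://$ROX_CENTRAL_ADDRESS/v1/export/deployments?timeout=60"
Each response body is JSON, so the output can be piped to a JSON processor such as jq for inspection.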
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 9",
"For any update to EnvVarSource, please also update 'ui/src/messages/common.js'",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next tag: 12",
"Next available tag: 36",
"Label selector components are joined with logical AND, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"Next available tag: 4",
"For any update to PermissionLevel, also update: - pkg/searchbasedpolicies/builders/k8s_rbac.go - ui/src/messages/common.js",
"Next Available Tag: 6",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next available tag: 9",
"For any update to EnvVarSource, please also update 'ui/src/messages/common.js'",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next tag: 12",
"Next available tag: 36",
"Label selector components are joined with logical AND, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"Next available tag: 4",
"For any update to PermissionLevel, also update: - pkg/searchbasedpolicies/builders/k8s_rbac.go - ui/src/messages/common.js",
"Next Available Tag: 6",
"Next tag: 9",
"For any update to EnvVarSource, please also update 'ui/src/messages/common.js'",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"Next tag: 12",
"Next available tag: 36",
"Label selector components are joined with logical AND, see https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/",
"Next available tag: 4",
"For any update to PermissionLevel, also update: - pkg/searchbasedpolicies/builders/k8s_rbac.go - ui/src/messages/common.js",
"Next Available Tag: 6",
"Stream result of v1ExportDeploymentResponse"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/deploymentservice
|
Chapter 23. Installation Phase 3: Installing Using Anaconda
|
Chapter 23. Installation Phase 3: Installing Using Anaconda This chapter describes an installation using the graphical user interface of anaconda . 23.1. The Non-interactive Line-Mode Text Installation Program Output If the cmdline option is specified as a boot option in your parameter file (refer to Section 26.6, "Parameters for Kickstart Installations" ) or in your kickstart file (refer to Chapter 32, Kickstart Installations ), anaconda starts with line-mode text output. In this mode, all necessary information must be provided in the kickstart file. The installer does not allow user interaction and stops if any required installation information is missing.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-guimode-s390
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/logging_monitoring_and_troubleshooting_guide/proc_providing-feedback-on-red-hat-documentation
|
Chapter 3. Customizing the Block Storage service (cinder)
|
Chapter 3. Customizing the Block Storage service (cinder) Once the Block Storage service (cinder) is deployed, it can be further customized for your environment. Block Storage service customizations include: Quotas: you can define project-specific limits to constrain the use of Block Storage resources. Volume types: you can define the associated settings of each volume type. By default, all volume types are public and available to all projects (tenants), but you can create a private volume type to control or limit access to it. Volume types allow you to provide different levels of volume performance for your users. You can include the following additional customizations of your volume types: The back end that a volume uses: you can create a volume type for each supported back end. However, for volume types that do not specify a back end, the Block Storage scheduler selects the most appropriate back end. Configurable properties of the Block Storage back-end drivers: you can specify multiple back-end properties for the volume type by using key-value pairs called Extra Specs. Quality of Service (QoS) specifications: you can apply performance limits to volumes that users create by associating QoS specifications with each volume type. Volume encryption: you can create an encrypted volume type so that users can create encrypted volumes. Default volume types: you can specify which volume type is used when a cloud user creates a volume and does not specify a volume type. Some Block Storage features require the creation of an internal Block Storage project or tenant, also known as the service project, which is called cinder-internal . Manage or unmanage volumes and their snapshots: you can import volumes and their snapshots from other Block Storage volume services or remove volumes and their snapshots from this Block Storage volume service. Note All of these customization procedures use the CLI, which is faster, requires less setup, and provides more options than the Dashboard. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. To use Block Storage service cinder CLI commands, source the cloudrc file with the command $ source ./cloudrc before using them. If the cloudrc file does not exist, you need to create it. For more information, see Creating the cloudrc file . 3.1. Viewing and modifying project quotas You can change or view the limits of the following Block Storage resource quotas for each project (tenant): volumes , the number of volumes allowed for each project. The default value is 10 . snapshots , the number of snapshots allowed for each project. The default value is 10 . gigabytes , the total amount of storage, in gigabytes, allowed for the volumes of each project; this limit might also include the storage allowed for snapshots. By default, the size of snapshots is also counted against this limit, and the default limit is 1000 GB. Note The default value of the no_snapshot_gb_quota Block Storage initial parameter counts the storage allocated for snapshots against the gigabytes quota . For more information, see Configuring initial Block Storage service defaults in Configuring persistent storage . per-volume-gigabytes , the maximum size of each volume in gigabytes. The default is -1 , which means that this quota is unspecified. You can obtain the usage of these Block Storage resource quotas for each project.
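As a rough illustration of the quota operations in the procedure that follows, the commands below show one possible way to view and adjust these quotas with the openstack client. The project name dev-project and all limit values are hypothetical placeholders.
# View the current quota limits, including the Block Storage quotas, for a project.
$ openstack quota show dev-project
# Raise the volume, snapshot, and total storage quotas for the project.
$ openstack quota set --volumes 20 --snapshots 20 --gigabytes 2000 dev-project
# Cap the size of any single volume at 100 GB.
$ openstack quota set --per-volume-gigabytes 100 dev-project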
Procedure Access the remote shell for the OpenStackClient pod from your workstation: Note If the cloudrc file does not exist, type the exit command and create this file. For more information, see Creating the cloudrc file . Optional: List the projects to obtain the ID or name of the required project: View the current limits of the Block Storage resource quotas for a specific project: Replace <project> with the ID or name of the required project. This displays the limits of all the resource quotas for the specified project. Each volume type for this project provides its own volumes, gigabytes, and snapshots quotas, which are unspecified by default and therefore have a Limit of -1 . These three volume type quotas are suffixed with the volume type name. For instance, the default volume type __DEFAULT__ has the following associated quotas: volumes___DEFAULT__ , gigabytes___DEFAULT__ , and snapshots___DEFAULT__ . Optional: Modify the total size, in gigabytes, of all the Block Storage volumes that users can create for a project. By default, the no_snapshot_gb_quota Block Storage initial parameter also counts the size of snapshots against this amount: Replace <totalgb> with the total size, in gigabytes, of the Block Storage volumes, and, if applicable, the snapshots, that users can create for this project. Optional: Modify the maximum number of Block Storage volumes that users can create for a project: Replace <maxvolumes> with the maximum number of volumes that users can create for this project. Optional: Modify the maximum size of the Block Storage volumes that users can create for a project: Replace <maxsize> with the maximum size, in gigabytes, of any Block Storage volume that users can create for this project. Optional: Modify the maximum number of Block Storage volume snapshots that users can create for a project: Replace <maxsnapshots> with the maximum number of snapshots that users can create for this project. Optional: View the usage of these Block Storage resource quotas and, if necessary, review any changes to their limits for a specific project: Replace <project_id> with the ID of the project. Locate the relevant rows in this table that specify the Block Storage resource quotas for the specified project. This table also includes the quota defined for each volume type. Look at the following columns in this table: The In_use column indicates how much of each resource has been used. The Limit column indicates whether the quota limits have been adjusted from their default settings. In this example, all the quota limits are adjusted. Exit the openstackclient pod: $ exit 3.2. Creating and configuring a volume type You can create volume types so that you can apply associated settings to each volume type. For instance, you can create volume types to provide different levels of performance for your cloud users. Create a volume type and add specific performance, resilience, and other Extra Specs as key-value pairs. Note By default, all volume types are public and accessible to all projects. If you need to create volume types with restricted access, create a private volume type. For more information, see Creating and using private volume types . For smaller deployments that have a limited number of back ends, you can create volume types that target each back end by using the volume_backend_name property. In this case, volumes that use these volume types are scheduled on the specified back ends. For more information, see the example provided in Editing a volume type .
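As a sketch of the smaller-deployment case just described, a volume type that targets one back end might be created as follows. The volume type name ceph-fast and the back-end name ceph are assumptions; substitute the back-end name that your own deployment reports.
# Create a volume type and pin it to one back end by name.
$ openstack volume type create --property volume_backend_name=ceph ceph-fast
# Confirm that the property was recorded on the new volume type.
$ openstack volume type show ceph-fast -c properties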
For large deployments with large numbers of back ends this may not be a viable strategy. In this case it is better that the Block Storage scheduler determines the best back end for a volume, based on the configured scheduler filters. In addition to adding performance related properties, after creating a volume type, you can associate it with a Quality of Service specification that further configures volume performance. For more information, see Block Storage service (cinder) Quality of Service specifications . Prerequisites You must be a project administrator to create and configure volume types. Know the names of the keys and their required values for the back-end driver capabilities that you want to add to your volume types. For more information, see Listing back-end driver capabilities . Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create and configure a volume type. Add the required properties to the volume type, by specifying a separate --property <key>=<value> argument, as follows: Replace <key> with the property key. Important Ensure that you spell the <key> correctly. Any incorrectly spelled keys are added but will not work. Replace <value> with the required value of the <key> . Replace <volume_type_name> with the name of your volume type. For example: Exit the openstackclient pod: USD exit steps Associating a QoS specification with a volume type 3.2.1. Creating and using private volume types By default, all volume types are public and available to all projects (tenants). To control or to limit the access to a volume type, you can create a private volume type. Note By default, private volume types are only accessible to their creators. But all administrative users can view private volume types. Private volume types restrict access to volumes with certain attributes. Typically, these are settings that should only be usable by specific projects. For instance, new back ends or ultra-high performance configurations that are being tested. Prerequisites You must be a project administrator to create and configure volume types. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a new private volume type. Replace <volume_type_name> with the name for the new private volume type. You can confirm that this is a private volume because the is_public field is set to False . For example: Optional: View private and public volume types. This provides a table of the volume types and the Is Public column indicates whether a volume type is private (False) or public (True). For example: This table also provides the name and ID of both public and private volume types, because you need the ID of the volume type to provide access to it. Optional: List the projects to obtain the ID of the required project: (Optional) Allow a project to access a private volume type: Replace <type_id> with the ID of the required private volume type. Replace <project_id> with the ID of the project that should access this private volume type. Note Access to a private volume type is granted at the project level. If you only know the name of a user of this project, then run the openstack user list command, which lists the name and tenant ID of all the configured users. (Optional) View which projects have access to a private volume type: The access_project_ids field of the resultant table provides the IDs of all the projects that can access this private volume type. 
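A minimal sketch of the private volume type workflow described above could look like the following. The volume type name high-perf-test is hypothetical, and <project_id> is a placeholder for a real project ID.
# Create a private volume type; is_public is reported as False.
$ openstack volume type create --private high-perf-test
# Grant one project access to the private volume type.
$ openstack volume type set --project <project_id> high-perf-test
# Review which projects can use the private volume type.
$ openstack volume type show high-perf-test -c access_project_ids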
(Optional) Remove project access from a private volume type: You can confirm this by running openstack volume type show <type_id> and ensuring this project is not specified in the access_project_ids field. Exit the openstackclient pod: USD exit 3.2.2. Listing back-end driver capabilities When creating volume types, the configurable properties or capabilities of the Block Storage back-end drivers are exposed and configured using key-value pairs called Extra Specs. Each back-end driver supports their own set of Extra Specs. For more information on the specific Extra Specs that a driver supports, see the back-end driver documentation. However, admins can always query the host of the Block Storage cinder-volume service directly to list the well-defined standard capabilities of its back-end driver. Prerequisites You must be a project administrator to query the Block Storage host directly. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Determine the host of the cinder-volume service: This command lists the properties of each Block Storage service ( cinder-backup , cinder-scheduler , and cinder-volume ). For example: The Host column specifies the host of each Block Storage service. However, the Host column of the cinder-volume service also provides the back end name by using this syntax: host@volume_back_end_name . Display the back-end driver capabilities of the Block Storage cinder-volume service: Replace <volsvchost> with the host of cinder-volume , provided in the table above. For example: The Key column provides the Extra Spec properties that you can set, while the Type column provides the required data type or valid values for these properties. Exit the openstackclient pod: USD exit 3.2.3. Editing a volume type You can edit a volume type by adding more properties to it or modifying the values of the existing properties. Note You can only edit a volume type if it is not in use. You can also delete volume types that you no longer need by using the openstack volume type delete and providing the name of the volume type. Prerequisites You must be a project administrator to edit volume types. Know the names of the keys and their required values for the back-end driver capabilities that you want to add to your volume types. For more information, see Listing back-end driver capabilities . Procedure Access the remote shell for the OpenStackClient pod from your workstation: Edit a volume type. You can use the openstack volume type list command to provide the list of configured volume types. Add or edit the properties of this volume type, by specifying a separate --property <key>=<value> argument, as follows: Replace <key> with the property key. Important Ensure that you spell the <key> correctly. Any incorrectly spelled keys are added but will not work. Replace <value> with the required value of the <key> . Replace <existing_volume_type_name> with the name of the required volume type. For example: If you wanted to configure a volume type to target a specific back end, you could add the following property to it: Note You can use the openstack volume backend pool list command to display a list of available back-end names, which uses this syntax: host@volume_backend_name#pool . Optional: This command does not provide any confirmation that the changes to the properties were successful. 
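To make the capability-listing and editing steps in this section concrete, a sketch along these lines could be used. The host and back-end name hostgroup@ceph, the property value, and the volume type name ceph-fast are placeholders; use the Host value that openstack volume service list reports for your cinder-volume service.
# Identify the host of the cinder-volume service; its Host column uses the host@volume_back_end_name syntax.
$ openstack volume service list
# List the well-defined capabilities (Extra Spec keys) of that back end.
$ openstack volume backend capability show hostgroup@ceph
# Add or change a property on an existing volume type.
$ openstack volume type set --property volume_backend_name=ceph ceph-fast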
You can run the following command to review any changes made to this volume type: This command provides a table of the configuration details of this volume type and the properties field displays all the configured properties. For example: Exit the openstackclient pod: USD exit 3.3. Block Storage service (cinder) Quality of Service specifications You can apply performance limits to volumes that your cloud users create, by creating and associating Quality of Service (QoS) specifications to each volume type. For example, volumes that use higher performance QoS specifications could provide your users with more IOPS or users could assign lighter workloads to volumes that use lower performance QoS specifications to conserve resources. To create a QoS specification and associate it with a volume type, complete the following tasks: Create and configure the QoS specification . When you create a QoS specification you must choose the required consumer. The consumer determines where you want to apply the QoS limits and determines which QoS property keys are available to define the QoS limits. For more information about the available consumers, see Consumers of QoS specifications . You can create volume performance limits by setting the required QoS property keys to your deployment specific values. For more information on the QoS property keys provided by the Block Storage service (cinder), see Block Storage QoS property keys . Associating a QoS specification with a volume type . You can create, configure, and associate a QoS specification to a volume type by using the CLI. 3.3.1. Consumers of QoS specifications When you create a QoS specification you must choose the required consumer. The consumer determines where you want to apply the QoS limits and determines which QoS property keys are available to define the QoS limits. The Block Storage service (cinder) supports the following consumers of QoS specifications: front-end : The Compute service (nova) applies the QoS limits when the volume is attached to an instance. The Compute service supports all the QoS property keys provided by the Block Storage service. back-end : The back-end driver of the associated volume type applies the QoS limits. Each back-end driver supports their own set of QoS property keys. For more information on which QoS property keys the driver supports, see the back-end driver documentation. You would use the back-end consumer in cases where the front-end consumer is not supported. For instance, when attaching volumes to bare metal nodes through the Bare Metal Provisioning service (ironic). both : Both consumers apply the QoS limits, where possible. This consumer type therefore supports the following QoS property keys: When a volume is attached to an instance, then you can use every QoS property key that both the Compute service and the back-end driver supports. When the volume is not attached to an instance, then you can only use the QoS property keys that the back-end driver supports. 3.3.2. Block Storage QoS property keys The Block Storage service provides you with QoS property keys so that you can limit the performance of the volumes that your cloud users create. These limits use the following two industry standard measurements of storage volume performance: Input/output operations per second (IOPS) Data transfer rate, measured in bytes per second The consumer of the QoS specification determines which QoS property keys are supported. For more information, see Consumers of QoS specifications . 
Block Storage cannot perform error checking of QoS property keys, because some QoS property keys are defined externally by back-end drivers. Therefore, Block Storage ignores any invalid or unsupported QoS property key. Important Ensure that you spell the QoS property keys correctly. The volume performance limits that contain incorrectly spelled property keys are ignored. For both the IOPS and data transfer rate measurements, you can configure the following performance limits: Fixed limits Typically, fixed limits should define the average usage of the volume performance measurement. Burst limits Typically, burst limits should define periods of intense activity of the volume performance measurement. A burst limit makes allowance for an increased rate of activity for a specific time, while keeping the fixed limits low for average usage. Note The burst limits all use a burst length of 1 second. Total limits Specify a global limit for both the read and write operations of the required performance limit, by using the total_* QoS property key. Note Instead of using a total limit you can apply separate limits to the read and write operations or choose to limit only the read or write operations. Read limits Specify a limit that only applies to the read operations of the required performance limit, by using the read_* QoS property key. Note This limit is ignored when you specify a total limit. Write limits Specify a limit that only applies to the write operations of the required performance limit, by using the write_* QoS property key. Note This limit is ignored when you specify a total limit. You can use the following Block Storage QoS property keys to create volume performance limits for your deployment: Note The default value for all QoS property keys is 0 , which means that the limit is unrestricted. Table 3.1. Block Storage QoS property keys Performance limit Measurement unit QoS property keys Fixed IOPS IOPS total_iops_sec read_iops_sec write_iops_sec Fixed IOPS calculated by the size of the volume. For more information about the usage restrictions of these limits, see QoS limits that scale according to volume size . IOPS per GB total_iops_sec_per_gb read_iops_sec_per_gb write_iops_sec_per_gb Burst IOPS IOPS total_iops_sec_max read_iops_sec_max write_iops_sec_max Fixed data transfer rate Bytes per second total_bytes_sec read_bytes_sec write_bytes_sec Burst data transfer rate Bytes per second total_bytes_sec_max read_bytes_sec_max write_bytes_sec_max Size of an IO request when calculating IOPS limits. For more information, see Set the IO request size for IOPS limits . Bytes size_iops_sec 3.3.2.1. Set the IO request size for IOPS limits If you implement IOPS volume performance limits, you should also specify the typical IO request size to prevent users from circumventing these limits. If you do not then users could submit several large IO requests instead of a lot of smaller ones. Use the size_iops_sec QoS property key to specify the maximum size, in bytes, of a typical IO request. The Block Storage service uses this size to calculate the proportional number of typical IO requests for each IO request that is submitted, for example: size_iops_sec=4096 An 8 KB request is counted as two requests. A 6 KB request is counted as one and a half requests. Any request less than 4 KB is counted as one request. The Block Storage service only uses this IO request size limit when calculating IOPS limits. 
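As an illustration of how these property keys can be combined, the following hypothetical QoS specification applies a fixed total IOPS limit, a burst limit, and an IO request size for the IOPS calculation. The specification name iops-limited and the values are placeholders; the full creation procedure is described later in this chapter.
# Sketch of a QoS specification combining fixed, burst, and IO-size keys.
$ openstack volume qos create --property total_iops_sec=500 --property total_iops_sec_max=2000 --property size_iops_sec=4096 iops-limited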
Note The default value of size_iops_sec is 0 , which ignores the size of IO requests when applying IOPS limits. 3.3.2.2. IOPS limits that scale according to volume size You can create IOPS volume performance limits that are determined by the capacity of the volumes that your users create. These Quality of Service (QoS) limits scale with the size of the provisioned volumes. For example, if the volume type has an IOPS limit of 500 per GB of volume size for read operations, then a provisioned 3 GB volume of this volume type would have a read IOPS limit of 1500. Important The size of the volume is determined when the volume is attached to an instance. Therefore if the size of the volume is changed while it is attached to an instance, these limits are only recalculated for the new volume size when this volume is detached and then reattached to an instance. You can use the following QoS property keys, specified in IOPS per GB, to create scalable volume performance limits: total_iops_sec_per_gb : Specify a global IOPS limit per GB of volume size for both the read and write operations. Note Instead of using a total limit you can apply separate limits to the read and write operations or choose to limit only the read or write operations. read_iops_sec_per_gb : Specify a IOPS limit per GB of volume size that only applies to the read operations. Note This limit is ignored when you specify a total limit. write_iops_sec_per_gb : Specify a IOPS limit per GB of volume size that only applies to the write operations. Note This limit is ignored when you specify a total limit. Important The consumer of the QoS specification containing these QoS limits can either be front-end or both , but not back-end . For more information, see Consumers of QoS specifications . 3.3.3. Creating and configuring a QoS specification A Quality of Service (QoS) specification is a list of volume performance QoS limits. You create each QoS limit by setting a QoS property key to your deployment specific value. To apply the QoS performance limits to a volume, you must associate the QoS specification with the required volume type. Prerequisites You must be a project administrator to create and configure QoS specifications. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create the QoS specification: Add the performance limits to the QoS specification, by specifying a separate --property <key>=<value> argument for each QoS limit, as follows: Replace <key> with the QoS property key of the required performance constraint. For more information, see Block Storage QoS property keys . Important Ensure that you spell the QoS property keys correctly. The volume performance limits that contain incorrectly spelled property keys are ignored. Replace <value> with your deployment-specific limit for this performance constraint, in the measurement unit required by the QoS property key. Optional: Replace <qos_spec_consumer> with the required consumer of this QoS specification. If not specified, the consumer defaults to both . For more information, see Consumers of QoS specifications . Replace <qos_spec_name> with the name of your QoS specification. Example: Exit the openstackclient pod: USD exit Edit an existing QoS specification Access the remote shell for the OpenStackClient pod from your workstation: Configure a created QoS specification to add other performance limits or to change the limits of the existing properties. You can add one or more --property <key>=<value> arguments. 
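The creation and editing steps above might, for example, be realized as follows. The specification name scaled-iops, the front-end consumer choice, and the per-GB values are assumptions for illustration only.
# Create a QoS specification that the Compute service applies (front-end consumer) with limits that scale with volume size.
$ openstack volume qos create --consumer front-end --property read_iops_sec_per_gb=500 --property write_iops_sec_per_gb=200 scaled-iops
# Add another limit to the existing specification.
$ openstack volume qos set --property total_bytes_sec=10000000 scaled-iops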
This command does not provide any confirmation that the changes to the properties were successful. You can run the following command to review any changes made to any QoS specification: This command provides a table of the configuration details of all the configured QoS specifications, the Properties column displays all the configured properties of each QoS specification. Exit the openstackclient pod: USD exit steps Associating a QoS specification with a volume type 3.3.4. Associating a QoS specification with a volume type You must associate a Quality of Service (QoS) specification with a volume type to apply the QoS limits to volumes. Important If a volume is already attached to an instance, then the QoS limits are only applied to this volume when the volume is detached and then reattached to this instance. Prerequisites The required volume type is created. For more information, see Creating and configuring a volume type . The required QoS specification is created. For more information, see Creating and configuring a QoS specification . Procedure Access the remote shell for the OpenStackClient pod from your workstation: Associate the required QoS specification with the required volume type: Replace <qos_spec_name> with the name or ID of the QoS specification. You can run the openstack volume qos list command to list the name and ID of all the QoS specifications. Replace <volume_type> with the name or ID of the volume type. You can run the openstack volume type list command to list the name and ID of all the volume types. Verify that the QoS specification has been associated: The Associations column of the output table provides the names of the volume types that are associated with each QoS specification. For example: Exit the openstackclient pod: USD exit 3.3.5. Disassociating a QoS specification from volume types You can disassociate a Quality of Service (QoS) specification from a volume type when you no longer want the QoS limits to be applied to volumes of that volume type. You can either disassociate a specific volume type, or all the volumes types when more than one volume type is associated to the same QoS specification. Important If a volume is already attached to an instance, then the QoS limits are only removed from this volume when the volume is detached and then reattached to this instance. Prerequisites You must be a project administrator to create, configure, associate, and disassociate QoS specifications. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Disassociate the volume types associated with the QoS specification: To disassociate a specific volume type associated with the QoS specification: Replace <qos_spec_name> with the name or ID of the QoS specification. You can run the openstack volume qos list command to list the name and ID of all the QoS specifications. Replace <volume_type> with the name or ID of the volume type associated with this QoS specification. You can run the cinder type-list command to list the name and ID of all the volume types. To disassociate all volume types associated with the QoS specification: Verify that the QoS specification has been disassociated: The Associations column of this QoS specification should either not contain the specified volume type or should be empty when all the volume types were disassociated. Exit the openstackclient pod: USD exit 3.4. 
Configuring Block Storage volume encryption To create encrypted volumes, you need an encrypted volume type that specifies the encryption provider, cipher, key size, and control location. Prerequisites You must be a project administrator to create encrypted volume types. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create an encrypted volume type: Replace <provider_name> with the luks encryption provider. Replace <cipher_name> with the aes-xts-plain64 encryption cipher. Replace <key_size> with the 256 encryption key size. Replace <control_location> with front-end , so the Compute service (nova) performs the encryption when the volume is attached to a Compute node. Replace <encrypted_volume_type_name> with the name to use for the encrypted volume type. For example: The resultant table for an encrypted volume type provides the additional encryption field, which provides the encryption_id . In this example, the encryption field has the following value: cipher='aes-xts-plain64', control_location='front-end', encryption_id='039c8b02-2e15-41fa-899d-6e2a555b7fc8', key_size='256', provider='luks' Exit the openstackclient pod: USD exit steps Create an encrypted volume from your encrypted volume type. 3.5. Configuring Block Storage volumes that can be attached to multiple instances at the same time To attach a Block Storage (cinder) volume to multiple instances at the same time, you must create this volume from or retype this volume to a multi-attach volume type. These multiple instances can then simultaneously read data from and write data to this volume. Warning You must use a multi-attach or cluster-aware file system to manage write operations from multiple instances. Failure to do so causes data corruption. For example, you can create a multi-attach volume on an NFS back end and then attach this volume to multiple instances that can use this volume as shared storage. Limitations of multi-attach volumes Your Block Storage (cinder) back-end driver must support multi-attach volumes. The Ceph RBD driver is supported. Contact Red Hat support to verify that the multiattach volume property is supported by your vendor plugin. For more information about the certification of your vendor plugin, see https://catalog.redhat.com/search?searchType=software&target_platforms=Red%20Hat%20OpenStack%20Platform&p=1&subcategories=Storage . When you attach a multi-attach volume, some hypervisors require special considerations, such as when you disable caching. Read-only multi-attach volumes are not supported. Live migration of multi-attach volumes is not available. Encryption of multi-attach volumes is not supported. Multi-attach volumes are not supported by the Bare Metal Provisioning service (ironic) virt driver. Multi-attach volumes are supported by only the libvirt virt driver. When a volume is not in use and its status is available , you can retype this volume to be multi-attach capable or retype a multi-attach capable volume to make it incapable of attaching to multiple instances. You cannot retype an attached volume from a multi-attach type to a non-multi-attach type, or from a non-multi-attach type to a multi-attach type. You cannot use multi-attach volumes that have multiple read-write attachments as the source or destination volume during an attached volume migration. You cannot attach multi-attach volumes to shelved offloaded instances. Simultaneous API calls, such as create or resize, to instances attached to the same multi-attach volume can fail.
For more information, see https://access.redhat.com/solutions/7077470 . Prerequisites You must be a project administrator to create multi-attach volume types. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a multi-attach volume type: Replace <multi-attach_volume_type_name> with the name of your multi-attach volume type. Exit the openstackclient pod: USD exit steps Create a volume from your multi-attach volume type and attach this volume to multiple instances. 3.6. Configuring project-specific default volume types If you create a volume and do not specify a volume type, the Block Storage service (cinder) uses the default volume type. You can use the default_volume_type parameter, when initially configuring the Block Storage service, to define the general default volume type that applies to all projects. By default, this volume type is called __DEFAULT__ . For more information, see Configuring initial Block Storage service defaults in Configuring persistent storage . If your deployment uses project-specific volume types, ensure that you define default volume types for each project. In this case, Block Storage uses the project-specific volume type instead of the general default volume type. The following deployment types need project-specific default volume types: A distributed deployment spanning many availability zones (AZs). Each AZ is in its own project and has its own volume types. A multi-department, corporate deployment. Each department is in its own project and has its own specialized volume type. Prerequisites At least one volume type in each project that will be the project-specific default volume type. For more information, see Creating and configuring a volume type . Block Storage REST API microversion 3.62 or later. Only project administrators can define, clear, or list default volume types for their projects. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Note If the cloudrc file does not exist, then type in the exit command and create this file. For more information, see Creating the cloudrc file . Configure the default volume type for a project: Replace <volume_type> with the name or ID of the required volume type. To find the volume type name and ID, use the volume type list command. Replace <project_id> with the ID of the appropriate project. To find the project ID, use the openstack project list command. (Optional) Remove a default volume type for a project: Replace <project_id> with the ID of the appropriate project. To find the project ID, use the openstack project list command. (Optional) List the default volume type for a project: Replace <project_id> with the ID of the appropriate project. To find the project ID, use the openstack project list command. Exit the openstackclient pod: USD exit 3.7. Creating and configuring an internal project for the Block Storage service (cinder) Some Block Storage features require an internal Block Storage project or tenant, that is also known as the service project, which is called cinder-internal . The Block Storage service uses this project to manage resources that this service creates to ensure that these resources are not counted towards the users' project quota limits. For example: the volumes that are created when images are cached for frequent volume cloning or temporary copies of volumes being migrated. 
Since this cinder-internal project is also subject to the default project quotas, you must adjust the volumes quota for this project, which is limited to only 10 volumes by default. You can either set the limit of the volumes quota to be unlimited (-1) or provide a number, such as 50, to ensure the effective use of your available volume storage. You might also need to change the gigabytes quota that limits the maximum size of the created volumes for this project, in gigabytes, which is set to 1000 by default. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a generic project and user, both named cinder-internal , as follows: Modify the maximum number of volumes that the Block Storage service can create for this cinder-internal project: Replace <maxnum> with the maximum number of volumes that the Block Storage service can create for this cinder-internal project. Optional: Modify the maximum total size of all the volumes that the Block Storage service can create for this cinder-internal project: Replace <maxgb> with the maximum total size, in gigabytes, of the volumes that the Block Storage service can create for this cinder-internal project. Exit the openstackclient pod: USD exit 3.8. Migrating a volume between back ends You can migrate volumes between back ends within, and across, availability zones (AZs). In highly customized deployments or in situations in which you must retire a storage system, an administrator can migrate volumes. In both use cases, multiple storage systems either share the same volume_backend_name property, or this property is undefined. Restrictions for migrated volumes The volume cannot be replicated. The destination back end must be different from the current back end of the volume. The existing volume type must be valid for the new back end, which means that the following must be true: The volume type must either not have the volume_backend_name defined, or both Block Storage back ends must be configured with the same volume_backend_name . Both back ends must support the same features configured in the volume type, such as support for thin provisioning, support for thick provisioning, or other feature configurations. Note Moving volumes from one back end to another might require extensive time and resources. For more information, see Restrictions and performance constraints when moving volumes . Prerequisites You must be a project administrator to migrate volumes. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Retrieve the name of the destination back end. This lists the destination back-end names, which use this syntax: host@volume_backend_name#pool . For example: Migrate a volume from one back end to another. Replace <volume> with the ID or name of the volume on the originating back end. Replace <new_back_end_host> with the host of the destination back end. Exit the openstackclient pod: USD exit 3.8.1. Restrictions and performance constraints when moving volumes Red Hat supports moving volumes between back ends within and across availability zones (AZs), with the following restrictions: Volumes must have a status of either available or in-use to be moved. Support for in-use volumes is driver-dependent. Volumes cannot have snapshots. Volumes cannot belong to a group. Moving available volumes You can move available volumes between all back ends, but performance depends on the back ends that you use. Many back ends support assisted migration.
For more information about back-end support for assisted migration, contact the vendor. With assisted migration, the back end optimizes the movement of the data from the source back end to the destination back end, but both back ends must be from the same vendor. Note Red Hat supports back-end assisted migrations only with multi-pool back ends or when you use the cinder migrate operation for single-pool back ends, such as RBD. When assisted migrations between back ends are not possible, the Block Storage service performs a generic volume migration. Generic volume migration requires volumes on both back ends to be connected before the Block Storage (cinder) service moves data from the source volume to the Controller node and from the Controller node to the destination volume. The Block Storage service seamlessly performs the process regardless of the type of storage from the source and destination back ends. Important Ensure that you have adequate bandwidth before you perform a generic volume migration. The duration of a generic volume migration is directly proportional to the size of the volume, which makes the operation slower than assisted migration. Moving in-use volumes In-use multi-attach volumes cannot be moved while they are attached to more than one nova instance. Non block devices are not supported, which limits storage protocols on the target back end to be iSCSI, Fibre Channel (FC), and RBD. There is no optimized or assisted option for moving in-use volumes. When you move in-use volumes, the Compute service (nova) must use the hypervisor to transfer data from a volume in the source back end to a volume in the destination back end. This requires coordination with the hypervisor that runs the instance where the volume is in use. The Block Storage service (cinder) and the Compute service work together to perform this operation. The Compute service manages most of the work, because the data is copied from one volume to another through the Compute node. Important Ensure that you have adequate bandwidth before you move in-use volumes. The duration of this operation is directly proportional to the size of the volume, which makes the operation slower than assisted migration. 3.9. Manage or unmanage volumes and their snapshots You can add volumes to or remove volumes from the Block Storage volume service ( cinder-volume ) by using the cinder manage and cinder unmanage commands. Important You cannot manage or unmanage encrypted volumes. Typically the Block Storage volume service manages the volumes that it creates, so that it can, for instance, list, attach, and delete these volumes. You can use the cinder unmanage command to remove a volume from the Block Storage volume service, so that it will no longer list, attach or delete this volume. Note You cannot unmanage a volume if it has snapshots. In this case, you must unmanage all of the snapshots before you unmanage a volume, by using the cinder snapshot-unmanage command. You can use the cinder manage command to add a volume to the Block Storage volume service, so that it can, for instance, list, attach, and delete this volume. Then you can add the snapshots of this volume, by using the cinder snapshot-manage command. You can use the cinder manageable-list command to determine whether there are volumes in the storage arrays of the Block Storage volume service that are not being managed. 
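A hedged sketch of this workflow, assuming an LVM back-end pool named cinder-volume-lvm-0@lvm#lvm (the naming format shown in the back-end pool listing earlier in this chapter) and hypothetical volume names; as described below, the exact identifier arguments for the manage commands are back-end specific:

    # Remove a volume from the Block Storage volume service
    # (unmanage its snapshots first, if it has any).
    cinder unmanage my_unneeded_vol

    # Bring an existing back-end volume under Block Storage management,
    # identifying it by its back-end source name.
    cinder manage --name newly_managed_vol cinder-volume-lvm-0@lvm#lvm my_existing_backend_vol

    # List the volumes on this back end that are not currently managed.
    cinder manageable-list cinder-volume-lvm-0@lvm#lvm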
The volumes in this list are typically volumes that users have unmanaged or which have been created manually on a storage array without using the Block Storage volume service. Similarly, the cinder snapshot-manageable-list command lists all the manageable snapshots. The syntax of the cinder manage and cinder snapshot-manage commands is back end specific, because the properties required to identify the volume are back end specific. Most back ends support either or both of the source-name and source-id properties; others require additional properties to be set. Some back ends can list which volumes are manageable and what parameters need to be passed. For those back ends that do not, refer to the vendor documentation. The syntax of the cinder unmanage and cinder snapshot-unmanage commands is not back end specific: you must specify the required volume name or volume ID. Usage scenarios You can use these Block Storage commands when upgrading your Red Hat OpenStack Services on OpenShift (RHOSO) deployment in parallel, by keeping your existing RHOSO version running while you deploy the new version of RHOSO. In this scenario, you must unmanage all of the snapshots and then unmanage the volume to remove a volume from your existing RHOSO, and then you must manage this volume and all of its snapshots to add this volume and its snapshots to the new version of RHOSO. In this way, you can move your volumes and their snapshots to your new RHOSO version while running your existing cloud. Another possible scenario is that you have a bare metal machine using a volume in one of your storage arrays, and you decide to move the software running on this machine into the cloud but you still want to use this volume. In this scenario, you use the cinder manage command to add this volume to the Block Storage volume service. 3.10. Creating the cloudrc file The Block Storage service (cinder) uses both openstack and cinder client commands, within the openstackclient pod. When cinder client commands are required, the cloudrc file must be created on the openstackclient pod to enable their usage. Once created, the cloudrc file persists for the lifetime of the openstackclient pod. If the cloudrc file does not exist, for instance if it cannot be sourced, then use the following command to create it:
|
[
"oc rsh -n openstack openstackclient source ./cloudrc",
"openstack project list",
"openstack quota show <project>",
"openstack quota show c2c1da89ed1648fc8b4f35a045f8d34c +-----------------------+-------+ | Resource | Limit | +-----------------------+-------+ | volumes | 10 | | snapshots | 10 | | gigabytes | 1000 | | volumes___DEFAULT__ | -1 | | gigabytes___DEFAULT__ | -1 | | snapshots___DEFAULT__ | -1 | | per-volume-gigabytes | -1 | +-----------------------+-------+",
"openstack quota set --gigabytes <totalgb> <project>",
"openstack quota set --volumes <maxvolumes> <project>",
"openstack quota set --per-volume-gigabytes <maxsize> <project>",
"openstack quota set --snapshots <maxsnapshots> <project>",
"cinder quota-usage <project_id>",
"cinder quota-usage c2c1da89ed1648fc8b4f35a045f8d34c +-----------------------+--------+----------+-------+-----------+ | Type | In_use | Reserved | Limit | Allocated | +-----------------------+--------+----------+-------+-----------+ | gigabytes | 750 | 0 | 1500 | | | gigabytes___DEFAULT__ | 0 | 0 | -1 | | | groups | 0 | 0 | 10 | | | per_volume_gigabytes | 0 | 0 | 300 | | | snapshots | 1 | 0 | 7 | | | snapshots___DEFAULT__ | 0 | 0 | -1 | | | volumes | 5 | 0 | 15 | | | volumes___DEFAULT__ | 0 | 0 | -1 | | +-----------------------+--------+----------+-------+-----------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume type create --property <key>=<value> <volume_type_name>",
"openstack volume type create --property thin_provisioning=true --property compression=false MyVolumeType +-------------+-----------------------------------------------+ | Field | Value | +-------------+-----------------------------------------------+ | description | None | | id | c244205c-fb22-4076-9780-edebe55889bc | | is_public | True | | name | MyVolumeType | | properties | compression='false', thin_provisioning='true' | +-------------+-----------------------------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume type create --private <volume_type_name>",
"openstack volume type create --private MyVolType2 +-------------+--------------------------------------+ | Field | Value | +-------------+--------------------------------------+ | description | None | | id | 20659377-935d-4253-8b82-ae12c4710288 | | is_public | False | | name | MyVolType2 | +-------------+--------------------------------------+",
"openstack volume type list",
"+--------------------------------------+-------------+-----------+ | ID | Name | Is Public | +--------------------------------------+-------------+-----------+ | 271f9b90-8186-4143-aaed-09e91820d852 | MyVolType3 | True | | 20659377-935d-4253-8b82-ae12c4710288 | MyVolType2 | False | | 28b5ca7f-0eb0-43ce-bf0a-898fab92d43b | MyVolType1 | True | | 0875116b-27d6-493c-aca6-f62de4d58614 | __DEFAULT__ | True | +--------------------------------------+-------------+-----------+",
"openstack project list",
"openstack volume type set <type_id> --project <project_id>",
"openstack volume type show <type_id>",
"openstack volume type unset <type_id> --project <project_id>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume service list",
"+------------------+---------------------------+------+--------- | Binary | Host | Zone | Status +------------------+---------------------------+------+--------- | cinder-scheduler | cinder-scheduler-0 | nova | enabled | cinder-backup | cinder-backup-0 | nova | enabled | cinder-volume | cinder-volume-lvm-iscsi-0@lvm | nova | enabled +------------------+---------------------------+------+---------",
"openstack volume backend capability show <volsvchost>",
"openstack volume backend capability show cinder-volume-lvm-iscsi-0 +-------------------+---------------------+---------+-------------------------+ | Title | Key | Type | Description | +-------------------+---------------------+---------+-------------------------+ | Thin Provisioning | thin_provisioning | boolean | Sets thin provisioning. | | Compression | compression | boolean | Enables compression. | | QoS | qos | boolean | Enables QoS. | | Replication | replication_enabled | boolean | Enables replication. | +-------------------+---------------------+---------+-------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume type set --property <key>=<value> <existing_volume_type_name>",
"openstack volume type set --property volume_backend_name=lvm MyVolumeType",
"openstack volume type show <volume_type_name>",
"openstack volume type show MyVolumeType +--------------------+--------------------------------------------------------------------------+ | Field | Value | +--------------------+--------------------------------------------------------------------------+ | access_project_ids | None | | description | None | | id | c244205c-fb22-4076-9780-edebe55889bc | | is_public | True | | name | MyVolumeType | | properties | compression='false', thin_provisioning='true', volume_backend_name='lvm' | | qos_specs_id | None | +--------------------+--------------------------------------------------------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume qos create [--consumer <qos_spec_consumer>] --property <key>=<value> <qos_spec_name>",
"openstack volume qos create --property read_iops_sec=5000 --property write_iops_sec=7000 --consumer front-end myqoslimits +------------+---------------------------------------------+ | Field | Value | +------------+---------------------------------------------+ | consumer | front-end | | id | 9fc9a481-28e9-49b8-84eb-f0a476cc89a5 | | name | myqoslimits | | properties | read_iops_sec='5000', write_iops_sec='7000' | +------------+---------------------------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume qos set --property <key>=<value> <qos_spec_name>",
"openstack volume qos list",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume qos associate <qos_spec_name> <volume_type>",
"openstack volume qos list",
"+--------------------------------------+--------------+-----------+--------------+-------------------------------------------------------------------+ | ID | Name | Consumer | Associations | Properties | +--------------------------------------+--------------+-----------+--------------+-------------------------------------------------------------------+ | 9fc9a481-28e9-49b8-84eb-f0a476cc89a5 | myqoslimits | both | MyVolType | read_iops_sec='6500', size_iops_sec='4096', write_iops_sec='7500' | +--------------------------------------+--------------+-----------+--------------+-------------------------------------------------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume qos disassociate <qos_spec_name> --volume-type <volume_type>",
"openstack volume qos disassociate <qos_spec_name> --all",
"openstack volume qos list",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume type create --encryption-provider <provider_name> --encryption-cipher <cipher_name> --encryption-key-size <key_size> --encryption-control-location <control_location> <encrypted_volume_type_name>",
"openstack volume type create --encryption-provider luks --encryption-cipher aes-xts-plain64 --encryption-key-size 256 --encryption-control-location front-end MyEncryptedVolType",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume type create --property multiattach=\"<is> True\" <multi-attach_volume_type_name>",
"exit",
"oc rsh -n openstack openstackclient source ./cloudrc",
"cinder --os-volume-api-version 3.62 default-type-set <volume_type> <project_id>",
"cinder --os-volume-api-version 3.62 default-type-unset <project_id>",
"cinder --os-volume-api-version 3.62 default-type-list --project <project_id>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack project create --enable --description \"Block Storage Internal Project\" cinder-internal +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | Block Storage Internal Project | | domain_id | default | | enabled | True | | id | 670615550a5d4126b22953f76f380397 | | is_domain | False | | name | cinder-internal | | options | {} | | parent_id | default | | tags | [] | +-------------+----------------------------------+ openstack user create --project cinder-internal cinder-internal +---------------------+----------------------------------+ | Field | Value | +---------------------+----------------------------------+ | default_project_id | 670615550a5d4126b22953f76f380397 | | domain_id | default | | enabled | True | | id | a6b3576f345346a590f2c8292f5cbd60 | | name | cinder-internal | | options | {} | | password_expires_at | None | +---------------------+----------------------------------+",
"openstack quota set --volumes <maxnum> cinder-internal",
"openstack quota set --gigabytes <maxgb> cinder-internal",
"exit",
"oc rsh -n openstack openstackclient",
"openstack volume backend pool list",
"+--------------------------------+ | Name | +--------------------------------+ | cinder-volume-lvm-0@lvm#lvm | | cinder-volume-lvm2-0@lvm2#lvm2 | +--------------------------------+",
"openstack volume migrate --host <new_back_end_host> <volume>",
"exit",
"oc rsh openstackclient bash -c ' cat .config/openstack/* | while read key val; do case USDkey in \"auth_url:\") var=OS_AUTH_URL ;; \"username:\") var=OS_USERNAME ;; \"password:\") var=OS_PASSWORD ;; \"project_name:\") var=OS_PROJECT_NAME ;; \"project_domain_name:\") var=OS_PROJECT_DOMAIN_NAME ;; \"user_domain_name:\") var=OS_USER_DOMAIN_NAME ;; *) var=\"\" ;; esac [ -z \"USDvar\" ] || echo \"export USD{var}=USD{val}\" done > cloudrc '"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/customizing_persistent_storage/assembly_cinder-customizing-the-block-storage-service_osp
|
Chapter 23. Configuring a service and KIE Server to send Kafka messages when a transaction is committed
|
Chapter 23. Configuring a service and KIE Server to send Kafka messages when a transaction is committed You can configure KIE Server with an emitter that sends Kafka messages automatically. In this case, KIE Server sends a message every time a task, process, case, or variable is created, updated, or deleted. The Kafka message contains information about the modified object. KIE Server sends the message when it commits the transaction with the change. You can use this functionality with any business process or case. You do not need to change anything in the process design. This configuration is also available if you run your process service using SpringBoot. By default, KIE Server publishes the messages in the following topics: jbpm-processes-events for messages about completed processes jbpm-tasks-events for messages about completed tasks jbpm-cases-events for messages about completed cases You can configure the topic names. The published messages comply with the CloudEvents specification version 1.0. Each message contains the following fields: id : The unique identifier of the event type : The type of the event (process, task, or case) source : The event source as a URI time : The timestamp of the event, by default in the RFC3339 format data : Information about the process, case, or task, presented in a JSON format Prerequisites A KIE Server instance is installed. Procedure To send Kafka messages automatically, complete one of the following tasks: If you deployed KIE Server on Red Hat JBoss EAP or another application server, complete the following steps: Download the rhpam-7.13.5-maven-repository.zip product deliverable file from the Software Downloads page of the Red Hat Customer Portal. Extract the contents of the file. Copy the maven-repository/org/jbpm/jbpm-event-emitters-kafka/7.67.0.Final-redhat-00024/jbpm-event-emitters-kafka-7.67.0.Final-redhat-00024.jar file into the deployments/kie-server.war/WEB-INF/lib subdirectory of the application server. If you deployed the application using SpringBoot, add the following lines to the <dependencies> list in the pom.xml file of your service: <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-event-emitters-kafka</artifactId> <version>USD{version.org.kie}</version> </dependency> Configure any of the following KIE Server system properties as necessary: Table 23.1. KIE Server system properties related to the Kafka emitter Property Description Default value org.kie.jbpm.event.emitters.kafka.bootstrap.servers : The host and port of the Kafka broker. You can use a comma-separated list of multiple host:port pairs. localhost:9092 org.kie.jbpm.event.emitters.kafka.date_format : The timestamp format for the time field of the messages. yyyy-MM-dd'T'HH:mm:ss.SSSZ org.kie.jbpm.event.emitters.kafka.topic.processes The topic name for process event messages. jbpm-processes-events org.kie.jbpm.event.emitters.kafka.topic.cases The topic name for case event messages. jbpm-cases-events org.kie.jbpm.event.emitters.kafka.topic.tasks The topic name for task event messages. jbpm-processes-tasks org.kie.jbpm.event.emitters.kafka.client.id An identifier string to pass to the server when making requests. The server uses this string for logging. org.kie.jbpm.event.emitters.kafka. property_name Set any Red Hat AMQ Streams consumer or producer property by using this prefix. For example, to set a value for the buffer.memory producer property, set the org.kie.jbpm.event.emitters.kafka.buffer.memory KIE Server system property. 
This setting applies when KIE Server is configured with an emitter to send Kafka messages automatically when completing transactions. For a list of Red Hat AMQ Streams consumer and producer properties, see the Consumer configuration parameters and Producer configuration parameters appendixes in Using AMQ Streams on RHEL . org.kie.jbpm.event.emitters.eagerInit By default, KIE Server initializes the Kafka emitter only when sending a message. If you want to initialize the Kafka emitter when KIE Server starts, set this property to true . When KIE Server initializes the Kafka emitter, it logs any errors in Kafka emitter configuration and any Kafka communication errors. If you set the org.kie.jbpm.event.emitters.eagerInit property to true , any such errors appear in the log output when KIE Server starts. false
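As an illustration, a hedged sketch of how these system properties might be passed to KIE Server on Red Hat JBoss EAP by appending JVM arguments to EAP_HOME/bin/standalone.conf; the broker address, topic name, and the use of standalone.conf are assumptions for this example, and the same -D properties can be passed on the Java command line of a Spring Boot service:

    # Point the Kafka emitter at a broker, override one topic name,
    # and initialize the emitter at KIE Server startup.
    JAVA_OPTS="$JAVA_OPTS \
      -Dorg.kie.jbpm.event.emitters.kafka.bootstrap.servers=kafka-broker.example.com:9092 \
      -Dorg.kie.jbpm.event.emitters.kafka.topic.processes=my-processes-events \
      -Dorg.kie.jbpm.event.emitters.eagerInit=true"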
|
[
"<dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-event-emitters-kafka</artifactId> <version>USD{version.org.kie}</version> </dependency>"
] |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/integrating_red_hat_process_automation_manager_with_other_products_and_components/kieserver-kafka-emit-proc_integrating-amq-streams
|
25.5. Storing a Service Secret in a Vault
|
25.5. Storing a Service Secret in a Vault This section shows how an administrator can use vaults to securely store a service secret in a centralized location. The service secret is encrypted with the service public key. The service then retrieves the secret using its private key on any machine in the domain. Only the service and the administrator are allowed to access the secret. This section includes these procedures: Section 25.5.1, "Creating a User Vault to Store a Service Password" Section 25.5.2, "Provisioning a Service Password from a User Vault to Service Instances" Section 25.5.3, "Retrieving a Service Password for a Service Instance" Section 25.5.4, "Changing Service Vault Password" In the procedures: admin is the administrator who manages the service password http_password is the name of the private user vault created by the administrator password.txt is the file containing the service password password_vault is the vault created for the service HTTP/server.example.com is the service whose password is being archived service-public.pem is the service public key used to encrypt the password stored in password_vault 25.5.1. Creating a User Vault to Store a Service Password Create an administrator-owned user vault, and use it to store the service password. The vault type is standard, which ensures the administrator is not required to authenticate when accessing the contents of the vault. Log in as the administrator: Create a standard user vault: Archive the service password into the vault: Warning After archiving the password into the vault, delete password.txt from your system. 25.5.2. Provisioning a Service Password from a User Vault to Service Instances Using an asymmetric vault created for the service, provision the service password to a service instance. Log in as the administrator: Obtain the public key of the service instance. For example, using the openssl utility: Generate the service-private.pem private key. Generate the service-public.pem public key based on the private key. Create an asymmetric vault as the service instance vault, and provide the public key: The password archived into the vault will be protected with the key. Retrieve the service password from the administrator's private vault, and then archive it into the new service vault: This encrypts the password with the service instance public key. Warning After archiving the password into the vault, delete password.txt from your system. Repeat these steps for every service instance that requires the password. Create a new asymmetric vault for each service instance. 25.5.3. Retrieving a Service Password for a Service Instance A service instance can retrieve the service vault password using the locally-stored service private key. Log in as the administrator: Obtain a Kerberos ticket for the service: Retrieve the service vault password: 25.5.4. Changing Service Vault Password If a service instance is compromised, isolate it by changing the service vault password and then re-provisioning the new password to non-compromised service instances only. Archive the new password in the administrator's user vault: This overwrites the current password stored in the vault. Re-provision the new password to each service instance excluding the compromised instance. Retrieve the new password from the administrator's vault: Archive the new password into the service instance vault: Warning After archiving the password into the vault, delete password.txt from your system.
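At any point, you can check the state of either vault. A hedged sketch using the ipa vault-show command with the vault names from this section; the exact output fields vary by vault type:

    ipa vault-show http_password
    ipa vault-show password_vault --service HTTP/server.example.com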
|
[
"kinit admin",
"ipa vault-add http_password --type standard --------------------------- Added vault \"http_password\" --------------------------- Vault name: http_password Type: standard Owner users: admin Vault user: admin",
"ipa vault-archive http_password --in password.txt ---------------------------------------- Archived data into vault \"http_password\" ----------------------------------------",
"kinit admin",
"openssl genrsa -out service-private.pem 2048 Generating RSA private key, 2048 bit long modulus .+++ ...........................................+++ e is 65537 (0x10001)",
"openssl rsa -in service-private.pem -out service-public.pem -pubout writing RSA key",
"ipa vault-add password_vault --service HTTP/server.example.com --type asymmetric --public-key-file service-public.pem ---------------------------- Added vault \"password_vault\" ---------------------------- Vault name: password_vault Type: asymmetric Public key: LS0tLS1C...S0tLS0tCg== Owner users: admin Vault service: HTTP/[email protected]",
"ipa vault-retrieve http_password --out password.txt ----------------------------------------- Retrieved data from vault \"http_password\" -----------------------------------------",
"ipa vault-archive password_vault --service HTTP/server.example.com --in password.txt ----------------------------------- Archived data into vault \"password_vault\" -----------------------------------",
"kinit admin",
"kinit HTTP/server.example.com -k -t /etc/httpd/conf/ipa.keytab",
"ipa vault-retrieve password_vault --service HTTP/server.example.com --private-key-file service-private.pem --out password.txt ------------------------------------ Retrieved data from vault \"password_vault\" ------------------------------------",
"ipa vault-archive http_password --in new_password.txt ---------------------------------------- Archived data into vault \"http_password\" ----------------------------------------",
"ipa vault-retrieve http_password --out password.txt ----------------------------------------- Retrieved data from vault \"http_password\" -----------------------------------------",
"ipa vault-archive password_vault --service HTTP/server.example.com --in password.txt ----------------------------------- Archived data into vault \"password_vault\" -----------------------------------"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/vault-service
|
Chapter 4. Managing users on the Ceph dashboard
|
Chapter 4. Managing users on the Ceph dashboard As a storage administrator, you can create, edit, and delete users with specific roles on the Red Hat Ceph Storage dashboard. Role-based access control is given to each user based on their roles and the requirements. You can also create, edit, import, export, and delete Ceph client authentication keys on the dashboard. Once you create the authentication keys, you can rotate keys using the command-line interface (CLI). Key rotation meets the current industry and security compliance requirements. This section covers the following administrative tasks: Creating users on the Ceph dashboard . Editing users on the Ceph dashboard . Deleting users on the Ceph dashboard . User capabilities Access capabilities Creating user capabilities Editing user capabilities Importing user capabilities Exporting user capabilities Deleting user capabilities 4.1. Creating users on the Ceph dashboard You can create users on the Red Hat Ceph Storage dashboard with adequate roles and permissions based on their roles. For example, if you want the user to manage Ceph object gateway operations, then you can give the rgw-manager role to the user. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. Note The Red Hat Ceph Storage Dashboard does not support any email verification when changing a user's password. This behavior is intentional, because the Dashboard supports Single Sign-On (SSO) and this feature can be delegated to the SSO provider. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On the Users tab, click Create . In the Create User window, set the Username and other parameters including the roles, and then click Create User . You get a notification that the user was created successfully. Additional Resources See the Creating roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. See the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 4.2. Editing users on the Ceph dashboard You can edit the users on the Red Hat Ceph Storage dashboard. You can modify the user's password and roles based on the requirements. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. User created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . To edit the user, click the row. On the Users tab, select Edit from the Edit drop-down menu. In the Edit User window, edit parameters like password and roles, and then click Edit User . Note If you want to disable any user's access to the Ceph dashboard, you can uncheck the Enabled option in the Edit User window. You get a notification that the user was updated successfully. Additional Resources See the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 4.3. Deleting users on the Ceph dashboard You can delete users on the Ceph dashboard. Some users might be removed from the system, and the access of such users can then be deleted from the Ceph dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. User created on the dashboard. Procedure Log in to the Dashboard. Click the Dashboard Settings icon and then click User management . On the Users tab, click the user you want to delete.
select Delete from the Edit drop-down menu. In the Delete User notification, select Yes, I am sure and click Delete User . Additional Resources See the Creating users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide for more details. 4.4. User capabilities Ceph stores data as RADOS objects within pools irrespective of the Ceph client used. Ceph users must have access to a given pool to read and write data, and must have execute permissions to use Ceph administrative commands. Creating users allows you to control their access to your Red Hat Ceph Storage cluster, its pools, and the data within the pools. Ceph has a concept of user type, which is always client . You need to define the user with the TYPE . ID , where ID is the user ID, for example, client.admin . This user typing exists because the Cephx protocol is used not only by clients but also by non-clients, such as Ceph Monitors, OSDs, and Metadata Servers. The user type helps to distinguish between client users and other users. This distinction streamlines access control, user monitoring, and traceability. 4.4.1. Capabilities Ceph uses capabilities (caps) to describe the permissions granted to an authenticated user to exercise the functionality of the monitors, OSDs, and metadata servers. The capabilities restrict access to data within a pool, a namespace within a pool, or a set of pools based on their application tags. A Ceph administrative user specifies the capabilities of a user when creating or updating the user. You can set capabilities for monitors, managers, OSDs, and metadata servers. The Ceph Monitor capabilities include r , w , and x access settings. These can be applied in aggregate from pre-defined profiles with profile NAME . The OSD capabilities include r , w , x , class-read , and class-write access settings. These can be applied in aggregate from pre-defined profiles with profile NAME . The Ceph Manager capabilities include r , w , and x access settings. These can be applied in aggregate from pre-defined profiles with profile NAME . For administrators, the metadata server (MDS) capabilities include allow * . Note The Ceph Object Gateway daemon ( radosgw ) is a client of the Red Hat Ceph Storage cluster and is not represented as a Ceph storage cluster daemon type. Additional Resources See Access capabilities for more details. 4.5. Access capabilities This section describes the different access or entity capabilities that can be given to a Ceph user or a Ceph client such as Block Device, Object Storage, File System, and native API. Additionally, you can describe the capability profiles while assigning roles to clients. allow , Description Precedes access settings for a daemon. Implies rw for MDS only r , Description Gives the user read access. Required with monitors to retrieve the CRUSH map. w , Description Gives the user write access to objects. x , Description Gives the user the capability to call class methods, that is, both read and write , and to conduct auth operations on monitors. class-read , Description Gives the user the capability to call class read methods. Subset of x . class-write , Description Gives the user the capability to call class write methods. Subset of x . *, all , Description Gives the user read , write , and execute permissions for a particular daemon or a pool, as well as the ability to execute admin commands. The following entries describe valid capability profiles: profile osd , Description This is applicable to Ceph Monitor only.
Gives a user permissions to connect as an OSD to other OSDs or monitors. Conferred on OSDs to enable OSDs to handle replication heartbeat traffic and status reporting. profile mds , Description This is applicable to Ceph Monitor only. Gives a user permissions to connect as an MDS to other MDSs or monitors. profile bootstrap-osd , Description This is applicable to Ceph Monitor only. Gives a user permissions to bootstrap an OSD. Conferred on deployment tools, such as ceph-volume and cephadm , so that they have permissions to add keys when bootstrapping an OSD. profile bootstrap-mds , Description This is applicable to Ceph Monitor only. Gives a user permissions to bootstrap a metadata server. Conferred on deployment tools, such as cephadm , so that they have permissions to add keys when bootstrapping a metadata server. profile bootstrap-rbd , Description This is applicable to Ceph Monitor only. Gives a user permissions to bootstrap an RBD user. Conferred on deployment tools, such as cephadm , so that they have permissions to add keys when bootstrapping an RBD user. profile bootstrap-rbd-mirror , Description This is applicable to Ceph Monitor only. Gives a user permissions to bootstrap an rbd-mirror daemon user. Conferred on deployment tools, such as cephadm , so that they have permissions to add keys when bootstrapping an rbd-mirror daemon. profile rbd , Description This is applicable to Ceph Monitor, Ceph Manager, and Ceph OSDs. Gives a user permissions to manipulate RBD images. When used as a Monitor cap, it provides the user with the minimal privileges required by an RBD client application; such privileges include the ability to blocklist other client users. When used as an OSD cap, it provides an RBD client application with read-write access to the specified pool. The Manager cap supports optional pool and namespace keyword arguments. profile rbd-mirror , Description This is applicable to Ceph Monitor only. Gives a user permissions to manipulate RBD images and retrieve RBD mirroring config-key secrets. It provides the minimal privileges required for the user to manipulate the rbd-mirror daemon. profile rbd-read-only , Description This is applicable to Ceph Monitor and Ceph OSDS. Gives a user read-only permissions to RBD images. The Manager cap supports optional pool and namespace keyword arguments. profile simple-rados-client , Description This is applicable to Ceph Monitor only. Gives a user read-only permissions for monitor, OSD, and PG data. Intended for use by direct librados client applications. profile simple-rados-client-with-blocklist , Description This is applicable to Ceph Monitor only. Gives a user read-only permissions for monitor, OSD, and PG data. Intended for use by direct librados client applications. Also includes permissions to add blocklist entries to build high-availability (HA) applications. profile fs-client , Description This is applicable to Ceph Monitor only. Gives a user read-only permissions for monitor, OSD, PG, and MDS data. Intended for CephFS clients. profile role-definer , Description This is applicable to Ceph Monitor and Auth. Gives user all permissions for the auth subsystem, read-only access to monitors, and nothing else. Useful for automation tools. WARNING: Do not assign this unless you really, know what you are doing, as the security ramifications are substantial and pervasive. profile crash , Description This is applicable to Ceph Monitor and Ceph Manager. Gives a user read-only access to monitors. 
Used in conjunction with the manager crash module to upload daemon crash dumps into monitor storage for later analysis. Additional Resources See User capabilities for more details. 4.6. Creating user capabilities Create role-based access users with different capabilities on the Ceph dashboard. For details on different user capabilities, see User capabilities and Access capabilities . Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. Procedure From the dashboard navigation, go to Administration->Ceph Users . Click Create . In the Create User form, provide the following details: User entity : Enter as TYPE . ID . Entity : This can be mon , mgr , osd , or mds . Entity Capabilities : Enter the capabilities that you want to provide to the user. For example, 'allow *' and profile crash are some of the capabilities that can be assigned to the client. Note You can add more entities to the user, based on the requirement. Click Create User . A notification displays that the user is created successfully. 4.7. Editing user capabilities Edit the roles of users or clients on the dashboard. For details on different user capabilities, see User capabilities and Access capabilities . Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. Procedure From the dashboard navigation, go to Administration->Ceph Users . Select the user whose roles you want to edit. Click Edit . In the Edit User form, edit the Entity and Entity Capabilities , as needed. Note You can add more entities to the user based on the requirement. Click Edit User . A notification displays that the user is successfully edited. 4.8. Importing user capabilities Import the roles of users or clients from the local host to the client on the dashboard. For details on different user capabilities, see User capabilities and Access capabilities . Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. Procedure Create a keyring file on the local host: Example From the dashboard navigation, go to Administration->Ceph Users . Select the user whose roles you want to import. Select Edit->Import . In the Import User form, click Choose File . Browse to the file on your local host and select it. Click Import User . A notification displays that the keys are successfully imported. 4.9. Exporting user capabilities Export the roles of the users or clients from the dashboard to the local host. For details on different user capabilities, see User capabilities and Access capabilities . Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. Procedure From the dashboard navigation, go to Administration->Ceph Users . Select the user whose roles you want to export. Select Export from the action drop-down. From the Ceph user export data dialog, click Copy to Clipboard . A notification displays that the keys are successfully copied. On your local system, create a keyring file and paste the keys: Example Click Close . 4.10. Deleting user capabilities Delete the roles of users or clients on the dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Admin-level access to the dashboard. Procedure From the dashboard navigation, go to Administration->Ceph Users . Select the user that you want to delete and select Delete from the action drop-down. In the Delete user dialog, select Yes, I am sure .
Click Delete user . A notification displays that the user is deleted successfully.
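For reference, a hedged sketch of the kind of entity and capabilities you might enter in the Create User form described above, shown here in the same keyring form as the import and export examples in the command listing for this chapter; the client name and pool are assumptions:

    [client.dashboard-test]
        caps mon = "allow r"
        caps mgr = "profile rbd"
        caps osd = "allow rw pool=testpool"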
|
[
"[localhost:~]USD cat import.keyring [client.test11] key = AQD9S29kmjgJFxAAkvhFar6Af3AWKDY2DsULRg== caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow r\"",
"[localhost:~]USD cat exported.keyring [client.test11] key = AQD9S29kmjgJFxAAkvhFar6Af3AWKDY2DsULRg== caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow r\""
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/dashboard_guide/management-of-users-on-the-ceph-dashboard
|
16.6.2. Common pam_timestamp Directives
|
16.6.2. Common pam_timestamp Directives The pam_timestamp.so module accepts several directives. Below are the two most commonly used options: timestamp_timeout - Specifies the number of seconds during which the timestamp file is valid. The default value is 300 seconds (five minutes). timestampdir - Specifies the directory in which the timestamp file is stored. The default value is /var/run/sudo/ . For more information about controlling the pam_timestamp.so module, refer to Section 16.8.1, "Installed Documentation" .
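A hedged sketch of how these directives might appear in a PAM configuration file; the stack placement and the values shown are illustrative assumptions:

    auth     sufficient   pam_timestamp.so timestamp_timeout=600
    session  optional     pam_timestamp.so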
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-pam-timestamp-directives
|
8.3. Kernel Same-page Merging (KSM)
|
8.3. Kernel Same-page Merging (KSM) Kernel same-page Merging (KSM), used by the KVM hypervisor, allows KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication. The concept of shared memory is common in modern operating systems. For example, when a program is first started, it shares all of its memory with the parent program. When either the child or parent program tries to modify this memory, the kernel allocates a new memory region, copies the original contents, and allows the program to modify this new region. This is known as copy on write. KSM is a Linux feature which uses this concept in reverse. KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page. This page is then marked copy on write. If the contents of the page are modified by a guest virtual machine, a new page is created for that guest. This is useful for virtualization with KVM. When a guest virtual machine is started, it only inherits the memory from the host qemu-kvm process. Once the guest is running, the contents of the guest operating system image can be shared when guests are running the same operating system or applications. KSM allows KVM to request that these identical guest memory regions be shared. KSM provides enhanced memory speed and utilization. With KSM, common process data is stored in cache or in main memory. This reduces cache misses for the KVM guests, which can improve performance for some applications and operating systems. Secondly, sharing memory reduces the overall memory usage of guests, which allows for higher densities and greater utilization of resources. Note In Red Hat Enterprise Linux 7, KSM is NUMA aware. This allows it to take NUMA locality into account while coalescing pages, thus preventing performance drops related to pages being moved to a remote node. Red Hat recommends avoiding cross-node memory merging when KSM is in use. If KSM is in use, change the /sys/kernel/mm/ksm/merge_across_nodes tunable to 0 to avoid merging pages across NUMA nodes. This can be done with the virsh node-memory-tune --shm-merge-across-nodes 0 command. Kernel memory accounting statistics can eventually contradict each other after large amounts of cross-node merging. As such, numad can become confused after the KSM daemon merges large amounts of memory. If your system has a large amount of free memory, you may achieve higher performance by turning off and disabling the KSM daemon. See Chapter 9, NUMA for more information on NUMA. Important Ensure the swap size is sufficient for the committed RAM even without taking KSM into account. KSM reduces the RAM usage of identical or similar guests. Overcommitting guests with KSM without sufficient swap space may be possible, but is not recommended because guest virtual machine memory use can result in pages becoming unshared. Red Hat Enterprise Linux uses two separate methods for controlling KSM: The ksm service starts and stops the KSM kernel thread. The ksmtuned service controls and tunes the ksm service, dynamically managing same-page merging. ksmtuned starts the ksm service and stops the ksm service if memory sharing is not necessary. When new guests are created or destroyed, ksmtuned must be instructed with the retune parameter to run.
Both the ksm and the ksmtuned services are controlled with the standard service management tools. Note KSM is off by default on Red Hat Enterprise Linux 6.7. 8.3.1. The KSM Service The ksm service is included in the qemu-kvm package. When the ksm service is not started, kernel same-page merging (KSM) shares only 2000 pages. This default value provides limited memory-saving benefits. When the ksm service is started, KSM will share up to half of the host system's main memory. Start the ksm service to enable KSM to share more memory. The ksm service can be added to the default startup sequence. Make the ksm service persistent with the systemctl command. 8.3.2. The KSM Tuning Service The ksmtuned service fine-tunes the kernel same-page merging (KSM) configuration by looping and adjusting ksm . In addition, the ksmtuned service is notified by libvirt when a guest virtual machine is created or destroyed. The ksmtuned service has no options of its own; however, it accepts the retune parameter, which instructs ksmtuned to run its tuning functions manually. The /etc/ksmtuned.conf file is the configuration file for the ksmtuned service. The file output below is the default ksmtuned.conf file: Within the /etc/ksmtuned.conf file, npages sets how many pages ksm will scan before the ksmd daemon becomes inactive. This value will also be set in the /sys/kernel/mm/ksm/pages_to_scan file. The KSM_THRES_CONST value represents the amount of available memory used as a threshold to activate ksm . ksmd is activated if either of the following occurs: The amount of free memory drops below the threshold, set in KSM_THRES_CONST . The amount of committed memory plus the threshold, KSM_THRES_CONST , exceeds the total amount of memory. 8.3.3. KSM Variables and Monitoring Kernel same-page merging (KSM) stores monitoring data in the /sys/kernel/mm/ksm/ directory. Files in this directory are updated by the kernel and are an accurate record of KSM usage and statistics. The variables in the list below are also configurable variables in the /etc/ksmtuned.conf file, as noted above. Files in /sys/kernel/mm/ksm/ : full_scans Full scans run. merge_across_nodes Whether pages from different NUMA nodes can be merged. pages_shared Total pages shared. pages_sharing Pages currently shared. pages_to_scan Number of pages to scan before ksmd goes to sleep. pages_unshared Pages no longer shared. pages_volatile Number of volatile pages. run Whether the KSM process is running. sleep_millisecs Sleep milliseconds. These variables can be manually tuned using the virsh node-memory-tune command. For example, the following specifies the number of pages to scan before the shared memory service goes to sleep: KSM tuning activity is stored in the /var/log/ksmtuned log file if the DEBUG=1 line is added to the /etc/ksmtuned.conf file. The log file location can be changed with the LOGFILE parameter. Changing the log file location is not advised and may require special configuration of SELinux settings. 8.3.4. Deactivating KSM Kernel same-page merging (KSM) has a performance overhead which may be too large for certain environments or host systems. KSM may also introduce side channels that could potentially be used to leak information across guests. If this is a concern, KSM can be disabled on a per-guest basis. KSM can be deactivated by stopping the ksmtuned and the ksm services. However, this action does not persist after restarting. To deactivate KSM, run the following in a terminal as root: Stopping the ksmtuned and the ksm services deactivates KSM, but this action does not persist after restarting.
Persistently deactivate KSM with the systemctl commands: When KSM is disabled, any memory pages that were shared prior to deactivating KSM are still shared. To delete all of the PageKSM in the system, use the following command: After this is performed, the khugepaged daemon can rebuild transparent hugepages on the KVM guest physical memory. Using # echo 0 >/sys/kernel/mm/ksm/run stops KSM, but does not unshare all the previously created KSM pages (this is the same as the # systemctl stop ksmtuned command).
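Whether KSM is worth keeping enabled, or should be deactivated as described above, is easier to judge from the monitoring files listed in Section 8.3.3. The following Python sketch is an illustrative example (not part of the product): it reads the /sys/kernel/mm/ksm/ files and derives an approximate memory saving from pages_sharing; the page-size and saving calculations are assumptions based on the standard sysfs interface.

import resource
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")
FIELDS = ("run", "full_scans", "pages_shared", "pages_sharing",
          "pages_unshared", "pages_volatile", "sleep_millisecs")

def ksm_stats() -> dict:
    """Read the KSM monitoring files described in Section 8.3.3."""
    stats = {name: int((KSM / name).read_text()) for name in FIELDS}
    page_size = resource.getpagesize()  # typically 4096 bytes
    # pages_sharing reports how many additional page references have been
    # deduplicated into the pages_shared copies, so it approximates the
    # number of duplicate pages KSM has eliminated.
    stats["approx_saved_mib"] = round(
        stats["pages_sharing"] * page_size / (1024 * 1024), 1)
    return stats

if __name__ == "__main__":
    for name, value in ksm_stats().items():
        print(f"{name}: {value}")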
|
[
"systemctl start ksm Starting ksm: [ OK ]",
"systemctl enable ksm",
"systemctl start ksmtuned Starting ksmtuned: [ OK ]",
"Configuration file for ksmtuned. How long ksmtuned should sleep between tuning adjustments KSM_MONITOR_INTERVAL=60 Millisecond sleep between ksm scans for 16Gb server. Smaller servers sleep more, bigger sleep less. KSM_SLEEP_MSEC=10 KSM_NPAGES_BOOST - is added to the `npages` value, when `free memory` is less than `thres`. KSM_NPAGES_BOOST=300 KSM_NPAGES_DECAY - is the value given is subtracted to the `npages` value, when `free memory` is greater than `thres`. KSM_NPAGES_DECAY=-50 KSM_NPAGES_MIN - is the lower limit for the `npages` value. KSM_NPAGES_MIN=64 KSM_NPAGES_MAX - is the upper limit for the `npages` value. KSM_NPAGES_MAX=1250 KSM_THRES_COEF - is the RAM percentage to be calculated in parameter `thres`. KSM_THRES_COEF=20 KSM_THRES_CONST - If this is a low memory system, and the `thres` value is less than `KSM_THRES_CONST`, then reset `thres` value to `KSM_THRES_CONST` value. KSM_THRES_CONST=2048 uncomment the following to enable ksmtuned debug information LOGFILE=/var/log/ksmtuned DEBUG=1",
"virsh node-memory-tune --shm-pages-to-scan number",
"systemctl stop ksmtuned Stopping ksmtuned: [ OK ] systemctl stop ksm Stopping ksm: [ OK ]",
"systemctl disable ksm systemctl disable ksmtuned",
"echo 2 >/sys/kernel/mm/ksm/run"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/chap-KSM
|
Chapter 9. PrometheusRule [monitoring.coreos.com/v1]
|
Chapter 9. PrometheusRule [monitoring.coreos.com/v1] Description The PrometheusRule custom resource definition (CRD) defines [alerting]( https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ ) and [recording]( https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/ ) rules to be evaluated by Prometheus or ThanosRuler objects. Prometheus and ThanosRuler objects select PrometheusRule objects using label and namespace selectors. Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of desired alerting rule definitions for Prometheus. 9.1.1. .spec Description Specification of desired alerting rule definitions for Prometheus. Type object Property Type Description groups array Content of Prometheus rule file groups[] object RuleGroup is a list of sequentially evaluated recording and alerting rules. 9.1.2. .spec.groups Description Content of Prometheus rule file Type array 9.1.3. .spec.groups[] Description RuleGroup is a list of sequentially evaluated recording and alerting rules. Type object Required name Property Type Description interval string Interval determines how often rules in the group are evaluated. limit integer Limit the number of alerts an alerting rule and series a recording rule can produce. Limit is supported starting with Prometheus >= 2.31 and Thanos Ruler >= 0.24. name string Name of the rule group. partial_response_strategy string PartialResponseStrategy is only used by ThanosRuler and will be ignored by Prometheus instances. More info: https://github.com/thanos-io/thanos/blob/main/docs/components/rule.md#partial-response query_offset string Defines the offset the rule evaluation timestamp of this particular group by the specified duration into the past. It requires Prometheus >= v2.53.0. It is not supported for ThanosRuler. rules array List of alerting and recording rules. rules[] object Rule describes an alerting or recording rule See Prometheus documentation: [alerting]( https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ ) or [recording]( https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules ) rule 9.1.4. .spec.groups[].rules Description List of alerting and recording rules. Type array 9.1.5. .spec.groups[].rules[] Description Rule describes an alerting or recording rule See Prometheus documentation: [alerting]( https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ ) or [recording]( https://www.prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules ) rule Type object Required expr Property Type Description alert string Name of the alert. Must be a valid label value. Only one of record and alert must be set. 
annotations object (string) Annotations to add to each alert. Only valid for alerting rules. expr integer-or-string PromQL expression to evaluate. for string Alerts are considered firing once they have been returned for this long. keep_firing_for string KeepFiringFor defines how long an alert will continue firing after the condition that triggered it has cleared. labels object (string) Labels to add or overwrite. record string Name of the time series to output to. Must be a valid metric name. Only one of record and alert must be set. 9.2. API endpoints The following API endpoints are available: /apis/monitoring.coreos.com/v1/prometheusrules GET : list objects of kind PrometheusRule /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules DELETE : delete collection of PrometheusRule GET : list objects of kind PrometheusRule POST : create a PrometheusRule /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules/{name} DELETE : delete a PrometheusRule GET : read the specified PrometheusRule PATCH : partially update the specified PrometheusRule PUT : replace the specified PrometheusRule 9.2.1. /apis/monitoring.coreos.com/v1/prometheusrules HTTP method GET Description list objects of kind PrometheusRule Table 9.1. HTTP responses HTTP code Reponse body 200 - OK PrometheusRuleList schema 401 - Unauthorized Empty 9.2.2. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules HTTP method DELETE Description delete collection of PrometheusRule Table 9.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind PrometheusRule Table 9.3. HTTP responses HTTP code Reponse body 200 - OK PrometheusRuleList schema 401 - Unauthorized Empty HTTP method POST Description create a PrometheusRule Table 9.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.5. Body parameters Parameter Type Description body PrometheusRule schema Table 9.6. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 201 - Created PrometheusRule schema 202 - Accepted PrometheusRule schema 401 - Unauthorized Empty 9.2.3. /apis/monitoring.coreos.com/v1/namespaces/{namespace}/prometheusrules/{name} Table 9.7. 
Global path parameters Parameter Type Description name string name of the PrometheusRule HTTP method DELETE Description delete a PrometheusRule Table 9.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PrometheusRule Table 9.10. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PrometheusRule Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.12. HTTP responses HTTP code Reponse body 200 - OK PrometheusRule schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PrometheusRule Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.14. Body parameters Parameter Type Description body PrometheusRule schema Table 9.15. HTTP responses HTTP code Response body 200 - OK PrometheusRule schema 201 - Created PrometheusRule schema 401 - Unauthorized Empty
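As a concrete illustration of the spec fields documented above, the following Python sketch assembles a minimal PrometheusRule object with one rule group and one alerting rule and prints it. The group name, alert name, namespace, and PromQL expression are hypothetical examples, not values taken from this reference; the printed JSON mirrors the YAML you would create through the endpoints listed in section 9.2.

import json

prometheus_rule = {
    "apiVersion": "monitoring.coreos.com/v1",
    "kind": "PrometheusRule",
    "metadata": {"name": "example-alerts", "namespace": "openshift-monitoring"},
    "spec": {
        "groups": [
            {
                "name": "example.rules",
                "interval": "30s",  # how often the rules in this group are evaluated
                "rules": [
                    {
                        # Alerting rule: sets `alert` and `expr`; a recording
                        # rule would set `record` instead of `alert`.
                        "alert": "HighErrorRate",
                        "expr": 'sum(rate(http_requests_total{code=~"5.."}[5m])) > 1',
                        "for": "10m",  # must keep returning results this long before firing
                        "labels": {"severity": "warning"},
                        "annotations": {"summary": "HTTP 5xx rate is above threshold"},
                    }
                ],
            }
        ]
    },
}

# Printed as JSON here; the same structure, expressed as YAML, is what you
# would POST or PUT to the PrometheusRule endpoints.
print(json.dumps(prometheus_rule, indent=2))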
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/monitoring_apis/prometheusrule-monitoring-coreos-com-v1
|
Migrating applications to Red Hat build of Quarkus 3.15
|
Migrating applications to Red Hat build of Quarkus 3.15 Red Hat build of Quarkus 3.15 Red Hat Customer Content Services
|
[
"quarkus -v 3.15.3",
"quarkus update",
"quarkus update --stream=3.15",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:update",
"mvn com.redhat.quarkus.platform:quarkus-maven-plugin:3.15.3.SP1-redhat-00002:update -Dstream=3.15",
"%test.quarkus.oidc.auth-server-url=USD{keycloak.url}/realms/quarkus/",
"%test.quarkus.oidc.auth-server-url=USD{keycloak.url:replaced-by-test-resource}/realms/quarkus/",
"@Inject RemoteCache<String, Book> booksCache; ... QueryFactory queryFactory = Search.getQueryFactory(booksCache); Query query = queryFactory.create(\"from book_sample.Book\"); List<Book> list = query.execute().list();",
"@Inject RemoteCache<String, Book> booksCache; ... Query<Book> query = booksCache.<Book>query(\"from book_sample.Book\"); List<Book> list = query.execute().list();",
"<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-extension-processor</artifactId> <version>USD{quarkus.version}</version> </path> </annotationProcessorPaths> <compilerArgs> <arg>-AlegacyConfigRoot=true</arg> </compilerArgs> </configuration> </plugin>",
"<plugin> <artifactId>maven-compiler-plugin</artifactId> <executions> <execution> <id>default-compile</id> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-extension-processor</artifactId> <version>USD{quarkus.version}</version> </path> </annotationProcessorPaths> <compilerArgs> <arg>-AlegacyConfigRoot=true</arg> </compilerArgs> </configuration> </execution> </executions> </plugin>",
"<build> <plugins> <!-- other plugins --> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.13.0</version> <!-- Necessary for proper dependency management in annotationProcessorPaths --> <configuration> <annotationProcessorPaths> <path> <groupId>io.quarkus</groupId> <artifactId>quarkus-panache-common</artifactId> </path> </annotationProcessorPaths> </configuration> </plugin> <!-- other plugins --> </plugins> </build>",
"dependencies { annotationProcessor \"io.quarkus:quarkus-panache-common\" }",
"package org.acme; import org.eclipse.microprofile.reactive.messaging.Incoming; import org.eclipse.microprofile.reactive.messaging.Outgoing; @Incoming(\"source\") @Outgoing(\"sink\") public Result process(int payload) { return new Result(payload); }",
"package org.acme; import io.smallrye.common.annotation.NonBlocking; import org.eclipse.microprofile.reactive.messaging.Incoming; @Incoming(\"source\") @NonBlocking public void consume(int payload) { // called on I/O thread }",
"<properties> <junit-pioneer.version>2.2.0</junit-pioneer.version> </properties>",
"@Path(\"/records\") public class RecordsResource { @Inject HalService halService; @GET @Produces({ MediaType.APPLICATION_JSON, RestMediaType.APPLICATION_HAL_JSON }) @RestLink(rel = \"list\") public HalCollectionWrapper<Record> getAll() { List<Record> list = // HalCollectionWrapper<Record> halCollection = halService.toHalCollectionWrapper( list, \"collectionName\", Record.class); // return halCollection; } @GET @Produces({ MediaType.APPLICATION_JSON, RestMediaType.APPLICATION_HAL_JSON }) @Path(\"/{id}\") @RestLink(rel = \"self\") @InjectRestLinks(RestLinkType.INSTANCE) public HalEntityWrapper<Record> get(@PathParam(\"id\") int id) { Record entity = // HalEntityWrapper<Record> halEntity = halService.toHalWrapper(entity); // return halEntity; } }",
"package io.quarkus.resteasy.reactive.server.test.customproviders; import jakarta.ws.rs.NotFoundException; import jakarta.ws.rs.core.Response; import jakarta.ws.rs.ext.ExceptionMapper; import jakarta.ws.rs.ext.Provider; @Provider public class NotFoundExeptionMapper implements ExceptionMapper<NotFoundException> { @Override public Response toResponse(NotFoundException exception) { return Response.status(404).build(); } }"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html-single/migrating_applications_to_red_hat_build_of_quarkus_3.15/index
|
Chapter 11. Getting started with OptaPlanner in Business Central: An employee rostering example
|
Chapter 11. Getting started with OptaPlanner in Business Central: An employee rostering example You can build and deploy the employee-rostering sample project in Business Central. The project demonstrates how to create each of the Business Central assets required to solve the shift rostering planning problem and use Red Hat build of OptaPlanner to find the best possible solution. You can deploy the preconfigured employee-rostering project in Business Central. Alternatively, you can create the project yourself using Business Central. Note The employee-rostering sample project in Business Central does not include a data set. You must supply a data set in XML format using a REST API call. 11.1. Deploying the employee rostering sample project in Business Central Business Central includes a number of sample projects that you can use to get familiar with the product and its features. The employee rostering sample project is designed and created to demonstrate the shift rostering use case for Red Hat build of OptaPlanner. Use the following procedure to deploy and run the employee rostering sample in Business Central. Prerequisites Red Hat Decision Manager has been downloaded and installed. For installation options, see Planning a Red Hat Decision Manager installation . You have started Red Hat Decision Manager, as described in the installation documentation, and you are logged in to Business Central as a user with admin permissions. Procedure In Business Central, click Menu Design Projects . In the preconfigured MySpace space, click Try Samples . Select employee-rostering from the list of sample projects and click Ok in the upper-right corner to import the project. After the asset list has compiled, click Build & Deploy to deploy the employee rostering example. The rest of this document explains each of the project assets and their configuration. 11.2. Re-creating the employee rostering sample project The employee rostering sample project is a preconfigured project available in Business Central. You can learn how to deploy this project in Section 11.1, "Deploying the employee rostering sample project in Business Central" . You can create the employee rostering example "from scratch". You can use the workflow in this example to create a similar project of your own in Business Central. 11.2.1. Setting up the employee rostering project To start developing a solver in Business Central, you must set up the project. Prerequisites Red Hat Decision Manager has been downloaded and installed. You have deployed Business Central and logged in with a user that has the admin role. Procedure Create a new project in Business Central by clicking Menu Design Projects Add Project . In the Add Project window, fill out the following fields: Name : employee-rostering Description (optional): Employee rostering problem optimization using OptaPlanner. Assigns employees to shifts based on their skill. Optional: Click Configure Advanced Options to populate the Group ID , Artifact ID , and Version information. Group ID : employeerostering Artifact ID : employeerostering Version : 1.0.0-SNAPSHOT Click Add to add the project to the Business Central project repository. 11.2.2. Problem facts and planning entities Each of the domain classes in the employee rostering planning problem is categorized as one of the following: An unrelated class: not used by any of the score constraints. From a planning standpoint, this data is obsolete.
A problem fact class: used by the score constraints, but does not change during planning (as long as the problem stays the same), for example, Shift and Employee . All the properties of a problem fact class are problem properties. A planning entity class: used by the score constraints and changes during planning, for example, ShiftAssignment . The properties that change during planning are planning variables . The other properties are problem properties. Ask yourself the following questions: What class changes during planning? Which class has variables that I want the Solver to change? That class is a planning entity. A planning entity class needs to be annotated with the @PlanningEntity annotation, or defined in Business Central using the Red Hat build of OptaPlanner dock in the domain designer. Each planning entity class has one or more planning variables , and must also have one or more defining properties. Most use cases have only one planning entity class, and only one planning variable per planning entity class. 11.2.3. Creating the data model for the employee rostering project Use this section to create the data objects required to run the employee rostering sample project in Business Central. Prerequisites You have completed the project setup described in Section 11.2.1, "Setting up the employee rostering project" . Procedure With your new project, either click Data Object in the project perspective, or click Add Asset Data Object to create a new data object. Name the first data object Timeslot , and select employeerostering.employeerostering as the Package . Click Ok . In the Data Objects perspective, click +add field to add fields to the Timeslot data object. In the id field, type endTime . Click the drop-down menu to Type and select LocalDateTime . Click Create and continue to add another field. Add another field with the id startTime and Type LocalDateTime . Click Create . Click Save in the upper-right corner to save the Timeslot data object. Click the x in the upper-right corner to close the Data Objects perspective and return to the Assets menu. Using the steps, create the following data objects and their attributes: Table 11.1. Skill id Type name String Table 11.2. Employee id Type name String skills employeerostering.employeerostering.Skill[List] Table 11.3. Shift id Type requiredSkill employeerostering.employeerostering.Skill timeslot employeerostering.employeerostering.Timeslot Table 11.4. DayOffRequest id Type date LocalDate employee employeerostering.employeerostering.Employee Table 11.5. ShiftAssignment id Type employee employeerostering.employeerostering.Employee shift employeerostering.employeerostering.Shift For more examples of creating data objects, see Getting started with decision services . 11.2.3.1. Creating the employee roster planning entity In order to solve the employee rostering planning problem, you must create a planning entity and a solver. The planning entity is defined in the domain designer using the attributes available in the Red Hat build of OptaPlanner dock. Use the following procedure to define the ShiftAssignment data object as the planning entity for the employee rostering example. Prerequisites You have created the relevant data objects and planning entity required to run the employee rostering example by completing the procedures in Section 11.2.3, "Creating the data model for the employee rostering project" . Procedure From the project Assets menu, open the ShiftAssignment data object. 
In the Data Objects perspective, open the OptaPlanner dock by clicking the on the right. Select Planning Entity . Select employee from the list of fields under the ShiftAssignment data object. In the OptaPlanner dock, select Planning Variable . In the Value Range Id input field, type employeeRange . This adds the @ValueRangeProvider annotation to the planning entity, which you can view by clicking the Source tab in the designer. The value range of a planning variable is defined with the @ValueRangeProvider annotation. A @ValueRangeProvider annotation always has a property id , which is referenced by the @PlanningVariable property valueRangeProviderRefs . Close the dock and click Save to save the data object. 11.2.3.2. Creating the employee roster planning solution The employee roster problem relies on a defined planning solution. The planning solution is defined in the domain designer using the attributes available in the Red Hat build of OptaPlanner dock. Prerequisites You have created the relevant data objects and planning entity required to run the employee rostering example by completing the procedures in Section 11.2.3, "Creating the data model for the employee rostering project" and Section 11.2.3.1, "Creating the employee roster planning entity" . Procedure Create a new data object with the identifier EmployeeRoster . Create the following fields: Table 11.6. EmployeeRoster id Type dayOffRequestList employeerostering.employeerostering.DayOffRequest[List] shiftAssignmentList employeerostering.employeerostering.ShiftAssignment[List] shiftList employeerostering.employeerostering.Shift[List] skillList employeerostering.employeerostering.Skill[List] timeslotList employeerostering.employeerostering.Timeslot[List] In the Data Objects perspective, open the OptaPlanner dock by clicking the on the right. Select Planning Solution . Leave the default Hard soft score as the Solution Score Type . This automatically generates a score field in the EmployeeRoster data object with the solution score as the type. Add a new field with the following attributes: id Type employeeList employeerostering.employeerostering.Employee[List] With the employeeList field selected, open the OptaPlanner dock and select the Planning Value Range Provider box. In the id field, type employeeRange . Close the dock. Click Save in the upper-right corner to save the asset. 11.2.4. Employee rostering constraints Employee rostering is a planning problem. All planning problems include constraints that must be satisfied in order to find an optimal solution. The employee rostering sample project in Business Central includes the following hard and soft constraints: Hard constraint Employees are only assigned one shift per day. All shifts that require a particular employee skill are assigned an employee with that particular skill. Soft constraints All employees are assigned a shift. If an employee requests a day off, their shift is reassigned to another employee. Hard and soft constraints are defined in Business Central using either the free-form DRL designer, or using guided rules. 11.2.4.1. DRL (Drools Rule Language) rules DRL (Drools Rule Language) rules are business rules that you define directly in .drl text files. These DRL files are the source in which all other rule assets in Business Central are ultimately rendered. 
You can create and manage DRL files within the Business Central interface, or create them externally as part of a Maven or Java project using Red Hat CodeReady Studio or another integrated development environment (IDE). A DRL file can contain one or more rules that define at a minimum the rule conditions ( when ) and actions ( then ). The DRL designer in Business Central provides syntax highlighting for Java, DRL, and XML. DRL files consist of the following components: Components in a DRL file The following example DRL rule determines the age limit in a loan application decision service: Example rule for loan application age limit A DRL file can contain single or multiple rules, queries, and functions, and can define resource declarations such as imports, globals, and attributes that are assigned and used by your rules and queries. The DRL package must be listed at the top of a DRL file and the rules are typically listed last. All other DRL components can follow any order. Each rule must have a unique name within the rule package. If you use the same rule name more than once in any DRL file in the package, the rules fail to compile. Always enclose rule names with double quotation marks ( rule "rule name" ) to prevent possible compilation errors, especially if you use spaces in rule names. All data objects related to a DRL rule must be in the same project package as the DRL file in Business Central. Assets in the same package are imported by default. Existing assets in other packages can be imported with the DRL rule. 11.2.4.2. Defining constraints for employee rostering using the DRL designer You can create constraint definitions for the employee rostering example using the free-form DRL designer in Business Central. Use this procedure to create a hard constraint where no employee is assigned a shift that begins less than 10 hours after their shift ended. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset DRL file . In the DRL file name field, type ComplexScoreRules . Select the employeerostering.employeerostering package. Click +Ok to create the DRL file. In the Model tab of the DRL designer, define the Employee10HourShiftSpace rule as a DRL file: Click Save to save the DRL file. For more information about creating DRL files, see Designing a decision service using DRL rules . 11.2.5. Creating rules for employee rostering using guided rules You can create rules that define hard and soft constraints for employee rostering using the guided rules designer in Business Central. 11.2.5.1. Guided rules Guided rules are business rules that you create in a UI-based guided rules designer in Business Central that leads you through the rule-creation process. The guided rules designer provides fields and options for acceptable input based on the data objects for the rule being defined. The guided rules that you define are compiled into Drools Rule Language (DRL) rules as with all other rule assets. All data objects related to a guided rule must be in the same project package as the guided rule. Assets in the same package are imported by default. After you create the necessary data objects and the guided rule, you can use the Data Objects tab of the guided rules designer to verify that all required data objects are listed or to import other existing data objects by adding a New item . 11.2.5.2. 
Creating a guided rule to balance employee shift numbers The BalanceEmployeesShiftNumber guided rule creates a soft constraint that ensures shifts are assigned to employees in a way that is balanced as evenly as possible. It does this by creating a score penalty that increases when shift distribution is less even. The score formula, implemented by the rule, incentivizes the Solver to distribute shifts in a more balanced way. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter BalanceEmployeesShiftNumber as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Employee in the Add a condition to the rule window. Click +Ok . Click the Employee condition to modify the constraints and add the variable name USDemployee . Add the WHEN condition From Accumulate . Above the From Accumulate condition, click click to add pattern and select Number as the fact type from the drop-down list. Add the variable name USDshiftCount to the Number condition. Below the From Accumulate condition, click click to add pattern and select the ShiftAssignment fact type from the drop-down list. Add the variable name USDshiftAssignment to the ShiftAssignment fact type. Click the ShiftAssignment condition again and from the Add a restriction on a field drop-down list, select employee . Select equal to from the drop-down list to the employee constraint. Click the icon to the drop-down button to add a variable, and click Bound variable in the Field value window. Select USDemployee from the drop-down list. In the Function box type count(USDshiftAssignment) . Add the THEN condition by clicking the in the THEN field. Select Modify Soft Score in the Add a new action window. Click +Ok . Type the following expression into the box: -(USDshiftCount.intValue()*USDshiftCount.intValue()) Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 11.2.5.3. Creating a guided rule for no more than one shift per day The OneEmployeeShiftPerDay guided rule creates a hard constraint that employees are not assigned more than one shift per day. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter OneEmployeeShiftPerDay as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Free form DRL from the Add a condition to the rule window. In the free form DRL box, type the following condition: USDshiftAssignment : ShiftAssignment( employee != null ) ShiftAssignment( this != USDshiftAssignment , employee == USDshiftAssignment.employee , shift.timeslot.startTime.toLocalDate() == USDshiftAssignment.shift.timeslot.startTime.toLocalDate() ) This condition states that a shift cannot be assigned to an employee that already has another shift assignment on the same day. Add the THEN condition by clicking the in the THEN field. 
Select Add free form DRL from the Add a new action window. In the free form DRL box, type the following condition: scoreHolder.addHardConstraintMatch(kcontext, -1); Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 11.2.5.4. Creating a guided rule to match skills to shift requirements The ShiftReqiredSkillsAreMet guided rule creates a hard constraint that ensures all shifts are assigned an employee with the correct set of skills. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter ShiftReqiredSkillsAreMet as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select ShiftAssignment in the Add a condition to the rule window. Click +Ok . Click the ShiftAssignment condition, and select employee from the Add a restriction on a field drop-down list. In the designer, click the drop-down list to employee and select is not null . Click the ShiftAssignment condition, and click Expression editor . In the designer, click [not bound] to open the Expression editor , and bind the expression to the variable USDrequiredSkill . Click Set . In the designer, to USDrequiredSkill , select shift from the first drop-down list, then requiredSkill from the drop-down list. Click the ShiftAssignment condition, and click Expression editor . In the designer, to [not bound] , select employee from the first drop-down list, then skills from the drop-down list. Leave the drop-down list as Choose . In the drop-down box, change please choose to excludes . Click the icon to excludes , and in the Field value window, click the New formula button. Type USDrequiredSkill into the formula box. Add the THEN condition by clicking the in the THEN field. Select Modify Hard Score in the Add a new action window. Click +Ok . Type -1 into the score actions box. Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 11.2.5.5. Creating a guided rule to manage day off requests The DayOffRequest guided rule creates a soft constraint. This constraint allows a shift to be reassigned to another employee in the event the employee who was originally assigned the shift is no longer able to work that day. In the employee rostering example, this constraint is created using the guided rule designer. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Guided Rule . Enter DayOffRequest as the Guided Rule name and select the employeerostering.employeerostering Package . Click Ok to create the rule asset. Add a WHEN condition by clicking the in the WHEN field. Select Free form DRL from the Add a condition to the rule window. 
In the free form DRL box, type the following condition: USDdayOffRequest : DayOffRequest( ) ShiftAssignment( employee == USDdayOffRequest.employee , shift.timeslot.startTime.toLocalDate() == USDdayOffRequest.date ) This condition states if a shift is assigned to an employee who has made a day off request, the employee can be unassigned the shift on that day. Add the THEN condition by clicking the in the THEN field. Select Add free form DRL from the Add a new action window. In the free form DRL box, type the following condition: scoreHolder.addSoftConstraintMatch(kcontext, -100); Click Validate in the upper-right corner to check all rule conditions are valid. If the rule validation fails, address any problems described in the error message, review all components in the rule, and try again to validate the rule until the rule passes. Click Save to save the rule. For more information about creating guided rules, see Designing a decision service using guided rules . 11.2.6. Creating a solver configuration for employee rostering You can create and edit Solver configurations in Business Central. The Solver configuration designer creates a solver configuration that can be run after the project is deployed. Prerequisites Red Hat Decision Manager has been downloaded and installed. You have created and configured all of the relevant assets for the employee rostering example. Procedure In Business Central, click Menu Projects , and click your project to open it. In the Assets perspective, click Add Asset Solver configuration In the Create new Solver configuration window, type the name EmployeeRosteringSolverConfig for your Solver and click Ok . This opens the Solver configuration designer. In the Score Director Factory configuration section, define a KIE base that contains scoring rule definitions. The employee rostering sample project uses defaultKieBase . Select one of the KIE sessions defined within the KIE base. The employee rostering sample project uses defaultKieSession . Click Validate in the upper-right corner to check the Score Director Factory configuration is correct. If validation fails, address any problems described in the error message, and try again to validate until the configuration passes. Click Save to save the Solver configuration. 11.2.7. Configuring Solver termination for the employee rostering project You can configure the Solver to terminate after a specified amount of time. By default, the planning engine is given an unlimited time period to solve a problem instance. The employee rostering sample project is set up to run for 30 seconds. Prerequisites You have created all relevant assets for the employee rostering project and created the EmployeeRosteringSolverConfig solver configuration in Business Central as described in Section 11.2.6, "Creating a solver configuration for employee rostering" . Procedure Open the EmployeeRosteringSolverConfig from the Assets perspective. This will open the Solver configuration designer. In the Termination section, click Add to create new termination element within the selected logical group. Select the Time spent termination type from the drop-down list. This is added as an input field in the termination configuration. Use the arrows to the time elements to adjust the amount of time spent to 30 seconds. Click Validate in the upper-right corner to check the Score Director Factory configuration is correct. If validation fails, address any problems described in the error message, and try again to validate until the configuration passes. 
Click Save to save the Solver configuration. 11.3. Accessing the solver using the REST API After deploying or re-creating the sample solver, you can access it using the REST API. You must register a solver instance using the REST API. Then you can supply data sets and retrieve optimized solutions. Prerequisites The employee rostering project is set up and deployed according to the sections in this document. You can either deploy the sample project, as described in Section 11.1, "Deploying the employee rostering sample project in Business Central" , or re-create the project, as described in Section 11.2, "Re-creating the employee rostering sample project" . 11.3.1. Registering the Solver using the REST API You must register the solver instance using the REST API before you can use the solver. Each solver instance is capable of optimizing one planning problem at a time. Procedure Create a HTTP request using the following header: Register the Solver using the following request: PUT http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver Request body <solver-instance> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> </solver-instance> 11.3.2. Calling the Solver using the REST API After registering the solver instance, you can use the REST API to submit a data set to the solver and to retrieve an optimized solution. Procedure Create a HTTP request using the following header: Submit a request to the Solver with a data set, as in the following example: POST http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver/state/solving Request body <employeerostering.employeerostering.EmployeeRoster> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> 
</employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference="../../shiftList/employeerostering.employeerostering.Shift/timeslot"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift[3]"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference="../../../shiftList/employeerostering.employeerostering.Shift[2]"/> </employeerostering.employeerostering.ShiftAssignment> </shiftAssignmentList> </employeerostering.employeerostering.EmployeeRoster> Request the best solution to the planning problem: GET http://localhost:8080/kie-server/services/rest/server/containers/employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver/bestsolution Example response <solver-instance> <container-id>employee-rostering</container-id> <solver-id>solver1</solver-id> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> <status>NOT_SOLVING</status> <score scoreClass="org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore">0hard/0soft</score> <best-solution class="employeerostering.employeerostering.EmployeeRoster"> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> 
<employeerostering.employeerostering.Shift> <timeslot reference="../../employeerostering.employeerostering.Shift/timeslot"/> <requiredSkill reference="../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill"/> <employeerostering.employeerostering.Skill reference="../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference="../../shiftList/employeerostering.employeerostering.Shift/timeslot"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList/> <score>0hard/0soft</score> </best-solution> </solver-instance>
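The register, submit, and best-solution calls described above can also be scripted. The following Python sketch wraps the documented endpoints; it assumes the third-party requests package is available, and the host, container ID, solver ID, admin:admin credentials, and the employee-roster.xml file name are the example values from this section or placeholders, not fixed product values.

import requests

BASE = ("http://localhost:8080/kie-server/services/rest/server/containers/"
        "employeerostering_1.0.0-SNAPSHOT/solvers/EmployeeRosteringSolver")
AUTH = ("admin", "admin")
HEADERS = {"X-KIE-ContentType": "xstream", "Content-Type": "application/xml"}

def register_solver() -> None:
    # PUT the solver-instance body shown in Section 11.3.1.
    body = ("<solver-instance><solver-config-file>"
            "employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml"
            "</solver-config-file></solver-instance>")
    requests.put(BASE, data=body, headers=HEADERS, auth=AUTH).raise_for_status()

def submit_data_set(xml_payload: str) -> None:
    # xml_payload is an EmployeeRoster document like the one shown above.
    requests.post(f"{BASE}/state/solving", data=xml_payload,
                  headers=HEADERS, auth=AUTH).raise_for_status()

def best_solution() -> str:
    response = requests.get(f"{BASE}/bestsolution", headers=HEADERS, auth=AUTH)
    response.raise_for_status()
    return response.text  # the <solver-instance> document with the best score

if __name__ == "__main__":
    register_solver()
    submit_data_set(open("employee-roster.xml", encoding="utf-8").read())  # placeholder file
    print(best_solution())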
|
[
"package import function // Optional query // Optional declare // Optional global // Optional rule \"rule name\" // Attributes when // Conditions then // Actions end rule \"rule2 name\"",
"rule \"Underage\" salience 15 agenda-group \"applicationGroup\" when USDapplication : LoanApplication() Applicant( age < 21 ) then USDapplication.setApproved( false ); USDapplication.setExplanation( \"Underage\" ); end",
"package employeerostering.employeerostering; rule \"Employee10HourShiftSpace\" when USDshiftAssignment : ShiftAssignment( USDemployee : employee != null, USDshiftEndDateTime : shift.timeslot.endTime) ShiftAssignment( this != USDshiftAssignment, USDemployee == employee, USDshiftEndDateTime <= shift.timeslot.endTime, USDshiftEndDateTime.until(shift.timeslot.startTime, java.time.temporal.ChronoUnit.HOURS) <10) then scoreHolder.addHardConstraintMatch(kcontext, -1); end",
"USDshiftAssignment : ShiftAssignment( employee != null ) ShiftAssignment( this != USDshiftAssignment , employee == USDshiftAssignment.employee , shift.timeslot.startTime.toLocalDate() == USDshiftAssignment.shift.timeslot.startTime.toLocalDate() )",
"scoreHolder.addHardConstraintMatch(kcontext, -1);",
"USDdayOffRequest : DayOffRequest( ) ShiftAssignment( employee == USDdayOffRequest.employee , shift.timeslot.startTime.toLocalDate() == USDdayOffRequest.date )",
"scoreHolder.addSoftConstraintMatch(kcontext, -100);",
"authorization: admin:admin X-KIE-ContentType: xstream content-type: application/xml",
"<solver-instance> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> </solver-instance>",
"authorization: admin:admin X-KIE-ContentType: xstream content-type: application/xml",
"<employeerostering.employeerostering.EmployeeRoster> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference=\"../../employeerostering.employeerostering.Shift/timeslot\"/> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference=\"../../employeerostering.employeerostering.Shift/timeslot\"/> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill\"/> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill\"/> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill\"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference=\"../../shiftList/employeerostering.employeerostering.Shift/timeslot\"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList> <employeerostering.employeerostering.ShiftAssignment> <shift reference=\"../../../shiftList/employeerostering.employeerostering.Shift\"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference=\"../../../shiftList/employeerostering.employeerostering.Shift[3]\"/> </employeerostering.employeerostering.ShiftAssignment> <employeerostering.employeerostering.ShiftAssignment> <shift reference=\"../../../shiftList/employeerostering.employeerostering.Shift[2]\"/> </employeerostering.employeerostering.ShiftAssignment> </shiftAssignmentList> </employeerostering.employeerostering.EmployeeRoster>",
"<solver-instance> <container-id>employee-rostering</container-id> <solver-id>solver1</solver-id> <solver-config-file>employeerostering/employeerostering/EmployeeRosteringSolverConfig.solver.xml</solver-config-file> <status>NOT_SOLVING</status> <score scoreClass=\"org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore\">0hard/0soft</score> <best-solution class=\"employeerostering.employeerostering.EmployeeRoster\"> <employeeList> <employeerostering.employeerostering.Employee> <name>John</name> <skills> <employeerostering.employeerostering.Skill> <name>reading</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Mary</name> <skills> <employeerostering.employeerostering.Skill> <name>writing</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> <employeerostering.employeerostering.Employee> <name>Petr</name> <skills> <employeerostering.employeerostering.Skill> <name>speaking</name> </employeerostering.employeerostering.Skill> </skills> </employeerostering.employeerostering.Employee> </employeeList> <shiftList> <employeerostering.employeerostering.Shift> <timeslot> <startTime>2017-01-01T00:00:00</startTime> <endTime>2017-01-01T01:00:00</endTime> </timeslot> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference=\"../../employeerostering.employeerostering.Shift/timeslot\"/> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> <employeerostering.employeerostering.Shift> <timeslot reference=\"../../employeerostering.employeerostering.Shift/timeslot\"/> <requiredSkill reference=\"../../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill\"/> </employeerostering.employeerostering.Shift> </shiftList> <skillList> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee/skills/employeerostering.employeerostering.Skill\"/> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee[3]/skills/employeerostering.employeerostering.Skill\"/> <employeerostering.employeerostering.Skill reference=\"../../employeeList/employeerostering.employeerostering.Employee[2]/skills/employeerostering.employeerostering.Skill\"/> </skillList> <timeslotList> <employeerostering.employeerostering.Timeslot reference=\"../../shiftList/employeerostering.employeerostering.Shift/timeslot\"/> </timeslotList> <dayOffRequestList/> <shiftAssignmentList/> <score>0hard/0soft</score> </best-solution> </solver-instance>"
] |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_decision_manager/workbench-er-tutorial-con
|
7.8 Release Notes
|
7.8 Release Notes Red Hat Enterprise Linux 7 Release Notes for Red Hat Enterprise Linux 7.8 Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.8_release_notes/index
|
Chapter 4. Profile [tuned.openshift.io/v1]
|
Chapter 4. Profile [tuned.openshift.io/v1] Description Profile is a specification for a Profile resource. Type object 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object ProfileStatus is the status for a Profile resource; the status is for internal use only and its fields may be changed/removed in the future. 4.1.1. .spec Description Type object Required config Property Type Description config object profile array Tuned profiles. profile[] object A Tuned profile. 4.1.2. .spec.config Description Type object Required tunedProfile Property Type Description debug boolean option to debug TuneD daemon execution providerName string Name of the cloud provider as taken from the Node providerID: <ProviderName>://<ProviderSpecificNodeID> tunedConfig object Global configuration for the TuneD daemon as defined in tuned-main.conf tunedProfile string TuneD profile to apply 4.1.3. .spec.config.tunedConfig Description Global configuration for the TuneD daemon as defined in tuned-main.conf Type object Property Type Description reapply_sysctl boolean turn reapply_sysctl functionality on/off for the TuneD daemon: true/false 4.1.4. .spec.profile Description Tuned profiles. Type array 4.1.5. .spec.profile[] Description A Tuned profile. Type object Required data name Property Type Description data string Specification of the Tuned profile to be consumed by the Tuned daemon. name string Name of the Tuned profile to be used in the recommend section. 4.1.6. .status Description ProfileStatus is the status for a Profile resource; the status is for internal use only and its fields may be changed/removed in the future. Type object Required tunedProfile Property Type Description conditions array conditions represents the state of the per-node Profile application conditions[] object ProfileStatusCondition represents a partial state of the per-node Profile application. tunedProfile string the current profile in use by the Tuned daemon 4.1.7. .status.conditions Description conditions represents the state of the per-node Profile application Type array 4.1.8. .status.conditions[] Description ProfileStatusCondition represents a partial state of the per-node Profile application. Type object Required lastTransitionTime status type Property Type Description lastTransitionTime string lastTransitionTime is the time of the last update to the current status property. message string message provides additional information about the current condition. This is only to be consumed by humans. reason string reason is the CamelCase reason for the condition's current status. status string status of the condition, one of True, False, Unknown. type string type specifies the aspect reported by this condition. 4.2. 
API endpoints The following API endpoints are available: /apis/tuned.openshift.io/v1/profiles GET : list objects of kind Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles DELETE : delete collection of Profile GET : list objects of kind Profile POST : create a Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name} DELETE : delete a Profile GET : read the specified Profile PATCH : partially update the specified Profile PUT : replace the specified Profile /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name}/status GET : read status of the specified Profile PATCH : partially update status of the specified Profile PUT : replace status of the specified Profile 4.2.1. /apis/tuned.openshift.io/v1/profiles HTTP method GET Description list objects of kind Profile Table 4.1. HTTP responses HTTP code Reponse body 200 - OK ProfileList schema 401 - Unauthorized Empty 4.2.2. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles HTTP method DELETE Description delete collection of Profile Table 4.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Profile Table 4.3. HTTP responses HTTP code Reponse body 200 - OK ProfileList schema 401 - Unauthorized Empty HTTP method POST Description create a Profile Table 4.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.5. Body parameters Parameter Type Description body Profile schema Table 4.6. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 202 - Accepted Profile schema 401 - Unauthorized Empty 4.2.3. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name} Table 4.7. Global path parameters Parameter Type Description name string name of the Profile HTTP method DELETE Description delete a Profile Table 4.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.9. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Profile Table 4.10. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Profile Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.12. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Profile Table 4.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.14. Body parameters Parameter Type Description body Profile schema Table 4.15. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 401 - Unauthorized Empty 4.2.4. /apis/tuned.openshift.io/v1/namespaces/{namespace}/profiles/{name}/status Table 4.16. Global path parameters Parameter Type Description name string name of the Profile HTTP method GET Description read status of the specified Profile Table 4.17. 
HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Profile Table 4.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.19. HTTP responses HTTP code Reponse body 200 - OK Profile schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Profile Table 4.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.21. Body parameters Parameter Type Description body Profile schema Table 4.22. HTTP responses HTTP code Reponse body 200 - OK Profile schema 201 - Created Profile schema 401 - Unauthorized Empty
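The Profile objects documented in this chapter are created and updated by the Node Tuning Operator, so in practice cluster administrators read them rather than write them. As a minimal sketch, assuming the default Node Tuning Operator deployment (which keeps these objects in the openshift-cluster-node-tuning-operator namespace) and using <node_name> as a placeholder, you can list and inspect them with the oc CLI:
oc get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator
oc get profile.tuned.openshift.io <node_name> -n openshift-cluster-node-tuning-operator -o yaml
The second command returns the spec and status fields described above, including the TuneD profile currently applied to the node.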
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/node_apis/profile-tuned-openshift-io-v1
|
16.3. Search Transformation SQL
|
16.3. Search Transformation SQL The Teiid Designer provides a search capability for string values present in transformation SQL text. To search for string values in your transformation SQL: Click the Search > Teiid Designer > Transformations... action on the main toolbar, which opens the Search Transformations dialog. Specify a string segment in the Find: field and specify or change your case-sensitivity preference. Select the Perform Search button. Any transformation object containing SQL text with occurrences of your string will be displayed in the results section. You can select individual objects and view the SQL. If a table or view supports updates and there is insert, update, or delete SQL present, you can expand the object and select the individual SQL type as shown below. If you wish to view the selected object and its SQL in a Model Editor, you can click the Edit button. An editor will be opened if one is not already open; if an editor is already open, its tab will be selected. In addition, the Transformation Editor will be opened and you can perform Find/Replace (Ctrl-F) actions to highlight your original searched text string and edit your SQL if you wish.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/search_transformation_sql
|
Chapter 1. Introduction to persistent storage in Red Hat OpenStack Platform (RHOSP)
|
Chapter 1. Introduction to persistent storage in Red Hat OpenStack Platform (RHOSP) Within Red Hat OpenStack Platform, storage is provided by three main services: Block Storage ( openstack-cinder ) Object Storage ( openstack-swift ) Shared File System Storage ( openstack-manila ) These services provide different types of persistent storage, each with its own set of advantages in different use cases. This guide discusses the suitability of each for general enterprise storage requirements. You can manage cloud storage by using either the RHOSP dashboard or the command-line clients. You can perform most procedures by using either method. However, you can complete some of the more advanced procedures only on the command line. This guide provides procedures for the dashboard where possible. Note For the complete suite of documentation for Red Hat OpenStack Platform, see Red Hat OpenStack Platform Documentation . Important This guide documents the use of crudini to apply some custom service settings. As such, you need to install the crudini package first: RHOSP recognizes two types of storage: ephemeral and persistent . Ephemeral storage is storage that is associated only to a specific Compute instance. Once that instance is terminated, so is its ephemeral storage. This type of storage is useful for basic runtime requirements, such as storing the instance's operating system. Persistent storage, is designed to survive (persist) independent of any running instance. This storage is used for any data that needs to be reused, either by different instances or beyond the life of a specific instance. RHOSP uses the following types of persistent storage: Volumes The OpenStack Block Storage service ( openstack-cinder ) allows users to access block storage devices through volumes . Users can attach volumes to instances in order to augment their ephemeral storage with general-purpose persistent storage. Volumes can be detached and re-attached to instances at will, and can only be accessed through the instance they are attached to. You can also configure instances to not use ephemeral storage. Instead of using ephemeral storage, you can configure the Block Storage service to write images to a volume. You can then use the volume as a bootable root volume for an instance. Volumes also provide inherent redundancy and disaster recovery through backups and snapshots. In addition, you can also encrypt volumes for added security. Containers The OpenStack Object Storage service (openstack-swift) provides a fully-distributed storage solution used to store any kind of static data or binary object, such as media files, large datasets, and disk images. The Object Storage service organizes these objects by using containers. Although the content of a volume can be accessed only through instances, the objects inside a container can be accessed through the Object Storage REST API. As such, the Object Storage service can be used as a repository by nearly every service within the cloud. Shares The Shared File Systems service ( openstack-manila ) provides the means to easily provision remote, shareable file systems, or shares . Shares allow projects within the cloud to openly share storage, and can be consumed by multiple instances simultaneously. Each storage type is designed to address specific storage requirements. Containers are designed for wide access, and as such feature the highest throughput, access, and fault tolerance among all storage types. Container usage is geared more towards services. 
On the other hand, volumes are used primarily for instance consumption. They do not enjoy the same level of access and performance as containers, but they do have a larger feature set and have more native security features than containers. Shares are similar to volumes in this regard, except that they can be consumed by multiple instances. The following sections discuss each storage type's architecture and feature set in detail, within the context of specific storage criteria. 1.1. Scalability and back-end storage In general, a clustered storage solution provides greater back-end scalability. For example, when you use Red Hat Ceph as a Block Storage (cinder) back end, you can scale storage capacity and redundancy by adding more Ceph Object Storage Daemon (OSD) nodes. Block Storage, Object Storage (swift) and Shared File Systems Storage (manila) services support Red Hat Ceph Storage as a back end. The Block Storage service can use multiple storage solutions as discrete back ends. At the back-end level, you can scale capacity by adding more back ends and restarting the service. The Block Storage service also features a large list of supported back-end solutions, some of which feature additional scalability features. By default, the Object Storage service uses the file system on configured storage nodes, and it can use as much space as is available. The Object Storage service supports the XFS and ext4 file systems, and both can be scaled up to consume as much underlying block storage as is available. You can also scale capacity by adding more storage devices to the storage node. The Shared File Systems service provisions file shares from designated storage pools that are managed by one or more third-party back-end storage systems. You can scale this shared storage by increasing the size or number of storage pools available to the service or by adding more third-party back-end storage systems to the deployment. 1.2. Storage accessibility and administration Volumes are consumed only through instances, and can only be attached to and mounted within one instance at a time. Users can create snapshots of volumes, which can be used for cloning or restoring a volume to a state (see Section 1.4, "Storage redundancy and disaster recovery" ). The Block Storage service also allows you to create volume types , which aggregate volume settings (for example, size and back end) that can be easily invoked by users when creating new volumes. These types can be further associated with Quality-of-Service specifications, which allow you to create different storage tiers for users. Like volumes, shares are consumed through instances. However, shares can be directly mounted within an instance, and do not need to be attached through the dashboard or CLI. Shares can also be mounted by multiple instances simultaneously. The Shared File Systems service also supports share snapshots and cloning; you can also create share types to aggregate settings (similar to volume types). Objects in a container are accessible via API, and can be made accessible to instances and services within the cloud. This makes them ideal as object repositories for services; for example, the Image service ( openstack-glance ) can store its images in containers managed by the Object Storage service. 1.3. Storage security The Block Storage service (cinder) provides basic data security through volume encryption. 
With this, you can configure a volume type to be encrypted through a static key; the key is then used to encrypt all volumes that are created from the configured volume type. For more information, see Section 2.7, "Block Storage service (cinder) volume encryption" . Object and container security is configured at the service and node level. The Object Storage service (swift) provides no native encryption for containers and objects. Rather, the Object Storage service prioritizes accessibility within the cloud, and as such relies solely on the cloud network security to protect object data. The Shared File Systems service (manila) can secure shares through access restriction, whether by instance IP, user or group, or TLS certificate. In addition, some Shared File Systems service deployments can feature separate share servers to manage the relationship between share networks and shares; some share servers support, or even require, additional network security. For example, a CIFS share server requires the deployment of an LDAP, Active Directory, or Kerberos authentication service. For more information about how to secure the Image service (glance), such as image signing and verification and metadata definition (metadef) API restrictions, see Image service in the Creating and Managing Images guide. 1.4. Storage redundancy and disaster recovery The Block Storage service (cinder) features volume backup and restoration, which provides basic disaster recovery for user storage. Use backups to protect volume contents. The service also supports snapshots. In addition to cloning, you can use snapshots to restore a volume to a state. In a multi-back end environment, you can also migrate volumes between back ends. This is useful if you need to take a back end offline for maintenance. Backups are typically stored in a storage back end separate from their source volumes to help protect the data. This is not possible with snapshots because snapshots are dependent on their source volumes. The Block Storage service also supports the creation of consistency groups to group volumes together for simultaneous snapshot creation. This provides a greater level of data consistency across multiple volumes. For more information, see Section 2.9, "Block Storage service (cinder) consistency groups" . The Object Storage service (swift) provides no built-in backup features. You must perform all backups at the file system or node level. The Object Storage service features more robust redundancy and fault tolerance, even the most basic deployment of the Object Storage service replicates objects multiple times. You can use failover features like dm-multipath to enhance redundancy. The Shared File Systems service provides no built-in backup features for shares, but it does allow you to create snapshots for cloning and restoration.
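As a minimal illustration of the volume workflow described in this chapter, the following openstack client commands sketch how a user might create a 10 GB Block Storage volume and attach it to a running instance; the volume and instance names are placeholders, and the available options depend on your configured back ends:
openstack volume create --size 10 <volume_name>
openstack server add volume <instance_name> <volume_name>
Similarly, the crudini utility mentioned earlier edits INI-style service configuration files by section and parameter, for example (the setting shown is only an illustration):
crudini --set /etc/cinder/cinder.conf DEFAULT debug True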
|
[
"dnf install crudini -y"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/storage_guide/assembly-introduction-to-persistent-storage-in-rhosp_osp-storage-guide
|
4.3. Editing an Image Builder blueprint in the web console interface
|
4.3. Editing an Image Builder blueprint in the web console interface To change the specifications for a custom system image, edit the corresponding blueprint. Prerequisites You have opened the Image Builder interface of the RHEL 7 web console in a browser. A blueprint exists. Procedure 1. Locate the blueprint that you want to edit by entering its name or a part of it into the search box at the top left, and press Enter. The search is added to the list of filters under the text entry field, and the list of blueprints below is reduced to those that match the search. If the list of blueprints is too long, add further search terms in the same way. 2. On the right side of the blueprint, press the Edit Blueprint button that belongs to the blueprint. The view changes to the blueprint editing screen. 3. Remove an unwanted component by clicking the ⫶ button at the far right of its entry in the right pane, and selecting Remove in the menu. 4. Change the version of an existing component: i. Enter the component name or a part of it into the field under the heading Blueprint Components and press Enter. The search is added to the list of filters under the text entry field, and the list of components below is reduced to those that match the search. If the list of components is too long, add further search terms in the same way. ii. Click the ⫶ button at the far right of the component entry, and select View in the menu. A component details screen opens in the right pane. iii. Select the desired version in the Version Release drop-down menu and click Apply Change in the top right. The change is saved and the right pane returns to listing the blueprint components. 5. Add new components: i. On the left, enter the component name or a part of it into the field under the heading Available Components and press Enter. The search is added to the list of filters under the text entry field, and the list of components below is reduced to those that match the search. If the list of components is too long, add further search terms in the same way. ii. The list of components is paged. To move to other result pages, use the arrows and entry field above the component list. iii. Click the name of the component you intend to use to display its details. The right pane fills with details of the component, such as its version and dependencies. iv. Select the version you want to use in the Component Options box, with the Version Release drop-down menu. v. Click Add in the top right. vi. If you added a component by mistake, remove it by clicking the ⫶ button at the far right of its entry in the right pane, and selecting Remove in the menu. Note If you do not intend to select a version for some components, you can skip the component details screen and version selection by clicking the + buttons on the right side of the component list. 6. Commit a new version of the blueprint with your changes: i. Click the Commit button in the top right. A pop-up window with a summary of your changes appears. ii. Review your changes and confirm them by clicking Commit. A small pop-up on the right informs you of the saving progress and then the result. A new version of the blueprint is created. iii. In the top left, click Back to Blueprints to exit the editing screen. The Image Builder view opens, listing the existing blueprints.
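If you prefer to work outside the web console, the same kind of change can usually be made with the composer-cli tool; the following commands are only a sketch and assume that the lorax-composer and composer-cli packages are installed and that the blueprint is named example-blueprint:
composer-cli blueprints save example-blueprint
composer-cli blueprints push example-blueprint.toml
Edit the saved example-blueprint.toml file between the two commands; pushing it commits a new version of the blueprint, similar to clicking Commit in the web console.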
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter4-section_3
|
Chapter 3. Configuring multi-architecture compute machines on an OpenShift cluster
|
Chapter 3. Configuring multi-architecture compute machines on an OpenShift cluster 3.1. About clusters with multi-architecture compute machines An OpenShift Container Platform cluster with multi-architecture compute machines is a cluster that supports compute machines with different architectures. Note When there are nodes with multiple architectures in your cluster, the architecture of your image must be consistent with the architecture of the node. You need to ensure that the pod is assigned to the node with the appropriate architecture and that it matches the image architecture. For more information on assigning pods to nodes, see Assigning pods to nodes . Important The Cluster Samples Operator is not supported on clusters with multi-architecture compute machines. Your cluster can be created without this capability. For more information, see Cluster capabilities . For information on migrating your single-architecture cluster to a cluster that supports multi-architecture compute machines, see Migrating to a cluster with multi-architecture compute machines . 3.1.1. Configuring your cluster with multi-architecture compute machines To create a cluster with multi-architecture compute machines with different installation options and platforms, you can use the documentation in the following table: Table 3.1. Cluster with multi-architecture compute machine installation options Documentation section Platform User-provisioned installation Installer-provisioned installation Control Plane Compute node Creating a cluster with multi-architecture compute machines on Azure Microsoft Azure [✓] aarch64 or x86_64 aarch64 , x86_64 Creating a cluster with multi-architecture compute machines on AWS Amazon Web Services (AWS) [✓] aarch64 or x86_64 aarch64 , x86_64 Creating a cluster with multi-architecture compute machines on GCP Google Cloud Platform (GCP) [✓] aarch64 or x86_64 aarch64 , x86_64 Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z Bare metal [✓] aarch64 or x86_64 aarch64 , x86_64 IBM Power [✓] x86_64 or ppc64le x86_64 , ppc64le IBM Z [✓] x86_64 or s390x x86_64 , s390x Creating a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE with z/VM IBM Z(R) and IBM(R) LinuxONE [✓] x86_64 x86_64 , s390x Creating a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE with RHEL KVM IBM Z(R) and IBM(R) LinuxONE [✓] x86_64 x86_64 , s390x Creating a cluster with multi-architecture compute machines on IBM Power(R) IBM Power(R) [✓] x86_64 x86_64 , ppc64le Important Autoscaling from zero is currently not supported on Google Cloud Platform (GCP). 3.2. Creating a cluster with multi-architecture compute machine on Azure To deploy an Azure cluster with multi-architecture compute machines, you must first create a single-architecture Azure installer-provisioned cluster that uses the multi-architecture installer binary. For more information on Azure installations, see Installing a cluster on Azure with customizations . You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines . After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster. 3.2.1. 
Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.2.2. Creating a 64-bit ARM boot image using the Azure image gallery The following procedure describes how to manually generate a 64-bit ARM boot image. Prerequisites You installed the Azure CLI ( az ). You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary. Procedure Log in to your Azure account: USD az login Create a storage account and upload the aarch64 virtual hard disk (VHD) to your storage account. The OpenShift Container Platform installation program creates a resource group, however, the boot image can also be uploaded to a custom named resource group: USD az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1 1 The westus object is an example region. Create a storage container using the storage account you generated: USD az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} You must use the OpenShift Container Platform installation program JSON file to extract the URL and aarch64 VHD name: Extract the URL field and set it to RHCOS_VHD_ORIGIN_URL as the file name by running the following command: USD RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".url') Extract the aarch64 VHD name and set it to BLOB_NAME as the file name by running the following command: USD BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.aarch64."rhel-coreos-extensions"."azure-disk".release')-azure.aarch64.vhd Generate a shared access signature (SAS) token. 
Use this token to upload the RHCOS VHD to your storage container with the following commands: USD end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'` USD sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv` Copy the RHCOS VHD into the storage container: USD az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token "USDsas" \ --source-uri "USD{RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "USD{BLOB_NAME}" --destination-container USD{CONTAINER_NAME} You can check the status of the copying process with the following command: USD az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy Example output { "completionTime": null, "destinationSnapshot": null, "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d", "incrementalCopy": null, "progress": "17179869696/17179869696", "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd", "status": "success", 1 "statusDescription": null } 1 If the status parameter displays the success object, the copying process is complete. Create an image gallery using the following command: USD az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} Use the image gallery to create an image definition. In the following example command, rhcos-arm64 is the name of the image definition. USD az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --publisher RedHat --offer arm --sku arm64 --os-type linux --architecture Arm64 --hyper-v-generation V2 To get the URL of the VHD and set it to RHCOS_VHD_URL as the file name, run the following command: USD RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n "USD{BLOB_NAME}" -o tsv) Use the RHCOS_VHD_URL file, your storage account, resource group, and image gallery to create an image version. In the following example, 1.0.0 is the image version. USD az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL} Your arm64 boot image is now generated. You can access the ID of your image with the following command: USD az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-arm64 -e 1.0.0 The following example image ID is used in the recourseID parameter of the compute machine set: Example resourceID /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 3.2.3. Creating a 64-bit x86 boot image using the Azure image gallery The following procedure describes how to manually generate a 64-bit x86 boot image. Prerequisites You installed the Azure CLI ( az ). You created a single-architecture Azure installer-provisioned cluster with the multi-architecture installer binary. Procedure Log in to your Azure account by running the following command: USD az login Create a storage account and upload the x86_64 virtual hard disk (VHD) to your storage account by running the following command. The OpenShift Container Platform installation program creates a resource group. 
However, the boot image can also be uploaded to a custom named resource group: USD az storage account create -n USD{STORAGE_ACCOUNT_NAME} -g USD{RESOURCE_GROUP} -l westus --sku Standard_LRS 1 1 The westus object is an example region. Create a storage container using the storage account you generated by running the following command: USD az storage container create -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} Use the OpenShift Container Platform installation program JSON file to extract the URL and x86_64 VHD name: Extract the URL field and set it to RHCOS_VHD_ORIGIN_URL as the file name by running the following command: USD RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".url') Extract the x86_64 VHD name and set it to BLOB_NAME as the file name by running the following command: USD BLOB_NAME=rhcos-USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.x86_64."rhel-coreos-extensions"."azure-disk".release')-azure.x86_64.vhd Generate a shared access signature (SAS) token. Use this token to upload the RHCOS VHD to your storage container by running the following commands: USD end=`date -u -d "30 minutes" '+%Y-%m-%dT%H:%MZ'` USD sas=`az storage container generate-sas -n USD{CONTAINER_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} --https-only --permissions dlrw --expiry USDend -o tsv` Copy the RHCOS VHD into the storage container by running the following command: USD az storage blob copy start --account-name USD{STORAGE_ACCOUNT_NAME} --sas-token "USDsas" \ --source-uri "USD{RHCOS_VHD_ORIGIN_URL}" \ --destination-blob "USD{BLOB_NAME}" --destination-container USD{CONTAINER_NAME} You can check the status of the copying process by running the following command: USD az storage blob show -c USD{CONTAINER_NAME} -n USD{BLOB_NAME} --account-name USD{STORAGE_ACCOUNT_NAME} | jq .properties.copy Example output { "completionTime": null, "destinationSnapshot": null, "id": "1fd97630-03ca-489a-8c4e-cfe839c9627d", "incrementalCopy": null, "progress": "17179869696/17179869696", "source": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-411.86.202207130959-0-azure.aarch64.vhd", "status": "success", 1 "statusDescription": null } 1 If the status parameter displays the success object, the copying process is complete. Create an image gallery by running the following command: USD az sig create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} Use the image gallery to create an image definition by running the following command: USD az sig image-definition create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-x86_64 --publisher RedHat --offer x86_64 --sku x86_64 --os-type linux --architecture x64 --hyper-v-generation V2 In this example command, rhcos-x86_64 is the name of the image definition. 
To get the URL of the VHD and set it to RHCOS_VHD_URL as the file name, run the following command: USD RHCOS_VHD_URL=USD(az storage blob url --account-name USD{STORAGE_ACCOUNT_NAME} -c USD{CONTAINER_NAME} -n "USD{BLOB_NAME}" -o tsv) Use the RHCOS_VHD_URL file, your storage account, resource group, and image gallery to create an image version by running the following command: USD az sig image-version create --resource-group USD{RESOURCE_GROUP} --gallery-name USD{GALLERY_NAME} --gallery-image-definition rhcos-arm64 --gallery-image-version 1.0.0 --os-vhd-storage-account USD{STORAGE_ACCOUNT_NAME} --os-vhd-uri USD{RHCOS_VHD_URL} In this example, 1.0.0 is the image version. Optional: Access the ID of the generated x86_64 boot image by running the following command: USD az sig image-version show -r USDGALLERY_NAME -g USDRESOURCE_GROUP -i rhcos-x86_64 -e 1.0.0 The following example image ID is used in the recourseID parameter of the compute machine set: Example resourceID /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-x86_64/versions/1.0.0 3.2.4. Adding a multi-architecture compute machine set to your Azure cluster After creating a multi-architecture cluster, you can add nodes with different architectures. You can add multi-architecture compute machines to a multi-architecture cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. To create a custom compute machine set on Azure, see "Creating a compute machine set on Azure". Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". Prerequisites You installed the OpenShift CLI ( oc ). You created a 64-bit ARM or 64-bit x86 boot image. You used the installation program to create a 64-bit ARM or 64-bit x86 single-architecture Azure cluster with the multi-architecture installer binary. Procedure Log in to the OpenShift CLI ( oc ). Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster. 
Example MachineSet object for an Azure 64-bit ARM or 64-bit x86 compute node apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker name: <infrastructure_id>-machine-set-0 namespace: openshift-machine-api spec: replicas: 2 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-machine-set-0 spec: lifecycleHooks: {} metadata: {} providerSpec: value: acceleratedNetworking: true apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: azure-cloud-credentials namespace: openshift-machine-api image: offer: "" publisher: "" resourceID: /resourceGroups/USD{RESOURCE_GROUP}/providers/Microsoft.Compute/galleries/USD{GALLERY_NAME}/images/rhcos-arm64/versions/1.0.0 1 sku: "" version: "" kind: AzureMachineProviderSpec location: <region> managedIdentity: <infrastructure_id>-identity networkResourceGroup: <infrastructure_id>-rg osDisk: diskSettings: {} diskSizeGB: 128 managedDisk: storageAccountType: Premium_LRS osType: Linux publicIP: false publicLoadBalancer: <infrastructure_id> resourceGroup: <infrastructure_id>-rg subnet: <infrastructure_id>-worker-subnet userDataSecret: name: worker-user-data vmSize: Standard_D4ps_v5 2 vnet: <infrastructure_id>-vnet zone: "<zone>" 1 Set the resourceID parameter to either arm64 or amd64 boot image. 2 Set the vmSize parameter to the instance type used in your installation. Some example instance types are Standard_D4ps_v5 or D8ps . Create the compute machine set by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file with compute machine set configuration. For example: arm64-machine-set-0.yaml , or amd64-machine-set-0.yaml . Verification Verify that the new machines are running by running the following command: USD oc get machineset -n openshift-machine-api The output must include the machine set that you created. Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-machine-set-0 2 2 2 2 10m You can check if the nodes are ready and schedulable by running the following command: USD oc get nodes Additional resources Creating a compute machine set on Azure Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator 3.3. Creating a cluster with multi-architecture compute machines on AWS To create an AWS cluster with multi-architecture compute machines, you must first create a single-architecture AWS installer-provisioned cluster with the multi-architecture installer binary. For more information on AWS installations, see Installing a cluster on AWS with customizations . You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines . After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster. 3.3.1. 
Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.3.2. Adding a multi-architecture compute machine set to your AWS cluster After creating a multi-architecture cluster, you can add nodes with different architectures. You can add multi-architecture compute machines to a multi-architecture cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". Prerequisites You installed the OpenShift CLI ( oc ). You used the installation program to create an 64-bit ARM or 64-bit x86 single-architecture AWS cluster with the multi-architecture installer binary. Procedure Log in to the OpenShift CLI ( oc ). Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster. 
Example MachineSet object for an AWS 64-bit ARM or x86 compute node apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-aws-machine-set-0 2 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 3 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 4 template: metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 5 machine.openshift.io/cluster-api-machine-type: <role> 6 machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>-<zone> 7 spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: ami: id: ami-02a574449d4f4d280 8 apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - ebs: iops: 0 volumeSize: 120 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: <infrastructure_id>-worker-profile 9 instanceType: m6g.xlarge 10 kind: AWSMachineProviderConfig placement: availabilityZone: us-east-1a 11 region: <region> 12 securityGroups: - filters: - name: tag:Name values: - <infrastructure_id>-node 13 subnet: filters: - name: tag:Name values: - <infrastructure_id>-subnet-private-<zone> tags: - name: kubernetes.io/cluster/<infrastructure_id> 14 value: owned - name: <custom_tag_name> value: <custom_tag_value> userDataSecret: name: worker-user-data 1 2 3 9 13 14 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI ( oc ) installed, you can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath="{.status.infrastructureName}{'\n'}" infrastructure cluster 4 7 Specify the infrastructure ID, role node label, and zone. 5 6 Specify the role node label to add. 8 Specify a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) for your AWS region for the nodes. The RHCOS AMI must be compatible with the machine architecture. USD oc get configmap/coreos-bootimages \ -n openshift-machine-config-operator \ -o jsonpath='{.data.stream}' | jq \ -r '.architectures.<arch>.images.aws.regions."<region>".image' 10 Specify a machine type that aligns with the CPU architecture of the chosen AMI. For more information, see "Tested instance types for AWS 64-bit ARM" 11 Specify the zone. For example, us-east-1a . Ensure that the zone you select has machines with the required architecture. 12 Specify the region. For example, us-east-1 . Ensure that the zone you select has machines with the required architecture. Create the compute machine set by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file with compute machine set configuration. For example: aws-arm64-machine-set-0.yaml , or aws-amd64-machine-set-0.yaml . Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api The output must include the machine set that you created. 
Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-aws-machine-set-0 2 2 2 2 10m You can check if the nodes are ready and schedulable by running the following command: USD oc get nodes Additional resources Tested instance types for AWS 64-bit ARM Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator 3.4. Creating a cluster with multi-architecture compute machines on GCP To create a Google Cloud Platform (GCP) cluster with multi-architecture compute machines, you must first create a single-architecture GCP installer-provisioned cluster with the multi-architecture installer binary. For more information on AWS installations, see Installing a cluster on GCP with customizations . You can also migrate your current cluster with single-architecture compute machines to a cluster with multi-architecture compute machines. For more information, see Migrating to a cluster with multi-architecture compute machines . After creating a multi-architecture cluster, you can add nodes with different architectures to the cluster. Note Secure booting is currently not supported on 64-bit ARM machines for GCP 3.4.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.4.2. Adding a multi-architecture compute machine set to your GCP cluster After creating a multi-architecture cluster, you can add nodes with different architectures. You can add multi-architecture compute machines to a multi-architecture cluster in the following ways: Adding 64-bit x86 compute machines to a cluster that uses 64-bit ARM control plane machines and already includes 64-bit ARM compute machines. In this case, 64-bit x86 is considered the secondary architecture. Adding 64-bit ARM compute machines to a cluster that uses 64-bit x86 control plane machines and already includes 64-bit x86 compute machines. In this case, 64-bit ARM is considered the secondary architecture. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig custom resource. For more information, see "Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator". Prerequisites You installed the OpenShift CLI ( oc ). You used the installation program to create a 64-bit x86 or 64-bit ARM single-architecture GCP cluster with the multi-architecture installer binary. Procedure Log in to the OpenShift CLI ( oc ). 
Create a YAML file, and add the configuration to create a compute machine set to control the 64-bit ARM or 64-bit x86 compute nodes in your cluster. Example MachineSet object for a GCP 64-bit ARM or 64-bit x86 compute node apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 1 name: <infrastructure_id>-w-a namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a spec: metadata: labels: node-role.kubernetes.io/<role>: "" providerSpec: value: apiVersion: gcpprovider.openshift.io/v1beta1 canIPForward: false credentialsSecret: name: gcp-cloud-credentials deletionProtection: false disks: - autoDelete: true boot: true image: <path_to_image> 3 labels: null sizeGb: 128 type: pd-ssd gcpMetadata: 4 - key: <custom_metadata_key> value: <custom_metadata_value> kind: GCPMachineProviderSpec machineType: n1-standard-4 5 metadata: creationTimestamp: null networkInterfaces: - network: <infrastructure_id>-network subnetwork: <infrastructure_id>-worker-subnet projectID: <project_name> 6 region: us-central1 7 serviceAccounts: - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com scopes: - https://www.googleapis.com/auth/cloud-platform tags: - <infrastructure_id>-worker userDataSecret: name: worker-user-data zone: us-central1-a 1 Specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. You can obtain the infrastructure ID by running the following command: USD oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster 2 Specify the role node label to add. 3 Specify the path to the image that is used in current compute machine sets. You need the project and image name for your path to image. To access the project and image name, run the following command: USD oc get configmap/coreos-bootimages \ -n openshift-machine-config-operator \ -o jsonpath='{.data.stream}' | jq \ -r '.architectures.aarch64.images.gcp' Example output "gcp": { "release": "415.92.202309142014-0", "project": "rhcos-cloud", "name": "rhcos-415-92-202309142014-0-gcp-aarch64" } Use the project and name parameters from the output to create the path to image field in your machine set. The path to the image should follow the following format: USD projects/<project>/global/images/<image_name> 4 Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the GCP documentation for setting custom metadata . 5 Specify a machine type that aligns with the CPU architecture of the chosen OS image. For more information, see "Tested instance types for GCP on 64-bit ARM infrastructures". 6 Specify the name of the GCP project that you use for your cluster. 7 Specify the region. For example, us-central1 . Ensure that the zone you select has machines with the required architecture. Create the compute machine set by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file with compute machine set configuration. 
For example: gcp-arm64-machine-set-0.yaml , or gcp-amd64-machine-set-0.yaml . Verification View the list of compute machine sets by running the following command: USD oc get machineset -n openshift-machine-api The output must include the machine set that you created. Example output NAME DESIRED CURRENT READY AVAILABLE AGE <infrastructure_id>-gcp-machine-set-0 2 2 2 2 10m You can check if the nodes are ready and schedulable by running the following command: USD oc get nodes Additional resources Tested instance types for GCP on 64-bit ARM infrastructures Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator 3.5. Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z To create a cluster with multi-architecture compute machines on bare metal ( x86_64 or aarch64 ), IBM Power(R) ( ppc64le ), or IBM Z(R) ( s390x ) you must have an existing single-architecture cluster on one of these platforms. Follow the installations procedures for your platform: Installing a user provisioned cluster on bare metal . You can then add 64-bit ARM compute machines to your OpenShift Container Platform cluster on bare metal. Installing a cluster on IBM Power(R) . You can then add x86_64 compute machines to your OpenShift Container Platform cluster on IBM Power(R). Installing a cluster on IBM Z(R) and IBM(R) LinuxONE . You can then add x86_64 compute machines to your OpenShift Container Platform cluster on IBM Z(R) and IBM(R) LinuxONE. Important The bare metal installer-provisioned infrastructure and the Bare Metal Operator do not support adding secondary architecture nodes during the initial cluster setup. You can add secondary architecture nodes manually only after the initial cluster setup. Before you can add additional compute nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This allows you to add additional nodes to your cluster and deploy a cluster with multi-architecture compute machines. Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.5.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. 
If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.5.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the Ignition files are available on the URLs. The following example gets the Ignition config files for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running the following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device.
The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 3.5.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console To configure a different console, add one or more console= arguments to the kernel line. 
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 3.5.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.6. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with z/VM To create a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE ( s390x ) with z/VM, you must have an existing single-architecture x86_64 cluster. You can then add s390x compute machines to your OpenShift Container Platform cluster. Before you can add s390x nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using a z/VM instance. This will allow you to add s390x nodes to your cluster and deploy a cluster with multi-architecture compute machines. 
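After the new s390x nodes join and reach the Ready status, an optional way to confirm that the cluster now runs mixed architectures is to list nodes together with the standard kubernetes.io/arch node label; this check is a convenience and is not part of the documented procedure:
USD oc get nodes -L kubernetes.io/arch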
To create an IBM Z(R) or IBM(R) LinuxONE ( s390x ) cluster with multi-architecture compute machines on x86_64 , follow the instructions for Installing a cluster on IBM Z(R) and IBM(R) LinuxONE . You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z . Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.6.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.6.2. Creating RHCOS machines on IBM Z with z/VM You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines running on IBM Z(R) with z/VM and attach them to your existing cluster. Prerequisites You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes. You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<http_server>/worker.ign Download the RHEL live kernel , initramfs , and rootfs files by running the following commands: USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location') Move the downloaded RHEL live kernel , initramfs , and rootfs files to an HTTP or HTTPS server that is accessible from the RHCOS guest you want to add. Create a parameter file for the guest. 
The following parameters are specific for the virtual machine: Optional: To specify a static IP address, add an ip= parameter with the following entries, with each separated by a colon: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. The value none . For coreos.inst.ignition_url= , specify the URL to the worker.ign file. Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. You can adjust further parameters if required. The following is an example parameter file, additional-worker-dasd.parm : cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure that you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/sda . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. You can adjust further parameters if required. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Machine configuration . The following is an example parameter file, additional-worker-fcp.parm for a worker node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/sda \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure that you have no newline characters. Transfer the initramfs , kernel , parameter files, and RHCOS images to z/VM, for example, by using FTP. 
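As an illustration of the transfer and punch steps described next (the guest ID and file names below are placeholders, and your environment might use a different transfer mechanism or require additional privileges), the vmur command from s390-tools can punch the files from a Linux guest to the new guest's virtual reader, typically the kernel first, then the parameter file, then the initramfs; the parameter file might additionally need the -t option for text conversion, depending on how it was created:
USD vmur punch -r -u <guest_id> -N kernel.img <downloaded_kernel_file>
USD vmur punch -r -u <guest_id> -N worker.parm additional-worker-dasd.parm
USD vmur punch -r -u <guest_id> -N initrd.img <downloaded_initramfs_file>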
For details about how to transfer the files with FTP and boot from the virtual reader, see Booting the installation on IBM Z(R) to install RHEL in z/VM . Punch the files to the virtual reader of the z/VM guest virtual machine. See PUNCH in IBM(R) Documentation. Tip You can use the CP PUNCH command or, if you use Linux, the vmur command to transfer files between two z/VM guest virtual machines. Log in to CMS on the compute machine. IPL the compute machine from the reader. See IPL in IBM(R) Documentation. 3.6.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs.
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.7. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE in an LPAR To create a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE ( s390x ) in an LPAR, you must have an existing single-architecture x86_64 cluster. You can then add s390x compute machines to your OpenShift Container Platform cluster. Before you can add s390x nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using an LPAR instance. This will allow you to add s390x nodes to your cluster and deploy a cluster with multi-architecture compute machines. Note To create an IBM Z(R) or IBM(R) LinuxONE ( s390x ) cluster with multi-architecture compute machines on x86_64 , follow the instructions for Installing a cluster on IBM Z(R) and IBM(R) LinuxONE . You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z . 3.7.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). 
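Before continuing, you can optionally confirm the logged-in user and the API server that the CLI session points at; these quick checks are not part of the documented procedure:
USD oc whoami
USD oc whoami --show-server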
You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.7.2. Creating RHCOS machines on IBM Z in an LPAR You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines running on IBM Z(R) in a logical partition (LPAR) and attach them to your existing cluster. Prerequisites You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes. You have an HTTP or HTTPS server running on your provisioning machine that is accessible to the machines you create. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<http_server>/worker.ign Download the RHEL live kernel , initramfs , and rootfs files by running the following commands: USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location') Move the downloaded RHEL live kernel , initramfs , and rootfs files to an HTTP or HTTPS server that is accessible from the RHCOS guest you want to add. Create a parameter file for the guest. The following parameters are specific for the virtual machine: Optional: To specify a static IP address, add an ip= parameter with the following entries, with each separated by a colon: The IP address for the machine. An empty string. The gateway. The netmask. The machine host and domain name in the form hostname.domainname . Omit this value to let RHCOS decide. The network interface name. Omit this value to let RHCOS decide. The value none . For coreos.inst.ignition_url= , specify the URL to the worker.ign file. Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. For installations on DASD-type disks, complete the following tasks: For coreos.inst.install_dev= , specify /dev/dasda . Use rd.dasd= to specify the DASD where RHCOS is to be installed. 
You can adjust further parameters if required. The following is an example parameter file, additional-worker-dasd.parm : cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/dasda \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ rd.dasd=0.0.3490 \ zfcp.allow_lun_scan=0 Write all options in the parameter file as a single line and make sure that you have no newline characters. For installations on FCP-type disks, complete the following tasks: Use rd.zfcp=<adapter>,<wwpn>,<lun> to specify the FCP disk where RHCOS is to be installed. For multipathing, repeat this step for each additional path. Note When you install with multiple paths, you must enable multipathing directly after the installation, not at a later point in time, as this can cause problems. Set the install device as: coreos.inst.install_dev=/dev/sda . Note If additional LUNs are configured with NPIV, FCP requires zfcp.allow_lun_scan=0 . If you must enable zfcp.allow_lun_scan=1 because you use a CSI driver, for example, you must configure your NPIV so that each node cannot access the boot partition of another node. You can adjust further parameters if required. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Machine configuration . The following is an example parameter file, additional-worker-fcp.parm for a worker node with multipathing: cio_ignore=all,!condev rd.neednet=1 \ console=ttysclp0 \ coreos.inst.install_dev=/dev/sda \ coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img \ coreos.inst.ignition_url=http://<http_server>/worker.ign \ ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> \ rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 \ zfcp.allow_lun_scan=0 \ rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 \ rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000 Write all options in the parameter file as a single line and make sure that you have no newline characters. Transfer the initramfs, kernel, parameter files, and RHCOS images to the LPAR, for example with FTP. For details about how to transfer the files with FTP and boot, see Booting the installation on IBM Z(R) to install RHEL in an LPAR . Boot the machine 3.7.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
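After the relevant CSRs are approved, you can optionally narrow the node listing to compute nodes only by filtering on the standard worker role label; this filter is a convenience rather than a required step:
USD oc get nodes -l node-role.kubernetes.io/worker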
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.8. Creating a cluster with multi-architecture compute machines on IBM Z and IBM LinuxONE with RHEL KVM To create a cluster with multi-architecture compute machines on IBM Z(R) and IBM(R) LinuxONE ( s390x ) with RHEL KVM, you must have an existing single-architecture x86_64 cluster. You can then add s390x compute machines to your OpenShift Container Platform cluster. Before you can add s390x nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using a RHEL KVM instance. This will allow you to add s390x nodes to your cluster and deploy a cluster with multi-architecture compute machines. To create an IBM Z(R) or IBM(R) LinuxONE ( s390x ) cluster with multi-architecture compute machines on x86_64 , follow the instructions for Installing a cluster on IBM Z(R) and IBM(R) LinuxONE . You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z . Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.8.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.8.2. Creating RHCOS machines using virt-install You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using virt-install . Prerequisites You have at least one LPAR running on RHEL 8.7 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. You have a domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. 
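Optionally, before you begin, you can confirm that the virtualization tooling is available on the RHEL KVM host; these commands are only a quick sanity check and are not part of the documented procedure:
USD virt-install --version
USD virsh list --all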
Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URL of this file. You can validate that the Ignition file is available on the URL. The following example gets the Ignition config file for the compute node: USD curl -k http://<HTTP_server>/worker.ign Download the RHEL live kernel , initramfs , and rootfs files by running the following commands: USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location') USD curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' \ | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location') Move the downloaded RHEL live kernel , initramfs , and rootfs files to an HTTP or HTTPS server before you launch virt-install . Create the new KVM guest nodes using the RHEL kernel , initramfs , and Ignition files; the new disk image; and adjusted parm line arguments. USD virt-install \ --connect qemu:///system \ --name <vm_name> \ --autostart \ --os-variant rhel9.4 \ 1 --cpu host \ --vcpus <vcpus> \ --memory <memory_mb> \ --disk <vm_name>.qcow2,size=<image_size> \ --network network=<virt_network_parm> \ --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \ 2 --extra-args "rd.neednet=1" \ --extra-args "coreos.inst.install_dev=/dev/vda" \ --extra-args "coreos.inst.ignition_url=http://<http_server>/worker.ign " \ 3 --extra-args "coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img" \ 4 --extra-args "ip=<ip>::<gateway>:<netmask>:<hostname>::none" \ 5 --extra-args "nameserver=<dns>" \ --extra-args "console=ttysclp0" \ --noautoconsole \ --wait 1 For os-variant , specify the RHEL version for the RHCOS compute machine. rhel9.4 is the recommended version. To query the supported RHEL version of your operating system, run the following command: USD osinfo-query os -f short-id Note The os-variant is case sensitive. 2 For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. 3 Specify the location of the worker.ign config file. Only HTTP and HTTPS protocols are supported. 4 Specify the location of the rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported 5 Optional: For hostname , specify the fully qualified hostname of the client machine. Note If you are using HAProxy as a load balancer, update your HAProxy rules for ingress-router-443 and ingress-router-80 in the /etc/haproxy/haproxy.cfg configuration file. Continue to create more compute machines for your cluster. 3.8.3. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
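While the new machines register with the cluster, it can be convenient to watch certificate signing requests as they arrive instead of polling; the --watch flag is optional and the command can be interrupted at any time:
USD oc get csr --watch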
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.9. Creating a cluster with multi-architecture compute machines on IBM Power To create a cluster with multi-architecture compute machines on IBM Power(R) ( ppc64le ), you must have an existing single-architecture ( x86_64 ) cluster. You can then add ppc64le compute machines to your OpenShift Container Platform cluster. Important Before you can add ppc64le nodes to your cluster, you must upgrade your cluster to one that uses the multi-architecture payload. For more information on migrating to the multi-architecture payload, see Migrating to a cluster with multi-architecture compute machines . The following procedures explain how to create a RHCOS compute machine using an ISO image or network PXE booting. This will allow you to add ppc64le nodes to your cluster and deploy a cluster with multi-architecture compute machines. To create an IBM Power(R) ( ppc64le ) cluster with multi-architecture compute machines on x86_64 , follow the instructions for Installing a cluster on IBM Power(R) . You can then add x86_64 compute machines as described in Creating a cluster with multi-architecture compute machines on bare metal, IBM Power, or IBM Z . Note Before adding a secondary architecture node to your cluster, it is recommended to install the Multiarch Tuning Operator, and deploy a ClusterPodPlacementConfig object. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.9.1. Verifying cluster compatibility Before you can start adding compute nodes of different architectures to your cluster, you must verify that your cluster is multi-architecture compatible. Prerequisites You installed the OpenShift CLI ( oc ). Note When using multiple architectures, hosts for OpenShift Container Platform nodes must share the same storage layer. If they do not have the same storage layer, use a storage provider such as nfs-provisioner . Note You should limit the number of network hops between the compute and control plane as much as possible. Procedure Log in to the OpenShift CLI ( oc ). You can check that your cluster uses the architecture payload by running the following command: USD oc adm release info -o jsonpath="{ .metadata.metadata}" Verification If you see the following output, your cluster is using the multi-architecture payload: { "release.openshift.io/architecture": "multi", "url": "https://access.redhat.com/errata/<errata_version>" } You can then begin adding multi-arch compute nodes to your cluster. 
If you see the following output, your cluster is not using the multi-architecture payload: { "url": "https://access.redhat.com/errata/<errata_version>" } Important To migrate your cluster so the cluster supports multi-architecture compute machines, follow the procedure in Migrating to a cluster with multi-architecture compute machines . 3.9.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the Ignition config files are available at the URLs. The following example gets the Ignition config files for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running the following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options, or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device.
The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 3.9.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + ppc64le ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console To configure a different console, add one or more console= arguments to the kernel line. 
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on ppc64le architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on ppc64le : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 3.9.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
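If you prefer not to poll manually, you can block until the new nodes report the Ready condition. The following command is an illustrative sketch rather than part of the documented procedure; it assumes the new compute nodes are named worker-0-ppc64le and worker-1-ppc64le (as in the example output that follows), so substitute your own node names and adjust the timeout for your environment:

# Wait for the newly added compute nodes to report the Ready condition, failing after 15 minutes
oc wait --for=condition=Ready node/worker-0-ppc64le node/worker-1-ppc64le --timeout=15m

The command exits with a nonzero status if the timeout expires, which makes it convenient in automation. You can also select the nodes by a label such as -l kubernetes.io/arch=ppc64le instead of listing node names.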
Verify this by running the following command: USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.31.3 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.31.3 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.31.3 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.31.3 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.31.3 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.31.3 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.31.3 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.10. Managing a cluster with multi-architecture compute machines Managing a cluster that has nodes with multiple architectures requires you to consider node architecture as you monitor the cluster and manage your workloads. This requires you to take additional considerations into account when you configure cluster resource requirements and behavior, or schedule workloads in a multi-architecture cluster. 3.10.1. Scheduling workloads on clusters with multi-architecture compute machines When you deploy workloads on a cluster with compute nodes that use different architectures, you must align pod architecture with the architecture of the underlying node. Your workload may also require additional configuration to particular resources depending on the underlying node architecture. You can use the Multiarch Tuning Operator to enable architecture-aware scheduling of workloads on clusters with multi-architecture compute machines. The Multiarch Tuning Operator implements additional scheduler predicates in the pods specifications based on the architectures that the pods can support at creation time. 3.10.1.1. Sample multi-architecture node workload deployments Scheduling a workload to an appropriate node based on architecture works in the same way as scheduling based on any other node characteristic. Consider the following options when determining how to schedule your workloads. Using nodeAffinity to schedule nodes with specific architectures You can allow a workload to be scheduled on only a set of nodes with architectures supported by its images, you can set the spec.affinity.nodeAffinity field in your pod's template specification. apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ... 
spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: 1 - amd64 - arm64 1 Specify the supported architectures. Valid values include amd64 , arm64 , or both values. Tainting each node for a specific architecture You can taint a node to avoid the node scheduling workloads that are incompatible with its architecture. When your cluster uses a MachineSet object, you can add parameters to the .spec.template.spec.taints field to avoid workloads being scheduled on nodes with non-supported architectures. Before you add a taint to a node, you must scale down the MachineSet object or remove existing available machines. For more information, see Modifying a compute machine set . apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... spec: # ... template: # ... spec: # ... taints: - effect: NoSchedule key: multiarch.openshift.io/arch value: arm64 You can also set a taint on a specific node by running the following command: USD oc adm taint nodes <node-name> multiarch.openshift.io/arch=arm64:NoSchedule Creating a default toleration in a namespace When a node or machine set has a taint, only workloads that tolerate that taint can be scheduled. You can annotate a namespace so all of the workloads get the same default toleration by running the following command: USD oc annotate namespace my-namespace \ 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{"operator": "Exists", "effect": "NoSchedule", "key": "multiarch.openshift.io/arch"}]' Tolerating architecture taints in workloads When a node or machine set has a taint, only workloads that tolerate that taint can be scheduled. You can configure your workload with a toleration so that it is scheduled on nodes with specific architecture taints. apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ... spec: tolerations: - key: "multiarch.openshift.io/arch" value: "arm64" operator: "Equal" effect: "NoSchedule" This example deployment can be scheduled on nodes and machine sets that have the multiarch.openshift.io/arch=arm64 taint specified. Using node affinity with taints and tolerations When a scheduler computes the set of nodes to schedule a pod, tolerations can broaden the set while node affinity restricts the set. If you set a taint on nodes that have a specific architecture, you must also add a toleration to workloads that you want to be scheduled there. apiVersion: apps/v1 kind: Deployment metadata: # ... spec: # ... template: # ... spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - amd64 - arm64 tolerations: - key: "multiarch.openshift.io/arch" value: "arm64" operator: "Equal" effect: "NoSchedule" Additional resources Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator Controlling pod placement using node taints Controlling pod placement on nodes using node affinity Controlling pod placement using the scheduler Modifying a compute machine set 3.10.2. Enabling 64k pages on the Red Hat Enterprise Linux CoreOS (RHCOS) kernel You can enable the 64k memory page in the Red Hat Enterprise Linux CoreOS (RHCOS) kernel on the 64-bit ARM compute machines in your cluster. The 64k page size kernel specification can be used for large GPU or high memory workloads. 
This is done using the Machine Config Operator (MCO) which uses a machine config pool to update the kernel. To enable 64k page sizes, you must dedicate a machine config pool for ARM64 to enable on the kernel. Important Using 64k pages is exclusive to 64-bit ARM architecture compute nodes or clusters installed on 64-bit ARM machines. If you configure the 64k pages kernel on a machine config pool using 64-bit x86 machines, the machine config pool and MCO will degrade. Prerequisites You installed the OpenShift CLI ( oc ). You created a cluster with compute nodes of different architecture on one of the supported platforms. Procedure Label the nodes where you want to run the 64k page size kernel: USD oc label node <node_name> <label> Example command USD oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages= Create a machine config pool that contains the worker role that uses the ARM64 architecture and the worker-64k-pages role: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-64k-pages spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-64k-pages nodeSelector: matchLabels: node-role.kubernetes.io/worker-64k-pages: "" kubernetes.io/arch: arm64 Create a machine config on your compute node to enable 64k-pages with the 64k-pages parameter. USD oc create -f <filename>.yaml Example MachineConfig apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker-64k-pages" 1 name: 99-worker-64kpages spec: kernelType: 64k-pages 2 1 Specify the value of the machineconfiguration.openshift.io/role label in the custom machine config pool. The example MachineConfig uses the worker-64k-pages label to enable 64k pages in the worker-64k-pages pool. 2 Specify your desired kernel type. Valid values are 64k-pages and default Note The 64k-pages type is supported on only 64-bit ARM architecture based compute nodes. The realtime type is supported on only 64-bit x86 architecture based compute nodes. Verification To view your new worker-64k-pages machine config pool, run the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m 3.10.3. Importing manifest lists in image streams on your multi-architecture compute machines On an OpenShift Container Platform 4.18 cluster with multi-architecture compute machines, the image streams in the cluster do not import manifest lists automatically. You must manually change the default importMode option to the PreserveOriginal option in order to import the manifest list. Prerequisites You installed the OpenShift Container Platform CLI ( oc ). Procedure The following example command shows how to patch the ImageStream cli-artifacts so that the cli-artifacts:latest image stream tag is imported as a manifest list. USD oc patch is/cli-artifacts -n openshift -p '{"spec":{"tags":[{"name":"latest","importPolicy":{"importMode":"PreserveOriginal"}}]}}' Verification You can check that the manifest lists imported properly by inspecting the image stream tag. 
The following command will list the individual architecture manifests for a particular tag. USD oc get istag cli-artifacts:latest -n openshift -oyaml If the dockerImageManifests object is present, then the manifest list import was successful. Example output of the dockerImageManifests object dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux 3.11. Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator The Multiarch Tuning Operator optimizes workload management within multi-architecture clusters and in single-architecture clusters transitioning to multi-architecture environments. Architecture-aware workload scheduling allows the scheduler to place pods onto nodes that match the architecture of the pod images. By default, the scheduler does not consider the architecture of a pod's container images when determining the placement of new pods onto nodes. To enable architecture-aware workload scheduling, you must create the ClusterPodPlacementConfig object. When you create the ClusterPodPlacementConfig object, the Multiarch Tuning Operator deploys the necessary operands to support architecture-aware workload scheduling. You can also use the nodeAffinityScoring plugin in the ClusterPodPlacementConfig object to set cluster-wide scores for node architectures. If you enable the nodeAffinityScoring plugin, the scheduler first filters nodes with compatible architectures and then places the pod on the node with the highest score. When a pod is created, the operands perform the following actions: Add the multiarch.openshift.io/scheduling-gate scheduling gate that prevents the scheduling of the pod. Compute a scheduling predicate that includes the supported architecture values for the kubernetes.io/arch label. Integrate the scheduling predicate as a nodeAffinity requirement in the pod specification. Remove the scheduling gate from the pod. Important Note the following operand behaviors: If the nodeSelector field is already configured with the kubernetes.io/arch label for a workload, the operand does not update the nodeAffinity field for that workload. If the nodeSelector field is not configured with the kubernetes.io/arch label for a workload, the operand updates the nodeAffinity field for that workload. However, in that nodeAffinity field, the operand updates only the node selector terms that are not configured with the kubernetes.io/arch label. If the nodeName field is already set, the Multiarch Tuning Operator does not process the pod. If the pod is owned by a DaemonSet, the operand does not update the the nodeAffinity field. If both nodeSelector or nodeAffinity and preferredAffinity fields are set for the kubernetes.io/arch label, the operand does not update the nodeAffinity field. 
If only nodeSelector or nodeAffinity field is set for the kubernetes.io/arch label and the nodeAffinityScoring plugin is disabled, the operand does not update the nodeAffinity field. If the nodeAffinity.preferredDuringSchedulingIgnoredDuringExecution field already contains terms that score nodes based on the kubernetes.io/arch label, the operand ignores the configuration in the nodeAffinityScoring plugin. 3.11.1. Installing the Multiarch Tuning Operator by using the CLI You can install the Multiarch Tuning Operator by using the OpenShift CLI ( oc ). Prerequisites You have installed oc . You have logged in to oc as a user with cluster-admin privileges. Procedure Create a new project named openshift-multiarch-tuning-operator by running the following command: USD oc create ns openshift-multiarch-tuning-operator Create an OperatorGroup object: Create a YAML file with the configuration for creating an OperatorGroup object. Example YAML configuration for creating an OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: {} Create the OperatorGroup object by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file that contains the OperatorGroup object configuration. Create a Subscription object: Create a YAML file with the configuration for creating a Subscription object. Example YAML configuration for creating a Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: channel: stable name: multiarch-tuning-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic startingCSV: multiarch-tuning-operator.<version> Create the Subscription object by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the YAML file that contains the Subscription object configuration. Note For more details about configuring the Subscription object and OperatorGroup object, see "Installing from OperatorHub using the CLI". Verification To verify that the Multiarch Tuning Operator is installed, run the following command: USD oc get csv -n openshift-multiarch-tuning-operator Example output NAME DISPLAY VERSION REPLACES PHASE multiarch-tuning-operator.<version> Multiarch Tuning Operator <version> multiarch-tuning-operator.1.0.0 Succeeded The installation is successful if the Operator is in Succeeded phase. Optional: To verify that the OperatorGroup object is created, run the following command: USD oc get operatorgroup -n openshift-multiarch-tuning-operator Example output NAME AGE openshift-multiarch-tuning-operator-q8zbb 133m Optional: To verify that the Subscription object is created, run the following command: USD oc get subscription -n openshift-multiarch-tuning-operator Example output NAME PACKAGE SOURCE CHANNEL multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable Additional resources Installing from OperatorHub using the CLI 3.11.2. Installing the Multiarch Tuning Operator by using the web console You can install the Multiarch Tuning Operator by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. 
Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Enter Multiarch Tuning Operator in the search field. Click Multiarch Tuning Operator . Select the Multiarch Tuning Operator version from the Version list. Click Install Set the following options on the Operator Installation page: Set Update Channel to stable . Set Installation Mode to All namespaces on the cluster . Set Installed Namespace to Operator recommended Namespace or Select a Namespace . The recommended Operator namespace is openshift-multiarch-tuning-operator . If the openshift-multiarch-tuning-operator namespace does not exist, it is created during the operator installation. If you select Select a namespace , you must select a namespace for the Operator from the Select Project list. Update approval as Automatic or Manual . If you select Automatic updates, Operator Lifecycle Manager (OLM) automatically updates the running instance of the Multiarch Tuning Operator without any intervention. If you select Manual updates, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Multiarch Tuning Operator to a newer version. Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox. Click Install . Verification Navigate to Operators Installed Operators . Verify that the Multiarch Tuning Operator is listed with the Status field as Succeeded in the openshift-multiarch-tuning-operator namespace. 3.11.3. Multiarch Tuning Operator pod labels and architecture support overview After installing the Multiarch Tuning Operator, you can verify the multi-architecture support for workloads in your cluster. You can identify and manage pods based on their architecture compatibility by using the pod labels. These labels are automatically set on the newly created pods to provide insights into their architecture support. The following table describes the labels that the Multiarch Tuning Operator adds when you create a pod: Table 3.2. Pod labels that the Multiarch Tuning Operator adds when you create a pod Label Description multiarch.openshift.io/multi-arch: "" The pod supports multiple architectures. multiarch.openshift.io/single-arch: "" The pod supports only a single architecture. multiarch.openshift.io/arm64: "" The pod supports the arm64 architecture. multiarch.openshift.io/amd64: "" The pod supports the amd64 architecture. multiarch.openshift.io/ppc64le: "" The pod supports the ppc64le architecture. multiarch.openshift.io/s390x: "" The pod supports the s390x architecture. multirach.openshift.io/node-affinity: set The Operator has set the node affinity requirement for the architecture. multirach.openshift.io/node-affinity: not-set The Operator did not set the node affinity requirement. For example, when the pod already has a node affinity for the architecture, the Multiarch Tuning Operator adds this label to the pod. multiarch.openshift.io/scheduling-gate: gated The pod is gated. multiarch.openshift.io/scheduling-gate: removed The pod gate has been removed. multiarch.openshift.io/inspection-error: "" An error has occurred while building the node affinity requirements. multiarch.openshift.io/preferred-node-affinity: set The Operator has set the architecture preferences in the pod. multiarch.openshift.io/preferred-node-affinity: not-set The Operator did not set the architecture preferences in the pod because the user had already set them in the preferredDuringSchedulingIgnoredDuringExecution node affinity. 
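Because these are ordinary pod labels, you can combine them with standard label selectors to audit architecture support across a namespace. The following commands are an illustrative sketch rather than part of the product documentation; the my-app namespace and the <pod_name> placeholder are assumptions, and jq is required only for the last command:

# List pods that the operand identified as supporting multiple architectures
oc get pods -n my-app -l 'multiarch.openshift.io/multi-arch='

# List pods that support only the arm64 architecture
oc get pods -n my-app -l 'multiarch.openshift.io/single-arch=,multiarch.openshift.io/arm64='

# Show every multiarch.openshift.io label that the operand set on a specific pod
oc get pod <pod_name> -n my-app -o json | jq '.metadata.labels | with_entries(select(.key | startswith("multiarch.openshift.io")))'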
3.11.4. Creating the ClusterPodPlacementConfig object After installing the Multiarch Tuning Operator, you must create the ClusterPodPlacementConfig object. When you create this object, the Multiarch Tuning Operator deploys an operand that enables architecture-aware workload scheduling. Note You can create only one instance of the ClusterPodPlacementConfig object. Example ClusterPodPlacementConfig object configuration apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster 1 spec: logVerbosityLevel: Normal 2 namespaceSelector: 3 matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: 4 nodeAffinityScoring: 5 enabled: true 6 platforms: 7 - architecture: amd64 8 weight: 100 9 - architecture: arm64 weight: 50 1 You must set this field value to cluster . 2 Optional: You can set the field value to Normal , Debug , Trace , or TraceAll . The value is set to Normal by default. 3 Optional: You can configure the namespaceSelector to select the namespaces in which the Multiarch Tuning Operator's pod placement operand must process the nodeAffinity of the pods. All namespaces are considered by default. 4 Optional: Includes a list of plugins for architecture-aware workload scheduling. 5 Optional: You can use this plugin to set architecture preferences for pod placement. When enabled, the scheduler first filters out nodes that do not meet the pod's requirements. Then, it prioritizes the remaining nodes based on the architecture scores defined in the nodeAffinityScoring.platforms field. 6 Optional: Set this field to true to enable the nodeAffinityScoring plugin. The default value is false . 7 Optional: Defines a list of architectures and their corresponding scores. 8 Specify the node architecture to score. The scheduler prioritizes nodes for pod placement based on the architecture scores that you set and the scheduling requirements defined in the pod specification. Accepted values are arm64 , amd64 , ppc64le , or s390x . 9 Assign a score to the architecture. The value for this field must be configured in the range of 1 (lowest priority) to 100 (highest priority). The scheduler uses this score to prioritize nodes for pod placement, favoring nodes with architectures that have higher scores. In this example, the operator field value is set to DoesNotExist . Therefore, if the key field value ( multiarch.openshift.io/exclude-pod-placement ) is set as a label in a namespace, the operand does not process the nodeAffinity of the pods in that namespace. Instead, the operand processes the nodeAffinity of the pods in namespaces that do not contain the label. If you want the operand to process the nodeAffinity of the pods only in specific namespaces, you can configure the namespaceSelector as follows: namespaceSelector: matchExpressions: - key: multiarch.openshift.io/include-pod-placement operator: Exists In this example, the operator field value is set to Exists . Therefore, the operand processes the nodeAffinity of the pods only in namespaces that contain the multiarch.openshift.io/include-pod-placement label. Important This Operator excludes pods in namespaces starting with kube- . It also excludes pods that are expected to be scheduled on control plane nodes. 3.11.4.1. Creating the ClusterPodPlacementConfig object by using the CLI To deploy the pod placement operand that enables architecture-aware workload scheduling, you can create the ClusterPodPlacementConfig object by using the OpenShift CLI ( oc ). 
Prerequisites You have installed oc . You have logged in to oc as a user with cluster-admin privileges. You have installed the Multiarch Tuning Operator. Procedure Create a ClusterPodPlacementConfig object YAML file: Example ClusterPodPlacementConfig object configuration apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster spec: logVerbosityLevel: Normal namespaceSelector: matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: nodeAffinityScoring: enabled: true platforms: - architecture: amd64 weight: 100 - architecture: arm64 weight: 50 Create the ClusterPodPlacementConfig object by running the following command: USD oc create -f <file_name> 1 1 Replace <file_name> with the name of the ClusterPodPlacementConfig object YAML file. Verification To check that the ClusterPodPlacementConfig object is created, run the following command: USD oc get clusterpodplacementconfig Example output NAME AGE cluster 29s 3.11.4.2. Creating the ClusterPodPlacementConfig object by using the web console To deploy the pod placement operand that enables architecture-aware workload scheduling, you can create the ClusterPodPlacementConfig object by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have installed the Multiarch Tuning Operator. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . On the Installed Operators page, click Multiarch Tuning Operator . Click the Cluster Pod Placement Config tab. Select either Form view or YAML view . Configure the ClusterPodPlacementConfig object parameters. Click Create . Optional: If you want to edit the ClusterPodPlacementConfig object, perform the following actions: Click the Cluster Pod Placement Config tab. Select Edit ClusterPodPlacementConfig from the options menu. Click YAML and edit the ClusterPodPlacementConfig object parameters. Click Save . Verification On the Cluster Pod Placement Config page, check that the ClusterPodPlacementConfig object is in the Ready state. 3.11.5. Deleting the ClusterPodPlacementConfig object by using the CLI You can create only one instance of the ClusterPodPlacementConfig object. If you want to re-create this object, you must first delete the existing instance. You can delete this object by using the OpenShift CLI ( oc ). Prerequisites You have installed oc . You have logged in to oc as a user with cluster-admin privileges. Procedure Log in to the OpenShift CLI ( oc ). Delete the ClusterPodPlacementConfig object by running the following command: USD oc delete clusterpodplacementconfig cluster Verification To check that the ClusterPodPlacementConfig object is deleted, run the following command: USD oc get clusterpodplacementconfig Example output No resources found 3.11.6. Deleting the ClusterPodPlacementConfig object by using the web console You can create only one instance of the ClusterPodPlacementConfig object. If you want to re-create this object, you must first delete the existing instance. You can delete this object by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin privileges. You have access to the OpenShift Container Platform web console. You have created the ClusterPodPlacementConfig object. Procedure Log in to the OpenShift Container Platform web console. 
Navigate to Operators Installed Operators . On the Installed Operators page, click Multiarch Tuning Operator . Click the Cluster Pod Placement Config tab. Select Delete ClusterPodPlacementConfig from the options menu. Click Delete . Verification On the Cluster Pod Placement Config page, check that the ClusterPodPlacementConfig object has been deleted. 3.11.7. Uninstalling the Multiarch Tuning Operator by using the CLI You can uninstall the Multiarch Tuning Operator by using the OpenShift CLI ( oc ). Prerequisites You have installed oc . You have logged in to oc as a user with cluster-admin privileges. You deleted the ClusterPodPlacementConfig object. Important You must delete the ClusterPodPlacementConfig object before uninstalling the Multiarch Tuning Operator. Uninstalling the Operator without deleting the ClusterPodPlacementConfig object can lead to unexpected behavior. Procedure Get the Subscription object name for the Multiarch Tuning Operator by running the following command: USD oc get subscription.operators.coreos.com -n <namespace> 1 1 Replace <namespace> with the name of the namespace where you want to uninstall the Multiarch Tuning Operator. Example output NAME PACKAGE SOURCE CHANNEL openshift-multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable Get the currentCSV value for the Multiarch Tuning Operator by running the following command: USD oc get subscription.operators.coreos.com <subscription_name> -n <namespace> -o yaml | grep currentCSV 1 1 Replace <subscription_name> with the Subscription object name. For example: openshift-multiarch-tuning-operator . Replace <namespace> with the name of the namespace where you want to uninstall the Multiarch Tuning Operator. Example output currentCSV: multiarch-tuning-operator.<version> Delete the Subscription object by running the following command: USD oc delete subscription.operators.coreos.com <subscription_name> -n <namespace> 1 1 Replace <subscription_name> with the Subscription object name. Replace <namespace> with the name of the namespace where you want to uninstall the Multiarch Tuning Operator. Example output subscription.operators.coreos.com "openshift-multiarch-tuning-operator" deleted Delete the CSV for the Multiarch Tuning Operator in the target namespace using the currentCSV value by running the following command: USD oc delete clusterserviceversion <currentCSV_value> -n <namespace> 1 1 Replace <currentCSV> with the currentCSV value for the Multiarch Tuning Operator. For example: multiarch-tuning-operator.<version> . Replace <namespace> with the name of the namespace where you want to uninstall the Multiarch Tuning Operator. Example output clusterserviceversion.operators.coreos.com "multiarch-tuning-operator.<version>" deleted Verification To verify that the Multiarch Tuning Operator is uninstalled, run the following command: USD oc get csv -n <namespace> 1 1 Replace <namespace> with the name of the namespace where you have uninstalled the Multiarch Tuning Operator. Example output No resources found in openshift-multiarch-tuning-operator namespace. 3.11.8. Uninstalling the Multiarch Tuning Operator by using the web console You can uninstall the Multiarch Tuning Operator by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster with cluster-admin permissions. You deleted the ClusterPodPlacementConfig object. Important You must delete the ClusterPodPlacementConfig object before uninstalling the Multiarch Tuning Operator. 
Uninstalling the Operator without deleting the ClusterPodPlacementConfig object can lead to unexpected behavior. Procedure Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . Enter Multiarch Tuning Operator in the search field. Click Multiarch Tuning Operator . Click the Details tab. From the Actions menu, select Uninstall Operator . When prompted, click Uninstall . Verification Navigate to Operators Installed Operators . On the Installed Operators page, verify that the Multiarch Tuning Operator is not listed. 3.12. Multiarch Tuning Operator release notes The Multiarch Tuning Operator optimizes workload management within multi-architecture clusters and in single-architecture clusters transitioning to multi-architecture environments. These release notes track the development of the Multiarch Tuning Operator. For more information, see Managing workloads on multi-architecture clusters by using the Multiarch Tuning Operator . 3.12.1. Release notes for the Multiarch Tuning Operator 1.1.0 Issued: 18 March 2025 3.12.1.1. New features and enhancements The Multiarch Tuning Operator is now supported on managed offerings, including ROSA with Hosted Control Planes (HCP) and other HCP environments. With this release, you can configure architecture-aware workload scheduling by using the new plugins field in the ClusterPodPlacementConfig object. You can use the plugins.nodeAffinityScoring field to set architecture preferences for pod placement. If you enable the nodeAffinityScoring plugin, the scheduler first filters out nodes that do not meet the pod requirements. Then, the scheduler prioritizes the remaining nodes based on the architecture scores defined in the nodeAffinityScoring.platforms field. 3.12.1.1.1. Bug fixes With this release, the Multiarch Tuning Operator does not update the nodeAffinity field for pods that are managed by a daemon set. ( OCPBUGS-45885 ) 3.12.2. Release notes for the Multiarch Tuning Operator 1.0.0 Issued: 31 October 2024 3.12.2.1. New features and enhancements With this release, the Multiarch Tuning Operator supports custom network scenarios and cluster-wide custom registries configurations. With this release, you can identify pods based on their architecture compatibility by using the pod labels that the Multiarch Tuning Operator adds to newly created pods. With this release, you can monitor the behavior of the Multiarch Tuning Operator by using the metrics and alerts that are registered in the Cluster Monitoring Operator.
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<http_server>/worker.ign",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"ipl c",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<http_server>/worker.ign",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/dasda coreos.inst.ignition_url=http://<http_server>/worker.ign coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 rd.dasd=0.0.3490 zfcp.allow_lun_scan=0",
"cio_ignore=all,!condev rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=/dev/sda coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.ignition_url=http://<http_server>/worker.ign ip=<ip>::<gateway>:<netmask>:<hostname>::none nameserver=<dns> rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.kernel.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.initramfs.location')",
"curl -LO USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.s390x.artifacts.metal.formats.pxe.rootfs.location')",
"virt-install --connect qemu:///system --name <vm_name> --autostart --os-variant rhel9.4 \\ 1 --cpu host --vcpus <vcpus> --memory <memory_mb> --disk <vm_name>.qcow2,size=<image_size> --network network=<virt_network_parm> --location <media_location>,kernel=<rhcos_kernel>,initrd=<rhcos_initrd> \\ 2 --extra-args \"rd.neednet=1\" --extra-args \"coreos.inst.install_dev=/dev/vda\" --extra-args \"coreos.inst.ignition_url=http://<http_server>/worker.ign \" \\ 3 --extra-args \"coreos.live.rootfs_url=http://<http_server>/rhcos-<version>-live-rootfs.<architecture>.img\" \\ 4 --extra-args \"ip=<ip>::<gateway>:<netmask>:<hostname>::none\" \\ 5 --extra-args \"nameserver=<dns>\" --extra-args \"console=ttysclp0\" --noautoconsole --wait",
"osinfo-query os -f short-id",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"oc adm release info -o jsonpath=\"{ .metadata.metadata}\"",
"{ \"release.openshift.io/architecture\": \"multi\", \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"{ \"url\": \"https://access.redhat.com/errata/<errata_version>\" }",
"oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign",
"curl -k http://<HTTP_server>/worker.ign",
"RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes -o wide",
"NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME worker-0-ppc64le Ready worker 42d v1.31.3 192.168.200.21 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-1-ppc64le Ready worker 42d v1.31.3 192.168.200.20 <none> Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.ppc64le cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-0-x86 Ready control-plane,master 75d v1.31.3 10.248.0.38 10.248.0.38 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-1-x86 Ready control-plane,master 75d v1.31.3 10.248.0.39 10.248.0.39 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 master-2-x86 Ready control-plane,master 75d v1.31.3 10.248.0.40 10.248.0.40 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-0-x86 Ready worker 75d v1.31.3 10.248.0.43 10.248.0.43 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9 worker-1-x86 Ready worker 75d v1.31.3 10.248.0.44 10.248.0.44 Red Hat Enterprise Linux CoreOS 415.92.202309261919-0 (Plow) 5.14.0-284.34.1.el9_2.x86_64 cri-o://1.31.3-3.rhaos4.15.gitb36169e.el9",
"apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: 1 - amd64 - arm64",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # spec: # template: # spec: # taints: - effect: NoSchedule key: multiarch.openshift.io/arch value: arm64",
"oc adm taint nodes <node-name> multiarch.openshift.io/arch=arm64:NoSchedule",
"oc annotate namespace my-namespace 'scheduler.alpha.kubernetes.io/defaultTolerations'='[{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"multiarch.openshift.io/arch\"}]'",
"apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: tolerations: - key: \"multiarch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: \"NoSchedule\"",
"apiVersion: apps/v1 kind: Deployment metadata: # spec: # template: # spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - amd64 - arm64 tolerations: - key: \"multiarch.openshift.io/arch\" value: \"arm64\" operator: \"Equal\" effect: \"NoSchedule\"",
"oc label node <node_name> <label>",
"oc label node worker-arm64-01 node-role.kubernetes.io/worker-64k-pages=",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-64k-pages spec: machineConfigSelector: matchExpressions: - key: machineconfiguration.openshift.io/role operator: In values: - worker - worker-64k-pages nodeSelector: matchLabels: node-role.kubernetes.io/worker-64k-pages: \"\" kubernetes.io/arch: arm64",
"oc create -f <filename>.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker-64k-pages\" 1 name: 99-worker-64kpages spec: kernelType: 64k-pages 2",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-9d55ac9a91127c36314e1efe7d77fbf8 True False False 3 3 3 0 361d worker rendered-worker-e7b61751c4a5b7ff995d64b967c421ff True False False 7 7 7 0 361d worker-64k-pages rendered-worker-64k-pages-e7b61751c4a5b7ff995d64b967c421ff True False False 2 2 2 0 35m",
"oc patch is/cli-artifacts -n openshift -p '{\"spec\":{\"tags\":[{\"name\":\"latest\",\"importPolicy\":{\"importMode\":\"PreserveOriginal\"}}]}}'",
"oc get istag cli-artifacts:latest -n openshift -oyaml",
"dockerImageManifests: - architecture: amd64 digest: sha256:16d4c96c52923a9968fbfa69425ec703aff711f1db822e4e9788bf5d2bee5d77 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: arm64 digest: sha256:6ec8ad0d897bcdf727531f7d0b716931728999492709d19d8b09f0d90d57f626 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: ppc64le digest: sha256:65949e3a80349cdc42acd8c5b34cde6ebc3241eae8daaeea458498fedb359a6a manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux - architecture: s390x digest: sha256:75f4fa21224b5d5d511bea8f92dfa8e1c00231e5c81ab95e83c3013d245d1719 manifestSize: 1252 mediaType: application/vnd.docker.distribution.manifest.v2+json os: linux",
"oc create ns openshift-multiarch-tuning-operator",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: {}",
"oc create -f <file_name> 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-multiarch-tuning-operator namespace: openshift-multiarch-tuning-operator spec: channel: stable name: multiarch-tuning-operator source: redhat-operators sourceNamespace: openshift-marketplace installPlanApproval: Automatic startingCSV: multiarch-tuning-operator.<version>",
"oc create -f <file_name> 1",
"oc get csv -n openshift-multiarch-tuning-operator",
"NAME DISPLAY VERSION REPLACES PHASE multiarch-tuning-operator.<version> Multiarch Tuning Operator <version> multiarch-tuning-operator.1.0.0 Succeeded",
"oc get operatorgroup -n openshift-multiarch-tuning-operator",
"NAME AGE openshift-multiarch-tuning-operator-q8zbb 133m",
"oc get subscription -n openshift-multiarch-tuning-operator",
"NAME PACKAGE SOURCE CHANNEL multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable",
"apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster 1 spec: logVerbosityLevel: Normal 2 namespaceSelector: 3 matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: 4 nodeAffinityScoring: 5 enabled: true 6 platforms: 7 - architecture: amd64 8 weight: 100 9 - architecture: arm64 weight: 50",
"namespaceSelector: matchExpressions: - key: multiarch.openshift.io/include-pod-placement operator: Exists",
"apiVersion: multiarch.openshift.io/v1beta1 kind: ClusterPodPlacementConfig metadata: name: cluster spec: logVerbosityLevel: Normal namespaceSelector: matchExpressions: - key: multiarch.openshift.io/exclude-pod-placement operator: DoesNotExist plugins: nodeAffinityScoring: enabled: true platforms: - architecture: amd64 weight: 100 - architecture: arm64 weight: 50",
"oc create -f <file_name> 1",
"oc get clusterpodplacementconfig",
"NAME AGE cluster 29s",
"oc delete clusterpodplacementconfig cluster",
"oc get clusterpodplacementconfig",
"No resources found",
"oc get subscription.operators.coreos.com -n <namespace> 1",
"NAME PACKAGE SOURCE CHANNEL openshift-multiarch-tuning-operator multiarch-tuning-operator redhat-operators stable",
"oc get subscription.operators.coreos.com <subscription_name> -n <namespace> -o yaml | grep currentCSV 1",
"currentCSV: multiarch-tuning-operator.<version>",
"oc delete subscription.operators.coreos.com <subscription_name> -n <namespace> 1",
"subscription.operators.coreos.com \"openshift-multiarch-tuning-operator\" deleted",
"oc delete clusterserviceversion <currentCSV_value> -n <namespace> 1",
"clusterserviceversion.operators.coreos.com \"multiarch-tuning-operator.<version>\" deleted",
"oc get csv -n <namespace> 1",
"No resources found in openshift-multiarch-tuning-operator namespace."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/postinstallation_configuration/configuring-multi-architecture-compute-machines-on-an-openshift-cluster
|
function::sprint_ustack
|
function::sprint_ustack Name function::sprint_ustack - Return stack for the current task from string. EXPERIMENTAL! Synopsis Arguments stk String with a list of hexadecimal addresses for the current task. Description Performs a symbolic lookup of the addresses in the given string, which is assumed to be the result of a prior call to ubacktrace for the current task. Returns a simple backtrace from the given hex string, one line per address. Each line includes the symbol name (or the hex address if the symbol could not be resolved) and the module name (if found). It also includes the offset from the start of the function if found; otherwise the offset is added to the module (if found, between brackets). Returns the backtrace as a string (each line terminated by a newline character). Note that the returned stack is truncated to MAXSTRINGLEN; to print fuller and richer stacks, use print_ustack.
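A usage sketch (not part of the original reference; the binary path is a placeholder): the string argument is normally the result of ubacktrace called inside a user-space probe, and the addresses only resolve to symbols when the probed executable and its libraries are supplied with -d/--ldd and carry debuginfo.
# Print a symbolic user-space backtrace when main() of the placeholder binary is entered, then exit.
stap --ldd -d /usr/bin/myapp -e 'probe process("/usr/bin/myapp").function("main") { println(sprint_ustack(ubacktrace())); exit() }' -c /usr/bin/myapp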
|
[
"function sprint_ustack:string(stk:string)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-sprint-ustack
|
Provisioning APIs
|
Provisioning APIs OpenShift Container Platform 4.15 Reference guide for provisioning APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/provisioning_apis/index
|
Chapter 4. PolicyKit
|
Chapter 4. PolicyKit The PolicyKit utility is a framework that provides an authorization API used by privileged programs (also called mechanisms ) offering services to unprivileged programs (also called subjects ). The following are details on the changes that PolicyKit , known by its system name polkit , has undergone. 4.1. Policy Configuration Among the new features, authorization rules are now defined in JavaScript .rules files. This means that the same files are used for defining both the rules and the administrator status. Previously, this information was stored in two different file types - *.pkla and *.conf , which used key/value pairs to define additional local authorizations. These new .rules files are stored in two locations; polkit rules for local customization are stored in the /etc/polkit-1/rules.d/ directory, while rules from third-party packages are stored in /usr/share/polkit-1/rules.d/ . The existing .conf and .pkla configuration files have been preserved and exist side by side with .rules files. polkit has been upgraded for Red Hat Enterprise Linux 7 with compatibility in mind. The precedence logic for rules has changed. polkitd now reads .rules files in lexicographic order from the /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d directories. If two files are named identically, files in /etc are processed before files in /usr . In addition, the existing .pkla and .conf rules are applied by the /etc/polkit-1/rules.d/49-polkit-pkla-compat.rules file. They can therefore be overridden by .rules files in either /usr or /etc with a name that comes before 49-polkit-pkla-compat in lexicographic order. The simplest way to ensure that your old rules are not overridden is to begin the names of all other .rules files with a number higher than 49. Here is an example of a .rules file. It creates a rule that allows mounting a file system on a system device for the storage group. The rule is stored in the /etc/polkit-1/rules.d/10-enable-mount.rules file: Example 4.1. Allow Mounting a File System on a System Device polkit.addRule(function(action, subject) { if (action.id == "org.freedesktop.udisks2.filesystem-mount-system" && subject.isInGroup("storage")) { return polkit.Result.YES; } }); For more information, see: polkit (8) - The man page with the description of the JavaScript rules and the precedence rules. pkla-admin-identities (8) and pkla-check-authorization (8) - The man pages documenting the .conf and .pkla file formats, respectively.
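To verify what a rule such as Example 4.1 actually changes, the stock polkit command-line tools can be used; a brief sketch follows (the action ID matches the example above, and <pid> is a placeholder for the process to test).
# Show the implicit defaults and other details registered for the action:
pkaction --action-id org.freedesktop.udisks2.filesystem-mount-system --verbose
# Ask polkitd whether a particular running process would be authorized for it:
pkcheck --action-id org.freedesktop.udisks2.filesystem-mount-system --process <pid>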
|
[
"polkit.addRule(function(action, subject) { if (action.id == \"org.freedesktop.udisks2.filesystem-mount-system\" && subject.isInGroup(\"storage\")) { return polkit.Result.YES; } });"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/policykit
|
Chapter 82. ExternalConfigurationEnv schema reference
|
Chapter 82. ExternalConfigurationEnv schema reference Used in: ExternalConfiguration Property Description name Name of the environment variable which will be passed to the Kafka Connect pods. The name of the environment variable cannot start with KAFKA_ or STRIMZI_ . string valueFrom Value of the environment variable which will be passed to the Kafka Connect pods. It can be passed either as a reference to a Secret or a ConfigMap field. The field has to specify exactly one Secret or ConfigMap. ExternalConfigurationEnvVarSource
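For illustration only (the resource, Secret, and key names are placeholders and the rest of the KafkaConnect spec is omitted), an ExternalConfigurationEnv entry is typically embedded under spec.externalConfiguration.env and applied as follows:
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  # ... the usual KafkaConnect configuration (bootstrapServers, replicas, and so on) ...
  externalConfiguration:
    env:
      - name: MY_CONNECTOR_PASSWORD      # must not start with KAFKA_ or STRIMZI_
        valueFrom:
          secretKeyRef:                  # exactly one of secretKeyRef or configMapKeyRef
            name: my-connector-credentials
            key: password
EOF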
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-ExternalConfigurationEnv-reference
|
Deploying OpenShift Data Foundation on VMware vSphere
|
Deploying OpenShift Data Foundation on VMware vSphere Red Hat OpenShift Data Foundation 4.17 Instructions on deploying OpenShift Data Foundation using VMware vSphere infrastructure Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on VMware vSphere clusters. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) vSphere clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. To deploy OpenShift Data Foundation, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and then follow any one of the below deployment process for your environment: Internal mode Deploy using dynamic storage devices Deploy using local storage devices Deploy standalone Multicloud Object Gateway External mode Deploying OpenShift Data Foundation in external mode Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic or local storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation using dynamic or local storage, ensure that your resource requirements are met. See the Resource requirements section in the Planning guide. Verify the rotational flag on your VMDKs before deploying object storage devices (OSDs) on them. For more information, see the knowledgebase article Override device rotational flag in ODF environment . Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Token authentication method . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with KMS using the Kubernetes authentication method . Ensure that you are using signed certificates on your Vault servers. 
Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the next step. To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. For deploying using local storage devices, see requirements for installing OpenShift Data Foundation using local storage devices . These are not applicable for deployment using dynamic storage devices. 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses one or more of the available raw block devices. 
The devices you use must be empty, the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription. A valid Red Hat Advanced Cluster Management (RHACM) for Kubernetes subscription. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed disaster recovery solution requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation. Arbiter stretch cluster requirements In this case, a single cluster is stretched across two zones with a third zone as the location for the arbiter. This solution is currently intended for deployment in the OpenShift Container Platform on-premises and in the same data center. This solution is not recommended for deployments stretching over multiple data centers. Instead, consider Metro-DR as a first option for no data loss DR solution deployed over multiple data centers with low latency networks. To know in detail how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Note You cannot enable Flexible scaling and Arbiter both at the same time as they have conflicting scaling logic. With Flexible scaling, you can add one node at a time to your OpenShift Data Foundation cluster. Whereas, in an Arbiter cluster, you need to add at least one node in each of the two data zones. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with a minimum configuration when the resource requirement for a standard deployment is not met. For more information, see the Resource requirements section in the Planning guide . Chapter 2. Deploy using dynamic storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by VMware vSphere (disk format: thin) provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Note Both internal and external OpenShift Data Foundation clusters are supported on VMware vSphere. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. 
For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. 
You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.3.1. Enabling key rotation when using KMS Security common practices require periodic encryption key rotation. You can enable key rotation when using KMS using this procedure. To enable key rotation, add the annotation keyrotation.csiaddons.openshift.io/schedule: <value> to either Namespace , StorageClass , or PersistentVolumeClaims (in order of precedence). <value> can be either @hourly , @daily , @weekly , @monthly , or @yearly . If <value> is empty, the default is @weekly . The below examples use @weekly . Important Key rotation is only supported for RBD backed volumes. Annotating Namespace Annotating StorageClass Annotating PersistentVolumeClaims 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . For VMs on VMware, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab . For more information, see Installing on vSphere . Optional: If you want to use thick-provisioned storage for flexibility, you must create a storage class with zeroedthick or eagerzeroedthick disk format. For information, see VMware vSphere object definition . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the Storage Class . By default, it is set to thin . If you have created a storage class with zeroedthick or eagerzeroedthick disk format for thick-provisioned storage, then that storage class is listed in addition to the default, thin storage class. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. 
Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Spread the worker nodes across three different physical nodes, racks, or failure domains for high availability. Use vCenter anti-affinity to align OpenShift Data Foundation rack labels with physical nodes and racks in the data center to avoid scheduling two worker nodes on the same physical chassis. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of the aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. Select the Taint nodes checkbox to make selected nodes dedicated for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . 
Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. Chapter 3. Deploy using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. 
This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Use this section to deploy OpenShift Data Foundation on VMware where OpenShift Container Platform is already installed. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the steps. Installing Local Storage Operator Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . 
Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating OpenShift Data Foundation cluster on VMware vSphere VMware vSphere supports the following three types of local storage: Virtual machine disk (VMDK) Raw device mapping (RDM) VMDirectPath I/O Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node to use local storage devices on VMware. Ensure that the disk type is SSD, which is the only supported disk type. For VMs on VMware vSphere, ensure the disk.EnableUUID option is set to TRUE . You need to have vCenter account privileges to configure the VMs. For more information, see Required vCenter account privileges . To set the disk.EnableUUID option, use the Advanced option of the VM Options in the Customize hardware tab. For more information, see Installing on vSphere . Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Select one of the following: Disks on all nodes to use the available disks that match the selected filters on all nodes. Disks on selected nodes to use the available disks that match the selected filters only on selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with 3 or more nodes is spread across fewer than the minimum requirement of 3 availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. 
For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one of the following Encryption level : Cluster-wide encryption to encrypt the entire cluster (block and file). StorageClass encryption to create encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. 
Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery (Regional-DR only) checkbox, else click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . 
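The same flexible scaling check can be scripted from the command line; for example, assuming the default StorageCluster name ocs-storagecluster:

# Print the flexibleScaling value from spec and the failureDomain value from status
oc get storagecluster ocs-storagecluster -n openshift-storage \
  -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'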
Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. Chapter 4. Verifying OpenShift Data Foundation deployment Use this section to verify that OpenShift Data Foundation is deployed correctly. 4.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) rook-ceph-osd-prepare-ocs-deviceset-* (1 pod for each device) 4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. 
Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 4.4. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw Chapter 5. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. You can deploy the Multicloud Object Gateway component either using dynamic storage devices or using the local storage devices. 5.1. Deploy standalone Multicloud Object Gateway using dynamic storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 5.1.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. 
Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.1.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . 
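If you prefer to confirm the result from the command line, the NooBaa resources created for the standalone Multicloud Object Gateway can be inspected directly; for example:

# The NooBaa system and its default backing store should reach the Ready phase
oc get noobaa -n openshift-storage
oc get backingstore -n openshift-storage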
Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) 5.2. Deploy standalone Multicloud Object Gateway using local storage devices Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 5.2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 5.2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. 
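As a sketch, the label and taint typically used for dedicated storage nodes are shown below; confirm the exact keys against the guide referenced in the next paragraph:

# Label the node so that it can be selected for OpenShift Data Foundation
oc label node <node-name> cluster.ocs.openshift.io/openshift-storage=""

# Taint the node so that only OpenShift Data Foundation workloads are scheduled on it
oc adm taint node <node-name> node.ocs.openshift.io/storage="true":NoSchedule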
For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 5.2.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Create a new StorageClass using the local storage devices option. Click . Note You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. From the available list of Disk Type , select SSD/NVMe . Expand the Advanced section and set the following options: Volume Mode Filesystem is selected by default. Always ensure that the Filesystem is selected for Volume Mode . Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. 
If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads -> Pods from the OpenShift Web Console. 
Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the model's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 7. Uninstalling OpenShift Data Foundation 7.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
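As background for that procedure, uninstall behavior is typically controlled through annotations on the StorageCluster resource. The keys and values below are the commonly documented ones and are shown only as a sketch; confirm them against the knowledgebase article before use:

# Decide whether user data is deleted or retained when the cluster is uninstalled
oc annotate storagecluster ocs-storagecluster -n openshift-storage uninstall.ocs.openshift.io/cleanup-policy="delete" --overwrite

# Decide whether the uninstall waits for consumers to be removed (graceful) or proceeds regardless (forced)
oc annotate storagecluster ocs-storagecluster -n openshift-storage uninstall.ocs.openshift.io/mode="graceful" --overwrite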
|
[
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get namespace default NAME STATUS AGE default Active 5d2h",
"oc annotate namespace default \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" namespace/default annotated",
"oc get storageclass rbd-sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE rbd-sc rbd.csi.ceph.com Delete Immediate true 5d2h",
"oc annotate storageclass rbd-sc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" storageclass.storage.k8s.io/rbd-sc annotated",
"oc get pvc data-pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-pvc Bound pvc-f37b8582-4b04-4676-88dd-e1b95c6abf74 1Gi RWO default 20h",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=@weekly\" persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642663516 @weekly 3s",
"oc annotate pvc data-pvc \"keyrotation.csiaddons.openshift.io/schedule=*/1 * * * *\" --overwrite=true persistentvolumeclaim/data-pvc annotated",
"oc get encryptionkeyrotationcronjobs.csiaddons.openshift.io NAME SCHEDULE SUSPEND ACTIVE LASTSCHEDULE AGE data-pvc-1642664617 */1 * * * * 3s",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"oc annotate namespace openshift-storage openshift.io/node-selector="
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html-single/deploying_openshift_data_foundation_on_vmware_vsphere/deploy-using-local-storage-devices-vmware
|
Chapter 5. Systems lifecycle in the inventory application
|
Chapter 5. Systems lifecycle in the inventory application A system is a Red Hat Enterprise Linux (RHEL) host that is managed by the Red Hat Insights inventory in the Red Hat Hybrid Cloud Console. System activity is automatically monitored by Red Hat. All systems registered with inventory follow a lifecycle that includes the following states: fresh , stale , and stale warning . The state that a system resides in depends on the last time it was reported by a data collector to the inventory application. Systems are automatically deleted from inventory if they do not report within a given time frame. The goal of the deletion mechanism is to maintain an up-to-date, accurate view of your inventory. Here is a description of each state: Fresh The default configuration requires systems to communicate with Red Hat daily. A system with the status of fresh is active and is regularly reported to the inventory application by one of the data collectors described in section 1.2. Most systems are in this state during typical operations. Stale A system with the status of stale has NOT been reported to the inventory application in the last day, which is equivalent to the last 26 hours. Stale warning A system with the status of stale warning has NOT been reported to the inventory application in the last 14 days. When it reaches this state, a system is flagged for automatic deletion. Once a system is removed from inventory, it no longer appears in the inventory application and Insights data analysis results are no longer available. 5.1. Determining system state in inventory There are two ways to determine which state a system is currently in. 5.1.1. Determining system state in inventory as a user with viewer access If you have Inventory Hosts viewer access, you can view the system state on the Systems page by using the following steps: Prerequisites You have Inventory Hosts viewer access. Procedure Navigate to the Red Hat Insights > RHEL > Inventory page. Click the Filter drop-down list, and select Status . Click the Filter by status drop-down, and choose the states that you want to include in your query. Click Reset filters to clear your query. 5.1.2. Determining system state in inventory as a user with administrator access If you have Inventory Hosts administrator access, you can get the system state of any system from the Dashboard by using the following steps: Prerequisites You have Inventory Hosts administrator access. Procedure Navigate to the Red Hat Insights for Red Hat Enterprise Linux dashboard page. Go to the top left of the screen to see the total number of systems that are registered with Insights for Red Hat Enterprise Linux. To the right of that total, you can see the number of stale systems and the number of systems to be removed . Click either: The stale systems link. The systems to be removed link, if applicable. This opens the inventory page where you can view more granular details about the system. 5.2. Modifying system staleness and deletion time limits in inventory By default, system states have the following time limits: Systems are labeled stale if they are not reported in one day. A warning icon displays at the top of the Systems page in the Last seen: field. Systems are labeled stale warning if they are not reported within 7 days. In this case, the Last seen: field turns red. Systems that are not reported in 14 days are deleted.
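The same state information can also be retrieved programmatically. The following sketch uses the host-based inventory REST API with a staleness filter; the endpoint, query parameters, and token-based authentication shown here are assumptions to verify against the inventory API documentation for your account:

# List hosts that are currently stale or in the stale warning state
curl -s -H "Authorization: Bearer <token>" \
  "https://console.redhat.com/api/inventory/v1/hosts?staleness=stale&staleness=stale_warning"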
There are situations where a system is offline for an extended time period but is still in use. For example, test environments are often kept offline except when testing. Edge devices, submarines, or Internet of Things (IoT) devices can be out of range of communication for extended time periods. You can modify the system staleness and deletion time limits to ensure that systems that are offline but still active do not get deleted. Staleness and deletion settings apply to all of your conventional and immutable systems. Prerequisites You are logged into the Red Hat Hybrid Cloud Console as a user with the Organization staleness and deletion administrator role. Procedure On the Red Hat Hybrid Cloud Console main page, click RHEL in the Red Hat Insights tile. In the left navigation bar, click Inventory > System Configuration > Staleness and Deletion . The Staleness and Deletion page displays the current settings for system staleness, system stale warning, and system deletion for conventional systems. Optional: To manage the staleness and configuration settings for edge (immutable) systems, select the Immutable (OSTree) tab. To change these values, click Edit . The drop-down arrows next to each value are now enabled. Click the arrow next to the value that you want to change, and then select a new value. Note The system stale warning value must be less than the system deletion value. Optional: To revert to the default values for the organization, click Reset . Click Save . Note Setting the system deletion maximum time to less than the current maximum time deletes systems that have been stale for longer than the new maximum time.
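On a host that has been offline, the state returns to fresh only after the host reports in again. As a minimal sketch, running the Insights client on the host triggers an immediate collection and upload:

# Trigger an immediate check-in so that the host reports to inventory again
insights-client

# Confirm that the host is registered with Insights
insights-client --status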
| null |
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory/systems-lifecycle_user-access
|
Chapter 1. Overview of machine management
|
Chapter 1. Overview of machine management You can use machine management to flexibly work with underlying infrastructure such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere to manage the OpenShift Container Platform cluster. You can control the cluster and perform auto-scaling, such as scaling up and down the cluster based on specific workload policies. It is important to have a cluster that adapts to changing workloads. The OpenShift Container Platform cluster can horizontally scale up and down when the load increases or decreases. Machine management is implemented as a custom resource definition (CRD). A CRD object defines a new unique object Kind in the cluster and enables the Kubernetes API server to handle the object's entire lifecycle. The Machine API Operator provisions the following resources: MachineSet Machine ClusterAutoscaler MachineAutoscaler MachineHealthCheck 1.1. Machine API overview The Machine API is a combination of primary resources that are based on the upstream Cluster API project and custom OpenShift Container Platform resources. For OpenShift Container Platform 4.13 clusters, the Machine API performs all node host provisioning management actions after the cluster installation finishes. Because of this system, OpenShift Container Platform 4.13 offers an elastic, dynamic provisioning method on top of public or private cloud infrastructure. The two primary resources are: Machines A fundamental unit that describes the host for a node. A machine has a providerSpec specification, which describes the types of compute nodes that are offered for different cloud platforms. For example, a machine type for a compute node might define a specific machine type and required metadata. Machine sets MachineSet resources are groups of compute machines. Compute machine sets are to compute machines as replica sets are to pods. If you need more compute machines or must scale them down, you change the replicas field on the MachineSet resource to meet your compute need. Warning Control plane machines cannot be managed by compute machine sets. Control plane machine sets provide management capabilities for supported control plane machines that are similar to what compute machine sets provide for compute machines. For more information, see "Managing control plane machines". The following custom resources add more capabilities to your cluster: Machine autoscaler The MachineAutoscaler resource automatically scales compute machines in a cloud. You can set the minimum and maximum scaling boundaries for nodes in a specified compute machine set, and the machine autoscaler maintains that range of nodes. The MachineAutoscaler object takes effect after a ClusterAutoscaler object exists. Both ClusterAutoscaler and MachineAutoscaler resources are made available by the ClusterAutoscalerOperator object. Cluster autoscaler This resource is based on the upstream cluster autoscaler project. In the OpenShift Container Platform implementation, it is integrated with the Machine API by extending the compute machine set API. 
You can use the cluster autoscaler to manage your cluster in the following ways: Set cluster-wide scaling limits for resources such as cores, nodes, memory, and GPU Set the priority so that the cluster prioritizes pods and new nodes are not brought online for less important pods Set the scaling policy so that you can scale up nodes but not scale them down Machine health check The MachineHealthCheck resource detects when a machine is unhealthy, deletes it, and, on supported platforms, makes a new machine. In OpenShift Container Platform version 3.11, you could not roll out a multi-zone architecture easily because the cluster did not manage machine provisioning. Beginning with OpenShift Container Platform version 4.1, this process is easier. Each compute machine set is scoped to a single zone, so the installation program sends out compute machine sets across availability zones on your behalf. And then because your compute is dynamic, and in the face of a zone failure, you always have a zone for when you must rebalance your machines. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. The autoscaler provides best-effort balancing over the life of a cluster. Additional resources Machine phases and lifecycle 1.2. Managing compute machines As a cluster administrator, you can perform the following actions: Create a compute machine set for the following cloud providers: Alibaba Cloud AWS Azure Azure Stack Hub GCP IBM Cloud IBM Power Virtual Server Nutanix RHOSP RHV vSphere Create a machine set for a bare metal deployment: Creating a compute machine set on bare metal Manually scale a compute machine set by adding or removing a machine from the compute machine set. Modify a compute machine set through the MachineSet YAML configuration file. Delete a machine. Create infrastructure compute machine sets . Configure and deploy a machine health check to automatically fix damaged machines in a machine pool. 1.3. Managing control plane machines As a cluster administrator, you can perform the following actions: Update your control plane configuration with a control plane machine set for the following cloud providers: AWS Azure vSphere Configure and deploy a machine health check to automatically recover unhealthy control plane machines. 1.4. Applying autoscaling to an OpenShift Container Platform cluster You can automatically scale your OpenShift Container Platform cluster to ensure flexibility for changing workloads. To autoscale your cluster, you must first deploy a cluster autoscaler, and then deploy a machine autoscaler for each compute machine set. The cluster autoscaler increases and decreases the size of the cluster based on deployment needs. The machine autoscaler adjusts the number of machines in the compute machine sets that you deploy in your OpenShift Container Platform cluster. 1.5. Adding compute machines on user-provisioned infrastructure User-provisioned infrastructure is an environment where you can deploy infrastructure such as compute, network, and storage resources that host the OpenShift Container Platform. You can add compute machines to a cluster on user-provisioned infrastructure during or after the installation process. 1.6. 
Adding RHEL compute machines to your cluster As a cluster administrator, you can perform the following actions: Add Red Hat Enterprise Linux (RHEL) compute machines , also known as worker machines, to a user-provisioned infrastructure cluster or an installation-provisioned infrastructure cluster. Add more Red Hat Enterprise Linux (RHEL) compute machines to an existing cluster.
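As a short illustration of the day-2 operations described in this chapter, the following sketch manually scales a compute machine set and then attaches a machine autoscaler to it. The machine set name is a placeholder, and the MachineAutoscaler object only takes effect after a ClusterAutoscaler object exists:

# List the compute machine sets managed by the Machine API
oc get machinesets -n openshift-machine-api

# Manually scale a compute machine set
oc scale machineset <machineset-name> -n openshift-machine-api --replicas=3

# Attach a machine autoscaler to the same compute machine set
cat <<EOF | oc apply -f -
apiVersion: autoscaling.openshift.io/v1beta1
kind: MachineAutoscaler
metadata:
  name: <machineset-name>
  namespace: openshift-machine-api
spec:
  minReplicas: 1
  maxReplicas: 6
  scaleTargetRef:
    apiVersion: machine.openshift.io/v1beta1
    kind: MachineSet
    name: <machineset-name>
EOF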
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_management/overview-of-machine-management
|
probe::tcp.disconnect
|
probe::tcp.disconnect Name probe::tcp.disconnect - TCP socket disconnection Synopsis Values saddr A string representing the source IP address daddr A string representing the destination IP address flags TCP flags (for example, FIN) name Name of this probe sport TCP source port dport TCP destination port sock Network socket Context The process that disconnects the TCP connection
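A minimal usage sketch, assuming SystemTap and the matching kernel debuginfo packages are installed:

# Print one line for every TCP disconnect observed on the system
stap -e 'probe tcp.disconnect { printf("%s: %s:%d -> %s:%d flags=%d\n", execname(), saddr, sport, daddr, dport, flags) }'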
|
[
"tcp.disconnect"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-tcp-disconnect
|
Chapter 5. Creating a basic cluster on Red Hat OpenStack Platform
|
Chapter 5. Creating a basic cluster on Red Hat OpenStack Platform This procedure creates a high availability cluster on RHOSP with no fencing or resources configured. Prerequisites An RHOSP instance is configured for each HA cluster node Each HA cluster node is running RHEL 8.7 or later High Availability and RHOSP packages are installed on each node, as described in Installing the high availability and RHOSP packages and agents . Procedure On one of the cluster nodes, enter the following command to authenticate the pcs user hacluster . Specify the name of each node in the cluster. In this example, the nodes for the cluster are node01 , node02 , and node03 . Create the cluster. In this example, the cluster is named newcluster . Verification Enable the cluster. Start the cluster. The command's output indicates whether the cluster has started on each node.
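To confirm the result from any node, the cluster status can also be checked as follows:

# Show cluster membership and resource status; no resources or fencing devices are expected yet
pcs status

# Confirm that corosync sees all of the cluster nodes
pcs status corosync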
|
[
"pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized",
"pcs cluster setup newcluster node01 node02 node03 Synchronizing pcsd certificates on nodes node01, node02, node03... node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates... node02: Success node03: Success node01: Success",
"pcs cluster enable --all node01: Cluster Enabled node02: Cluster Enabled node03: Cluster Enabled",
"pcs cluster start --all node02: Starting Cluster... node03: Starting Cluster... node01: Starting Cluster"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/creating-a-basic-cluster-on-red-hat-openstack-platform_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
|
Chapter 2. Obtaining and modifying container images
|
Chapter 2. Obtaining and modifying container images A containerized overcloud requires access to a registry with the required container images. This chapter provides information on how to prepare the registry and your undercloud and overcloud configuration to use container images for Red Hat OpenStack Platform. 2.1. Preparing container images The overcloud installation requires an environment file to determine where to obtain container images and how to store them. Generate and customize this environment file that you can use to prepare your container images. Note If you need to configure specific container image versions for your overcloud, you must pin the images to a specific version. For more information, see Pinning container images for the overcloud . Procedure Log in to your undercloud host as the stack user. Generate the default container image preparation file: This command includes the following additional options: --local-push-destination sets the registry on the undercloud as the location for container images. This means that director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. Director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option. --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml . Note You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud. Modify the containers-prepare-parameter.yaml to suit your requirements. 2.2. Container image preparation parameters The default file for preparing your containers ( containers-prepare-parameter.yaml ) contains the ContainerImagePrepare heat parameter. This parameter defines a list of strategies for preparing a set of images: Each strategy accepts a set of sub-parameters that defines which images to use and what to do with the images. The following table contains information about the sub-parameters that you can use with each ContainerImagePrepare strategy: Parameter Description excludes List of regular expressions to exclude image names from a strategy. includes List of regular expressions to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. modify_append_tag String to append to the tag for the destination image. For example, if you pull an image with the tag 16.2.3-5.161 and set the modify_append_tag to -hotfix , the director tags the final image as 16.2.3-5.161-hotfix. modify_only_with_labels A dictionary of image labels that filter the images that you want to modify. If an image matches the labels defined, the director includes the image in the modification process. modify_role String of ansible role names to run during upload but before pushing the image to the destination registry. modify_vars Dictionary of variables to pass to modify_role . push_destination Defines the namespace of the registry that you want to push images to during the upload process. If set to true , the push_destination is set to the undercloud registry namespace using the hostname, which is the recommended method. If set to false , the push to a local registry does not occur and nodes pull images directly from the source. If set to a custom value, director pushes images to an external local registry. 
If you set this parameter to false in production environments while pulling images directly from Red Hat Container Catalog, all overcloud nodes will simultaneously pull the images from the Red Hat Container Catalog over your external connection, which can cause bandwidth issues. Only use false to pull directly from a Red Hat Satellite Server hosting the container images. If the push_destination parameter is set to false or is not defined and the remote registry requires authentication, set the ContainerImageRegistryLogin parameter to true and include the credentials with the ContainerImageRegistryCredentials parameter. pull_source The source registry from where to pull the original container images. set A dictionary of key: value definitions that define where to obtain the initial images. tag_from_label Use the value of specified container image metadata labels to create a tag for every image and pull that tagged image. For example, if you set tag_from_label: {version}-{release} , director uses the version and release labels to construct a new tag. For one container, version might be set to 16.2.3 and release might be set to 5.161 , which results in the tag 16.2.3-5.161. Director uses this parameter only if you have not defined tag in the set dictionary. Important When you push images to the undercloud, use push_destination: true instead of push_destination: UNDERCLOUD_IP:PORT . The push_destination: true method provides a level of consistency across both IPv4 and IPv6 addresses. The set parameter accepts a set of key: value definitions: Key Description ceph_image The name of the Ceph Storage container image. ceph_namespace The namespace of the Ceph Storage container image. ceph_tag The tag of the Ceph Storage container image. ceph_alertmanager_image ceph_alertmanager_namespace ceph_alertmanager_tag The name, namespace, and tag of the Ceph Storage Alert Manager container image. ceph_grafana_image ceph_grafana_namespace ceph_grafana_tag The name, namespace, and tag of the Ceph Storage Grafana container image. ceph_node_exporter_image ceph_node_exporter_namespace ceph_node_exporter_tag The name, namespace, and tag of the Ceph Storage Node Exporter container image. ceph_prometheus_image ceph_prometheus_namespace ceph_prometheus_tag The name, namespace, and tag of the Ceph Storage Prometheus container image. name_prefix A prefix for each OpenStack service image. name_suffix A suffix for each OpenStack service image. namespace The namespace for each OpenStack service image. neutron_driver The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. Set to ovn to use OVN-based containers. tag Sets a specific tag for all images from the source. If not defined, director uses the Red Hat OpenStack Platform version number as the default value. This parameter takes precedence over the tag_from_label value. Note The container images use multi-stream tags based on the Red Hat OpenStack Platform version. This means that there is no longer a latest tag. 2.3. Guidelines for container image tagging The Red Hat Container Registry uses a specific version format to tag all Red Hat OpenStack Platform container images. This format follows the label metadata for each container, which is version-release . version Corresponds to a major and minor version of Red Hat OpenStack Platform. These versions act as streams that contain one or more releases. 
release Corresponds to a release of a specific container image version within a version stream. For example, if the latest version of Red Hat OpenStack Platform is 16.2.3 and the release for the container image is 5.161 , then the resulting tag for the container image is 16.2.3-5.161. The Red Hat Container Registry also uses a set of major and minor version tags that link to the latest release for that container image version. For example, both 16.2 and 16.2.3 link to the latest release in the 16.2.3 container stream. If a new minor release of 16.2 occurs, the 16.2 tag links to the latest release for the new minor release stream while the 16.2.3 tag continues to link to the latest release within the 16.2.3 stream. The ContainerImagePrepare parameter contains two sub-parameters that you can use to determine which container image to download. These sub-parameters are the tag parameter within the set dictionary, and the tag_from_label parameter. Use the following guidelines to determine whether to use tag or tag_from_label . The default value for tag is the major version for your OpenStack Platform version. For this version it is 16.2. This always corresponds to the latest minor version and release. To change to a specific minor version for OpenStack Platform container images, set the tag to a minor version. For example, to change to 16.2.2, set tag to 16.2.2. When you set tag , director always downloads the latest container image release for the version set in tag during installation and updates. If you do not set tag , director uses the value of tag_from_label in conjunction with the latest major version. The tag_from_label parameter generates the tag from the label metadata of the latest container image release it inspects from the Red Hat Container Registry. For example, the labels for a certain container might use the following version and release metadata: The default value for tag_from_label is {version}-{release} , which corresponds to the version and release metadata labels for each container image. For example, if a container image has 16.2.3 set for version and 5.161 set for release , the resulting tag for the container image is 16.2.3-5.161. The tag parameter always takes precedence over the tag_from_label parameter. To use tag_from_label , omit the tag parameter from your container preparation configuration. A key difference between tag and tag_from_label is that director uses tag to pull an image only based on major or minor version tags, which the Red Hat Container Registry links to the latest image release within a version stream, while director uses tag_from_label to perform a metadata inspection of each container image so that director generates a tag and pulls the corresponding image. 2.4. Obtaining container images from private registries The registry.redhat.io registry requires authentication to access and pull images. To authenticate with registry.redhat.io and other private registries, include the ContainerImageRegistryCredentials and ContainerImageRegistryLogin parameters in your containers-prepare-parameter.yaml file. ContainerImageRegistryCredentials Some container image registries require authentication to access images. In this situation, use the ContainerImageRegistryCredentials parameter in your containers-prepare-parameter.yaml environment file. The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry URL. Each private registry URL uses its own key and value pair to define the username (key) and password (value). 
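For example, a minimal containers-prepare-parameter.yaml entry of this shape (the my_username and my_password values are placeholders for your own credentials; the full example also appears in the command listing at the end of this section):

parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      namespace: registry.redhat.io/
  ContainerImageRegistryCredentials:
    registry.redhat.io:
      my_username: my_password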
This provides a method to specify credentials for multiple private registries. In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. To specify authentication details for multiple registries, set multiple key-pair values for each registry in ContainerImageRegistryCredentials : Important The default ContainerImagePrepare parameter pulls container images from registry.redhat.io , which requires authentication. For more information, see Red Hat Container Registry Authentication . ContainerImageRegistryLogin The ContainerImageRegistryLogin parameter is used to control whether an overcloud node system needs to log in to the remote registry to fetch the container images. This situation occurs when you want the overcloud nodes to pull images directly, rather than use the undercloud to host images. You must set ContainerImageRegistryLogin to true if push_destination is set to false or not used for a given strategy. However, if the overcloud nodes do not have network connectivity to the registry hosts defined in ContainerImageRegistryCredentials and you set ContainerImageRegistryLogin to true , the deployment might fail when trying to perform a login. If the overcloud nodes do not have network connectivity to the registry hosts defined in the ContainerImageRegistryCredentials , set push_destination to true and ContainerImageRegistryLogin to false so that the overcloud nodes pull images from the undercloud. 2.5. Layering image preparation entries The value of the ContainerImagePrepare parameter is a YAML list. This means that you can specify multiple entries. The following example demonstrates two entries where director uses the latest version of all images except for the nova-api image, which uses the version tagged with 16.2.1-hotfix : The includes and excludes parameters use regular expressions to control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must match the includes or excludes regular expression value to be considered a match. A similar technique is used if your Block Storage (cinder) driver requires a vendor supplied cinder-volume image known as a plugin. If your Block Storage driver requires a plugin, see Deploying a vendor plugin in the Advanced Overcloud Customization guide. 2.6. Modifying images during preparation It is possible to modify images during image preparation, and then immediately deploy the overcloud with modified images. Note Red Hat OpenStack Platform (RHOSP) director supports modifying images during preparation for RHOSP containers, not for Ceph containers. Scenarios for modifying images include: As part of a continuous integration pipeline where images are modified with the changes being tested before deployment. As part of a development workflow where local changes must be deployed for testing and development. When changes must be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes. To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the heat parameters to refer to the modified image. 
The Ansible role tripleo-modify-image conforms with the required role interface and provides the behaviour necessary for the modify use cases. Control the modification with the modify-specific keys in the ContainerImagePrepare parameter: modify_role specifies the Ansible role to invoke for each image to modify. modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. Change modify_append_tag whenever you modify the image. modify_vars is a dictionary of Ansible variables to pass to the role. To select a use case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role. While developing and testing the ContainerImagePrepare entries that modify images, run the image prepare command without any additional options to confirm that the image is modified as you expect: Important To use the openstack tripleo container image prepare command, your undercloud must contain a running image-serve registry. As a result, you cannot run this command before a new undercloud installation because the image-serve registry will not be installed. You can run this command after a successful undercloud installation. 2.7. Updating existing packages on container images Note Red Hat OpenStack Platform (RHOSP) director supports updating existing packages on container images for RHOSP containers, not for Ceph containers. Procedure The following example ContainerImagePrepare entry updates all packages on the container images by using the dnf repository configuration of the undercloud host: 2.8. Installing additional RPM files to container images You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package that is not available through a package repository. Note Red Hat OpenStack Platform (RHOSP) director supports installing additional RPM files to container images for RHOSP containers, not for Ceph containers. Note When you modify container images in existing deployments, you must then perform a minor update to apply the changes to your overcloud. For more information, see Keeping Red Hat OpenStack Platform Updated . Procedure The following example ContainerImagePrepare entry installs some hotfix packages only on the nova-compute image: 2.9. Modifying container images with a custom Dockerfile You can specify a directory that contains a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives. Note Red Hat OpenStack Platform (RHOSP) director supports modifying container images with a custom Dockerfile for RHOSP containers, not for Ceph containers. Procedure The following example runs the custom Dockerfile on the nova-compute image: The following example shows the /home/stack/nova-custom/Dockerfile file. After you run any USER root directives, you must switch back to the original image default user: 2.10. Preparing a Satellite server for container images Red Hat Satellite 6 offers registry synchronization capabilities. This provides a method to pull multiple images into a Satellite server and manage them as part of an application life cycle. The Satellite also acts as a registry for other container-enabled systems to use.
For more information about managing container images, see Managing Container Images in the Red Hat Satellite 6 Content Management Guide . The examples in this procedure use the hammer command line tool for Red Hat Satellite 6 and an example organization called ACME . Substitute this organization for your own Satellite 6 organization. Note This procedure requires authentication credentials to access container images from registry.redhat.io . Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. For more information, see "Red Hat Container Registry Authentication" . Procedure Create a list of all container images: If you plan to install Ceph and enable the Ceph Dashboard, you need the following ose-prometheus containers: Copy the satellite_images file to a system that contains the Satellite 6 hammer tool. Alternatively, use the instructions in the Hammer CLI Guide to install the hammer tool to the undercloud. Run the following hammer command to create a new product ( OSP Containers ) in your Satellite organization: This custom product will contain your images. Add the overcloud container images from the satellite_images file: Add the Ceph Storage 4 container image: Note If you want to install the Ceph dashboard, include --name rhceph-4-dashboard-rhel8 in the hammer repository create command: Synchronize the container images: Wait for the Satellite server to complete synchronization. Note Depending on your configuration, hammer might ask for your Satellite server username and password. You can configure hammer to automatically login using a configuration file. For more information, see the Authentication section in the Hammer CLI Guide . If your Satellite 6 server uses content views, create a new content view version to incorporate the images and promote it along environments in your application life cycle. This largely depends on how you structure your application lifecycle. For example, if you have an environment called production in your lifecycle and you want the container images to be available in that environment, create a content view that includes the container images and promote that content view to the production environment. For more information, see Managing Content Views . Check the available tags for the base image: This command displays tags for the OpenStack Platform container images within a content view for a particular environment. Return to the undercloud and generate a default environment file that prepares images using your Satellite server as a source. Run the following example command to generate the environment file: --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images for the undercloud. In this case, the name of the file is containers-prepare-parameter.yaml . Edit the containers-prepare-parameter.yaml file and modify the following parameters: push_destination - Set this to true or false depending on your chosen container image management strategy. If you set this parameter to false , the overcloud nodes pull images directly from the Satellite. If you set this parameter to true , the director pulls the images from the Satellite to the undercloud registry and the overcloud pulls the images from the undercloud registry. namespace - The URL and port of the registry on the Satellite server. The default registry port on Red Hat Satellite is 443. 
name_prefix - The prefix is based on a Satellite 6 convention. This differs depending on whether you use content views: If you use content views, the structure is [org]-[environment]-[content view]-[product]- . For example: acme-production-myosp16-osp_containers- . If you do not use content views, the structure is [org]-[product]- . For example: acme-osp_containers- . ceph_namespace , ceph_image , ceph_tag - If you use Ceph Storage, include these additional parameters to define the Ceph Storage container image location. Note that ceph_image now includes a Satellite-specific prefix. This prefix is the same value as the name_prefix option. The following example environment file contains Satellite-specific parameters: Note To use a specific container image version stored on your Red Hat Satellite Server, set the tag key-value pair to the specific version in the set dictionary. For example, to use the 16.2.2 image stream, set tag: 16.2.2 in the set dictionary. You must define the containers-prepare-parameter.yaml environment file in the undercloud.conf configuration file, otherwise the undercloud uses the default values:
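For example, the undercloud.conf reference takes the following form (the same line appears in the command listing below):

container_images_file = /home/stack/containers-prepare-parameter.yaml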
|
[
"sudo openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml",
"parameter_defaults: ContainerImagePrepare: - (strategy one) - (strategy two) - (strategy three)",
"parameter_defaults: ContainerImagePrepare: - set: tag: 16.2",
"parameter_defaults: ContainerImagePrepare: - set: tag: 16.2.2",
"parameter_defaults: ContainerImagePrepare: - set: # tag: 16.2 tag_from_label: '{version}-{release}'",
"\"Labels\": { \"release\": \"5.161\", \"version\": \"16.2.3\", }",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ - push_destination: true set: namespace: registry.internalsite.com/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' registry.internalsite.com: myuser2: '0th3rp@55w0rd!' '192.0.2.1:8787': myuser3: '@n0th3rp@55w0rd!'",
"parameter_defaults: ContainerImagePrepare: - push_destination: false set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: true",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: false",
"parameter_defaults: ContainerImagePrepare: - tag_from_label: \"{version}-{release}\" push_destination: true excludes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel8 name_prefix: openstack- name_suffix: '' tag:16.2 - push_destination: true includes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel8 tag: 16.2.1-hotfix",
"sudo openstack tripleo container image prepare -e ~/containers-prepare-parameter.yaml",
"ContainerImagePrepare: - push_destination: true modify_role: tripleo-modify-image modify_append_tag: \"-updated\" modify_vars: tasks_from: yum_update.yml compare_host_packages: true yum_repos_dir_path: /etc/yum.repos.d",
"ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: rpm_install.yml rpms_path: /home/stack/nova-hotfix-pkgs",
"ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: modify_image.yml modify_dir_path: /home/stack/nova-custom",
"FROM registry.redhat.io/rhosp-rhel8/openstack-nova-compute:latest USER \"root\" COPY customize.sh /tmp/ RUN /tmp/customize.sh USER \"nova\"",
"sudo podman search --limit 1000 \"registry.redhat.io/rhosp-rhel8/openstack\" --format=\"{{ .Name }}\" | sort > satellite_images sudo podman search --limit 1000 \"registry.redhat.io/rhceph\" | grep rhceph-4-dashboard-rhel8 sudo podman search --limit 1000 \"registry.redhat.io/rhceph\" | grep rhceph-4-rhel8 sudo podman search --limit 1000 \"registry.redhat.io/openshift\" | grep ose-prometheus",
"registry.redhat.io/openshift4/ose-prometheus-node-exporter:v4.6 registry.redhat.io/openshift4/ose-prometheus:v4.6 registry.redhat.io/openshift4/ose-prometheus-alertmanager:v4.6",
"hammer product create --organization \"ACME\" --name \"OSP Containers\"",
"while read IMAGE; do IMAGE_NAME=USD(echo USDIMAGE | cut -d\"/\" -f3 | sed \"s/openstack-//g\") ; IMAGE_NOURL=USD(echo USDIMAGE | sed \"s/registry.redhat.io\\///g\") ; hammer repository create --organization \"ACME\" --product \"OSP Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name USDIMAGE_NOURL --upstream-username USERNAME --upstream-password PASSWORD --name USDIMAGE_NAME ; done < satellite_images",
"hammer repository create --organization \"ACME\" --product \"OSP Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name rhceph/rhceph-4-rhel8 --upstream-username USERNAME --upstream-password PASSWORD --name rhceph-4-rhel8",
"hammer repository create --organization \"ACME\" --product \"OSP Containers\" --content-type docker --url https://registry.redhat.io --docker-upstream-name rhceph/rhceph-4-dashboard-rhel8 --upstream-username USERNAME --upstream-password PASSWORD --name rhceph-4-dashboard-rhel8",
"hammer product synchronize --organization \"ACME\" --name \"OSP Containers\"",
"hammer docker tag list --repository \"base\" --organization \"ACME\" --lifecycle-environment \"production\" --product \"OSP Containers\"",
"sudo openstack tripleo container image prepare default --output-env-file containers-prepare-parameter.yaml",
"parameter_defaults: ContainerImagePrepare: - push_destination: false set: ceph_image: acme-production-myosp16_1-osp_containers-rhceph-4 ceph_namespace: satellite.example.com:443 ceph_tag: latest name_prefix: acme-production-myosp16_1-osp_containers- name_suffix: '' namespace: satellite.example.com:443 neutron_driver: null tag: '16.2'",
"container_images_file = /home/stack/containers-prepare-parameter.yaml"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/transitioning_to_containerized_services/assembly_obtaining-and-modifying-container-images
|
Chapter 4. Remote health monitoring with connected clusters
|
Chapter 4. Remote health monitoring with connected clusters 4.1. About remote health monitoring OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. The data that is provided to Red Hat enables the benefits outlined in this document. A cluster that reports data to Red Hat through Telemetry and the Insights Operator is considered a connected cluster . Telemetry is the term that Red Hat uses to describe the information being sent to Red Hat by the OpenShift Container Platform Telemeter Client. Lightweight attributes are sent from connected clusters to Red Hat to enable subscription management automation, monitor the health of clusters, assist with support, and improve customer experience. The Insights Operator gathers OpenShift Container Platform configuration data and sends it to Red Hat. The data is used to produce insights about potential issues that a cluster might be exposed to. These insights are communicated to cluster administrators on OpenShift Cluster Manager . More information is provided in this document about these two processes. Telemetry and Insights Operator benefits Telemetry and the Insights Operator enable the following benefits for end-users: Enhanced identification and resolution of issues . Events that might seem normal to an end-user can be observed by Red Hat from a broader perspective across a fleet of clusters. Some issues can be more rapidly identified from this point of view and resolved without an end-user needing to open a support case or file a Jira issue . Advanced release management . OpenShift Container Platform offers the candidate , fast , and stable release channels, which enable you to choose an update strategy. The graduation of a release from fast to stable is dependent on the success rate of updates and on the events seen during upgrades. With the information provided by connected clusters, Red Hat can improve the quality of releases to stable channels and react more rapidly to issues found in the fast channels. Targeted prioritization of new features and functionality . The data collected provides insights about which areas of OpenShift Container Platform are used most. With this information, Red Hat can focus on developing the new features and functionality that have the greatest impact for our customers. A streamlined support experience . You can provide a cluster ID for a connected cluster when creating a support ticket on the Red Hat Customer Portal . This enables Red Hat to deliver a streamlined support experience that is specific to your cluster, by using the connected information. This document provides more information about that enhanced support experience. Predictive analytics . The insights displayed for your cluster on OpenShift Cluster Manager are enabled by the information collected from connected clusters. Red Hat is investing in applying deep learning, machine learning, and artificial intelligence automation to help identify issues that OpenShift Container Platform clusters are exposed to. 4.1.1. About Telemetry Telemetry sends a carefully chosen subset of the cluster monitoring metrics to Red Hat. The Telemeter Client fetches the metrics values every four minutes and thirty seconds and uploads the data to Red Hat. These metrics are described in this document. This stream of data is used by Red Hat to monitor the clusters in real-time and to react as necessary to problems that impact our customers. 
It also allows Red Hat to roll out OpenShift Container Platform upgrades to customers to minimize service impact and continuously improve the upgrade experience. This debugging information is available to Red Hat Support and Engineering teams with the same restrictions as accessing data reported through support cases. All connected cluster information is used by Red Hat to help make OpenShift Container Platform better and more intuitive to use. Additional resources See the OpenShift Container Platform update documentation for more information about updating or upgrading a cluster. 4.1.1.1. Information collected by Telemetry The following information is collected by Telemetry: 4.1.1.1.1. System information Version information, including the OpenShift Container Platform cluster version and installed update details that are used to determine update version availability Update information, including the number of updates available per cluster, the channel and image repository used for an update, update progress information, and the number of errors that occur in an update The unique random identifier that is generated during an installation Configuration details that help Red Hat Support to provide beneficial support for customers, including node configuration at the cloud infrastructure level, hostnames, IP addresses, Kubernetes pod names, namespaces, and services The OpenShift Container Platform framework components installed in a cluster and their condition and status Events for all namespaces listed as "related objects" for a degraded Operator Information about degraded software Information about the validity of certificates The name of the provider platform that OpenShift Container Platform is deployed on and the data center location 4.1.1.1.2. Sizing Information Sizing information about clusters, machine types, and machines, including the number of CPU cores and the amount of RAM used for each The number of etcd members and the number of objects stored in the etcd cluster Number of application builds by build strategy type 4.1.1.1.3. Usage information Usage information about components, features, and extensions Usage details about Technology Previews and unsupported configurations Telemetry does not collect identifying information such as usernames or passwords. Red Hat does not intend to collect personal information. If Red Hat discovers that personal information has been inadvertently received, Red Hat will delete such information. To the extent that any telemetry data constitutes personal data, please refer to the Red Hat Privacy Statement for more information about Red Hat's privacy practices. Additional resources See Showing data collected by Telemetry for details about how to list the attributes that Telemetry gathers from Prometheus in OpenShift Container Platform. See the upstream cluster-monitoring-operator source code for a list of the attributes that Telemetry gathers from Prometheus. Telemetry is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2. About the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry. Users of OpenShift Container Platform can display the report of each cluster in the Insights Advisor service on Red Hat Hybrid Cloud Console. 
If any issues have been identified, Insights provides further details and, if available, steps on how to solve a problem. The Insights Operator does not collect identifying information, such as user names, passwords, or certificates. See Red Hat Insights Data & Application Security for information about Red Hat Insights data collection and controls. Red Hat uses all connected cluster information to: Identify potential cluster issues and provide a solution and preventive actions in the Insights Advisor service on Red Hat Hybrid Cloud Console Improve OpenShift Container Platform by providing aggregated and critical information to product and support teams Make OpenShift Container Platform more intuitive Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . 4.1.2.1. Information collected by the Insights Operator The following information is collected by the Insights Operator: General information about your cluster and its components to identify issues that are specific to your OpenShift Container Platform version and environment. Configuration files, such as the image registry configuration, of your cluster to determine incorrect settings and issues that are specific to parameters you set. Errors that occur in the cluster components. Progress information of running updates, and the status of any component upgrades. Details of the platform that OpenShift Container Platform is deployed on and the region that the cluster is located in. Cluster workload information transformed into discrete Secure Hash Algorithm (SHA) values, which allows Red Hat to assess workloads for security and version vulnerabilities without disclosing sensitive details. Workload information about the operating system and runtime environment, including runtime kinds, names, and versions, which you can optionally enable through the InsightsRuntimeExtractor feature gate. This data gives Red Hat a better understanding of how you use OpenShift Container Platform containers so that we can proactively help you make investment decisions to drive optimal utilization. Important InsightsRuntimeExtractor is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . If an Operator reports an issue, information is collected about core OpenShift Container Platform pods in the openshift-* and kube-* projects. This includes state, resource, security context, volume information, and more. Additional resources See Showing data collected by the Insights Operator for details about how to review the data that is collected by the Insights Operator. What data is being collected by the Insights Operator in OpenShift? Enabling features using feature gates The Insights Operator source code is available for review and contribution. See the Insights Operator upstream project for a list of the items collected by the Insights Operator. 4.1.3. Understanding Telemetry and Insights Operator data flow The Telemeter Client collects selected time series data from the Prometheus API.
The time series data is uploaded to api.openshift.com every four minutes and thirty seconds for processing. The Insights Operator gathers selected data from the Kubernetes API and the Prometheus API into an archive. The archive is uploaded to OpenShift Cluster Manager every two hours for processing. The Insights Operator also downloads the latest Insights analysis from OpenShift Cluster Manager . This is used to populate the Insights status pop-up that is included in the Overview page in the OpenShift Container Platform web console. All of the communication with Red Hat occurs over encrypted channels by using Transport Layer Security (TLS) and mutual certificate authentication. All of the data is encrypted in transit and at rest. Access to the systems that handle customer data is controlled through multi-factor authentication and strict authorization controls. Access is granted on a need-to-know basis and is limited to required operations. Telemetry and Insights Operator data flow Additional resources See About OpenShift Container Platform monitoring for more information about the OpenShift Container Platform monitoring stack. See Configuring your firewall for details about configuring a firewall and enabling endpoints for Telemetry and Insights 4.1.4. Additional details about how remote health monitoring data is used The information collected to enable remote health monitoring is detailed in Information collected by Telemetry and Information collected by the Insights Operator . As further described in the preceding sections of this document, Red Hat collects data about your use of the Red Hat Product(s) for purposes such as providing support and upgrades, optimizing performance or configuration, minimizing service impacts, identifying and remediating threats, troubleshooting, improving the offerings and user experience, responding to issues, and for billing purposes if applicable. Collection safeguards Red Hat employs technical and organizational measures designed to protect the telemetry and configuration data. Sharing Red Hat may share the data collected through Telemetry and the Insights Operator internally within Red Hat to improve your user experience. Red Hat may share telemetry and configuration data with its business partners in an aggregated form that does not identify customers to help the partners better understand their markets and their customers' use of Red Hat offerings or to ensure the successful integration of products jointly supported by those partners. Third parties Red Hat may engage certain third parties to assist in the collection, analysis, and storage of the Telemetry and configuration data. User control / enabling and disabling telemetry and configuration data collection You may disable OpenShift Container Platform Telemetry and the Insights Operator by following the instructions in Opting out of remote health reporting . 4.2. Showing data collected by remote health monitoring As an administrator, you can review the metrics collected by Telemetry and the Insights Operator. 4.2.1. Showing data collected by Telemetry You can view the cluster and components time series data captured by Telemetry. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have access to the cluster as a user with the cluster-admin role or the cluster-monitoring-view role. Procedure Log in to a cluster. 
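For example, one way to log in from the command line is with a token-based oc login (the token and API server URL shown here are placeholders, not values from this document):

oc login --token=<token> --server=https://api.<cluster_domain>:6443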
Run the following command, which queries a cluster's Prometheus service and returns the full set of time series data captured by Telemetry: Note The following example contains some values that are specific to OpenShift Container Platform on AWS. USD curl -G -k -H "Authorization: Bearer USD(oc whoami -t)" \ https://USD(oc get route prometheus-k8s-federate -n \ openshift-monitoring -o jsonpath="{.spec.host}")/federate \ --data-urlencode 'match[]={__name__=~"cluster:usage:.*"}' \ --data-urlencode 'match[]={__name__="count:up0"}' \ --data-urlencode 'match[]={__name__="count:up1"}' \ --data-urlencode 'match[]={__name__="cluster_version"}' \ --data-urlencode 'match[]={__name__="cluster_version_available_updates"}' \ --data-urlencode 'match[]={__name__="cluster_version_capability"}' \ --data-urlencode 'match[]={__name__="cluster_operator_up"}' \ --data-urlencode 'match[]={__name__="cluster_operator_conditions"}' \ --data-urlencode 'match[]={__name__="cluster_version_payload"}' \ --data-urlencode 'match[]={__name__="cluster_installer"}' \ --data-urlencode 'match[]={__name__="cluster_infrastructure_provider"}' \ --data-urlencode 'match[]={__name__="cluster_feature_set"}' \ --data-urlencode 'match[]={__name__="instance:etcd_object_counts:sum"}' \ --data-urlencode 'match[]={__name__="ALERTS",alertstate="firing"}' \ --data-urlencode 'match[]={__name__="code:apiserver_request_total:rate:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:capacity_memory_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="cluster:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="openshift:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="openshift:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="workload:cpu_usage_cores:sum"}' \ --data-urlencode 'match[]={__name__="workload:memory_usage_bytes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:virt_platform_nodes:sum"}' \ --data-urlencode 'match[]={__name__="cluster:node_instance_type_count:sum"}' \ --data-urlencode 'match[]={__name__="cnv:vmi_status_running:count"}' \ --data-urlencode 'match[]={__name__="cluster:vmi_request_cpu_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_cores:sum"}' \ --data-urlencode 'match[]={__name__="node_role_os_version_machine:cpu_capacity_sockets:sum"}' \ --data-urlencode 'match[]={__name__="subscription_sync_total"}' \ --data-urlencode 'match[]={__name__="olm_resolution_duration_seconds"}' \ --data-urlencode 'match[]={__name__="csv_succeeded"}' \ --data-urlencode 'match[]={__name__="csv_abnormal"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kubelet_volume_stats_used_bytes:provisioner:sum"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_cluster_total_used_raw_bytes"}' \ --data-urlencode 'match[]={__name__="ceph_health_status"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_total_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_raw_capacity_used_bytes"}' \ --data-urlencode 'match[]={__name__="odf_system_health_status"}' \ --data-urlencode 'match[]={__name__="job:ceph_osd_metadata:count"}' \ --data-urlencode 'match[]={__name__="job:kube_pv:count"}' \ --data-urlencode 
'match[]={__name__="job:odf_system_pvs:count"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_pools_iops_bytes:total"}' \ --data-urlencode 'match[]={__name__="job:ceph_versions_running:count"}' \ --data-urlencode 'match[]={__name__="job:noobaa_total_unhealthy_buckets:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_bucket_count:sum"}' \ --data-urlencode 'match[]={__name__="job:noobaa_total_object_count:sum"}' \ --data-urlencode 'match[]={__name__="odf_system_bucket_count", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="odf_system_objects_total", system_type="OCS", system_vendor="Red Hat"}' \ --data-urlencode 'match[]={__name__="noobaa_accounts_num"}' \ --data-urlencode 'match[]={__name__="noobaa_total_usage"}' \ --data-urlencode 'match[]={__name__="console_url"}' \ --data-urlencode 'match[]={__name__="cluster:ovnkube_master_egress_routing_via_host:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_instances:max"}' \ --data-urlencode 'match[]={__name__="cluster:network_attachment_definition_enabled_instance_up:max"}' \ --data-urlencode 'match[]={__name__="cluster:ingress_controller_aws_nlb_active:sum"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:min"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:max"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:avg"}' \ --data-urlencode 'match[]={__name__="cluster:route_metrics_controller_routes_per_shard:median"}' \ --data-urlencode 'match[]={__name__="cluster:openshift_route_info:tls_termination:sum"}' \ --data-urlencode 'match[]={__name__="insightsclient_request_send_total"}' \ --data-urlencode 'match[]={__name__="cam_app_workload_migrations"}' \ --data-urlencode 'match[]={__name__="cluster:apiserver_current_inflight_requests:sum:max_over_time:2m"}' \ --data-urlencode 'match[]={__name__="cluster:alertmanager_integrations:max"}' \ --data-urlencode 'match[]={__name__="cluster:telemetry_selected_series:count"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_series:sum"}' \ --data-urlencode 'match[]={__name__="openshift:prometheus_tsdb_head_samples_appended_total:sum"}' \ --data-urlencode 'match[]={__name__="monitoring:container_memory_working_set_bytes:sum"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_series_added:topk3_sum1h"}' \ --data-urlencode 'match[]={__name__="namespace_job:scrape_samples_post_metric_relabeling:topk3"}' \ --data-urlencode 'match[]={__name__="monitoring:haproxy_server_http_responses_total:sum"}' \ --data-urlencode 'match[]={__name__="rhmi_status"}' \ --data-urlencode 'match[]={__name__="status:upgrading:version:rhoam_state:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_critical_alerts:max"}' \ --data-urlencode 'match[]={__name__="state:rhoam_warning_alerts:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_percentile:max"}' \ --data-urlencode 'match[]={__name__="rhoam_7d_slo_remaining_error_budget:max"}' \ --data-urlencode 'match[]={__name__="cluster_legacy_scheduler_policy"}' \ --data-urlencode 'match[]={__name__="cluster_master_schedulable"}' \ --data-urlencode 'match[]={__name__="che_workspace_status"}' \ --data-urlencode 'match[]={__name__="che_workspace_started_total"}' \ --data-urlencode 'match[]={__name__="che_workspace_failure_total"}' \ --data-urlencode 
'match[]={__name__="che_workspace_start_time_seconds_sum"}' \ --data-urlencode 'match[]={__name__="che_workspace_start_time_seconds_count"}' \ --data-urlencode 'match[]={__name__="cco_credentials_mode"}' \ --data-urlencode 'match[]={__name__="cluster:kube_persistentvolume_plugin_type_counts:sum"}' \ --data-urlencode 'match[]={__name__="visual_web_terminal_sessions_total"}' \ --data-urlencode 'match[]={__name__="acm_managed_cluster_info"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_vcenter_info:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_esxi_version_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:vsphere_node_hw_version_total:sum"}' \ --data-urlencode 'match[]={__name__="openshift:build_by_strategy:sum"}' \ --data-urlencode 'match[]={__name__="rhods_aggregate_availability"}' \ --data-urlencode 'match[]={__name__="rhods_total_users"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum"}' \ --data-urlencode 'match[]={__name__="instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile",quantile="0.99"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_storage_types"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_strategies"}' \ --data-urlencode 'match[]={__name__="jaeger_operator_instances_agent_strategies"}' \ --data-urlencode 'match[]={__name__="appsvcs:cores_by_product:sum"}' \ --data-urlencode 'match[]={__name__="nto_custom_profiles:count"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_configmap"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_secret"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_failures_total"}' \ --data-urlencode 'match[]={__name__="openshift_csi_share_mount_requests_total"}' \ --data-urlencode 'match[]={__name__="cluster:velero_backup_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:velero_restore_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_storage_info"}' \ --data-urlencode 'match[]={__name__="eo_es_redundancy_policy_info"}' \ --data-urlencode 'match[]={__name__="eo_es_defined_delete_namespaces_total"}' \ --data-urlencode 'match[]={__name__="eo_es_misconfigured_memory_resources_info"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_data_nodes_total:max"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_created_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:eo_es_documents_deleted_total:sum"}' \ --data-urlencode 'match[]={__name__="pod:eo_es_shards_total:max"}' \ --data-urlencode 'match[]={__name__="eo_es_cluster_management_state_info"}' \ --data-urlencode 'match[]={__name__="imageregistry:imagestreamtags_count:sum"}' \ --data-urlencode 'match[]={__name__="imageregistry:operations_count:sum"}' \ --data-urlencode 'match[]={__name__="log_logging_info"}' \ --data-urlencode 'match[]={__name__="log_collector_error_count_total"}' \ --data-urlencode 'match[]={__name__="log_forwarder_pipeline_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_input_info"}' \ --data-urlencode 'match[]={__name__="log_forwarder_output_info"}' \ --data-urlencode 
'match[]={__name__="cluster:log_collected_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:log_logged_bytes_total:sum"}' \ --data-urlencode 'match[]={__name__="cluster:kata_monitor_running_shim_count:sum"}' \ --data-urlencode 'match[]={__name__="platform:hypershift_hostedclusters:max"}' \ --data-urlencode 'match[]={__name__="platform:hypershift_nodepools:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_bucket_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_buckets_claims:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_resources:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_unhealthy_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_namespace_buckets:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_accounts:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_usage:max"}' \ --data-urlencode 'match[]={__name__="namespace:noobaa_system_health_status:max"}' \ --data-urlencode 'match[]={__name__="ocs_advanced_feature_usage"}' \ --data-urlencode 'match[]={__name__="os_image_url_override:sum"}' \ --data-urlencode 'match[]={__name__="openshift:openshift_network_operator_ipsec_state:info"}' 4.2.2. Showing data collected by the Insights Operator You can review the data that is collected by the Insights Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Find the name of the currently running pod for the Insights Operator: USD INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running) Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data The recent Insights Operator archives are now available in the insights-data directory. 4.3. Opting out of remote health reporting You may choose to opt out of reporting health and usage data for your cluster. To opt out of remote health reporting, you must: Modify the global cluster pull secret to disable remote health reporting. Update the cluster to use this modified pull secret. 4.3.1. Consequences of disabling remote health reporting In OpenShift Container Platform, customers can opt out of reporting usage information. However, connected clusters allow Red Hat to react more quickly to problems and better support our customers, as well as better understand how product upgrades impact clusters. Connected clusters also help to simplify the subscription and entitlement process and enable the OpenShift Cluster Manager service to provide an overview of your clusters and their subscription status. Red Hat strongly recommends leaving health and usage reporting enabled for pre-production and test clusters even if it is necessary to opt out for production clusters. This allows Red Hat to be a participant in qualifying OpenShift Container Platform in your environments and react more rapidly to product issues. Some of the consequences of opting out of having a connected cluster are: Red Hat will not be able to monitor the success of product upgrades or the health of your clusters without a support case being opened. Red Hat will not be able to use configuration data to better triage customer support cases and identify which configurations our customers find important. 
The OpenShift Cluster Manager will not show data about your clusters including health and usage information. Your subscription entitlement information must be manually entered via console.redhat.com without the benefit of automatic usage reporting. In restricted networks, Telemetry and Insights data can still be reported through appropriate configuration of your proxy. 4.3.2. Modifying the global cluster pull secret to disable remote health reporting You can modify your existing global cluster pull secret to disable remote health reporting. This disables both Telemetry and the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Download the global cluster pull secret to your local file system. USD oc extract secret/pull-secret -n openshift-config --to=. In a text editor, edit the .dockerconfigjson file that was downloaded. Remove the cloud.openshift.com JSON entry, for example: "cloud.openshift.com":{"auth":"<hash>","email":"<email_address>"} Save the file. You can now update your cluster to use this modified pull secret. 4.3.3. Registering your disconnected cluster Register your disconnected OpenShift Container Platform cluster on the Red Hat Hybrid Cloud Console so that your cluster is not impacted by the consequences listed in the section named "Consequences of disabling remote health reporting". Important By registering your disconnected cluster, you can continue to report your subscription usage to Red Hat. In turn, Red Hat can return accurate usage and capacity trends associated with your subscription, so that you can use the returned information to better organize subscription allocations across all of your resources. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . You can log in to the Red Hat Hybrid Cloud Console. Procedure Go to the Register disconnected cluster web page on the Red Hat Hybrid Cloud Console. Optional: To access the Register disconnected cluster web page from the home page of the Red Hat Hybrid Cloud Console, go to the Cluster List navigation menu item and then select the Register cluster button. Enter your cluster's details in the provided fields on the Register disconnected cluster page. From the Subscription settings section of the page, select the subscription settings that apply to your Red Hat subscription offering. To register your disconnected cluster, select the Register cluster button. Additional resources Consequences of disabling remote health reporting How does the subscriptions service show my subscription data? (Getting Started with the Subscription Service) 4.3.4. Updating the global cluster pull secret You can update the global pull secret for your cluster by either replacing the current pull secret or appending a new pull secret. This procedure is required when you store images in a registry other than the registry that was used during installation. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Optional: To append a new pull secret to the existing pull secret, complete the following steps: Enter the following command to download the pull secret: USD oc get secret/pull-secret -n openshift-config \ --template='{{index .data ".dockerconfigjson" | base64decode}}' \ ><pull_secret_location> 1 1 Provide the path to the pull secret file.
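For example, a concrete invocation of this download step might save the file as pull-secret.json in the current directory (the output path is illustrative only):

oc get secret/pull-secret -n openshift-config --template='{{index .data ".dockerconfigjson" | base64decode}}' > pull-secret.json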
Enter the following command to add the new pull secret: USD oc registry login --registry="<registry>" \ 1 --auth-basic="<username>:<password>" \ 2 --to=<pull_secret_location> 3 1 Provide the new registry. You can include multiple repositories within the same registry, for example: --registry="<registry/my-namespace/my-repository>" . 2 Provide the credentials of the new registry. 3 Provide the path to the pull secret file. Alternatively, you can perform a manual update to the pull secret file. Enter the following command to update the global pull secret for your cluster: USD oc set data secret/pull-secret -n openshift-config \ --from-file=.dockerconfigjson=<pull_secret_location> 1 1 Provide the path to the new pull secret file. This update is rolled out to all nodes, which can take some time depending on the size of your cluster. Note As of OpenShift Container Platform 4.7.4, changes to the global pull secret no longer trigger a node drain or reboot. 4.4. Enabling remote health reporting If you or your organization have disabled remote health reporting, you can enable this feature again. You can see that remote health reporting is disabled from the message "Insights not available" in the Status tile on the OpenShift Container Platform Web Console Overview page. To enable remote health reporting, you must Modify the global cluster pull secret with a new authorization token. Note Enabling remote health reporting enables both Insights Operator and Telemetry. 4.4.1. Modifying your global cluster pull secret to enable remote health reporting You can modify your existing global cluster pull secret to enable remote health reporting. If you have previously disabled remote health monitoring, you must first download a new pull secret with your console.openshift.com access token from Red Hat OpenShift Cluster Manager. Prerequisites Access to the cluster as a user with the cluster-admin role. Access to OpenShift Cluster Manager. Procedure Navigate to https://console.redhat.com/openshift/downloads . From Tokens Pull Secret , click Download . The file pull-secret.txt containing your cloud.openshift.com access token in JSON format downloads: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": " <email_address> " } } } Download the global cluster pull secret to your local file system. USD oc get secret/pull-secret -n openshift-config \ --template='{{index .data ".dockerconfigjson" | base64decode}}' \ > pull-secret Make a backup copy of your pull secret. USD cp pull-secret pull-secret-backup Open the pull-secret file in a text editor. Append the cloud.openshift.com JSON entry from pull-secret.txt into auths . Save the file. Update the secret in your cluster. USD oc set data secret/pull-secret -n openshift-config \ --from-file=.dockerconfigjson=pull-secret It may take several minutes for the secret to update and your cluster to begin reporting. Verification Navigate to the OpenShift Container Platform Web Console Overview page. Insights in the Status tile reports the number of issues found. 4.5. Using Insights to identify issues with your cluster Insights repeatedly analyzes the data Insights Operator sends. Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. 4.5.1. About Red Hat Insights Advisor for OpenShift Container Platform You can use Insights Advisor to assess and monitor the health of your OpenShift Container Platform clusters. 
Whether you are concerned about individual clusters, or with your whole infrastructure, it is important to be aware of the exposure of your cluster infrastructure to issues that can affect service availability, fault tolerance, performance, or security. Using cluster data collected by the Insights Operator, Insights repeatedly compares that data against a library of recommendations . Each recommendation is a set of cluster-environment conditions that can leave OpenShift Container Platform clusters at risk. The results of the Insights analysis are available in the Insights Advisor service on Red Hat Hybrid Cloud Console. In the Console, you can perform the following actions: See clusters impacted by a specific recommendation. Use robust filtering capabilities to refine your results to those recommendations. Learn more about individual recommendations, details about the risks they present, and get resolutions tailored to your individual clusters. Share results with other stakeholders. 4.5.2. Understanding Insights Advisor recommendations Insights Advisor bundles information about various cluster states and component configurations that can negatively affect the service availability, fault tolerance, performance, or security of your clusters. This information set is called a recommendation in Insights Advisor and includes the following information: Name: A concise description of the recommendation Added: When the recommendation was published to the Insights Advisor archive Category: Whether the issue has the potential to negatively affect service availability, fault tolerance, performance, or security Total risk: A value derived from the likelihood that the condition will negatively affect your infrastructure, and the impact on operations if that were to happen Clusters: A list of clusters on which a recommendation is detected Description: A brief synopsis of the issue, including how it affects your clusters Link to associated topics: More information from Red Hat about the issue 4.5.3. Displaying potential issues with your cluster This section describes how to display the Insights report in Insights Advisor on OpenShift Cluster Manager . Note that Insights repeatedly analyzes your cluster and shows the latest results. These results can change, for example, if you fix an issue or a new issue has been detected. Prerequisites Your cluster is registered on OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Depending on the result, Insights Advisor displays one of the following: No matching recommendations found , if Insights did not identify any issues. A list of issues Insights has detected, grouped by risk (low, moderate, important, and critical). No clusters yet , if Insights has not yet analyzed the cluster. The analysis starts shortly after the cluster has been installed, registered, and connected to the internet. If any issues are displayed, click the > icon in front of the entry for more details. Depending on the issue, the details can also contain a link to more information from Red Hat about the issue. 4.5.4. Displaying all Insights Advisor recommendations The Recommendations view, by default, only displays the recommendations that are detected on your clusters. However, you can view all of the recommendations in the advisor archive. Prerequisites Remote health reporting is enabled, which is the default. 
Your cluster is registered on Red Hat Hybrid Cloud Console. You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Click the X icons next to the Clusters Impacted and Status filters. You can now browse through all of the potential recommendations for your cluster. 4.5.5. Advisor recommendation filters The Insights Advisor service can return a large number of recommendations. To focus on your most critical recommendations, you can apply filters to the Advisor recommendations list to remove low-priority recommendations. By default, filters are set to only show enabled recommendations that are impacting one or more clusters. To view all or disabled recommendations in the Insights library, you can customize the filters. To apply a filter, select a filter type and then set its value based on the options that are available in the drop-down list. You can apply multiple filters to the list of recommendations. You can set the following filter types: Name: Search for a recommendation by name. Total risk: Select one or more values from Critical , Important , Moderate , and Low indicating the likelihood and the severity of a negative impact on a cluster. Impact: Select one or more values from Critical , High , Medium , and Low indicating the potential impact to the continuity of cluster operations. Likelihood: Select one or more values from Critical , High , Medium , and Low indicating the potential for a negative impact to a cluster if the recommendation comes to fruition. Category: Select one or more categories from Service Availability , Performance , Fault Tolerance , Security , and Best Practice to focus your attention on. Status: Click a radio button to show enabled recommendations (default), disabled recommendations, or all recommendations. Clusters impacted: Set the filter to show recommendations currently impacting one or more clusters, non-impacting recommendations, or all recommendations. Risk of change: Select one or more values from High , Moderate , Low , and Very low indicating the risk that the implementation of the resolution could have on cluster operations. 4.5.5.1. Filtering Insights advisor recommendations As an OpenShift Container Platform cluster manager, you can filter the recommendations that are displayed on the recommendations list. By applying filters, you can reduce the number of reported recommendations and concentrate on your highest priority recommendations. The following procedure demonstrates how to set and remove Category filters; however, the procedure is applicable to any of the filter types and respective values. Prerequisites You are logged in to the OpenShift Cluster Manager Hybrid Cloud Console . Procedure Go to Red Hat Hybrid Cloud Console OpenShift Advisor recommendations . In the main, filter-type drop-down list, select the Category filter type. Expand the filter-value drop-down list and select the checkbox next to each category of recommendation you want to view. Leave the checkboxes for unnecessary categories clear. Optional: Add additional filters to further refine the list. Only recommendations from the selected categories are shown in the list. Verification After applying filters, you can view the updated recommendations list. The applied filters are added to the default filters. 4.5.5.2. Removing filters from Insights Advisor recommendations You can apply multiple filters to the list of recommendations. When ready, you can remove them individually or completely reset them.
Removing filters individually Click the X icon next to each filter, including the default filters, to remove them individually. Removing all non-default filters Click Reset filters to remove only the filters that you applied, leaving the default filters in place. 4.5.6. Disabling Insights Advisor recommendations You can disable specific recommendations that affect your clusters, so that they no longer appear in your reports. It is possible to disable a recommendation for a single cluster or all of your clusters. Note Disabling a recommendation for all of your clusters also applies to any future clusters. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Optional: Use the Clusters Impacted and Status filters as needed. Disable an alert by using one of the following methods: To disable an alert: Click the Options menu for that alert, and then click Disable recommendation . Enter a justification note and click Save . To view the clusters affected by this alert before disabling the alert: Click the name of the recommendation to disable. You are directed to the single recommendation page. Review the list of clusters in the Affected clusters section. Click Actions Disable recommendation to disable the alert for all of your clusters. Enter a justification note and click Save . 4.5.7. Enabling a previously disabled Insights Advisor recommendation When a recommendation is disabled for all clusters, you no longer see the recommendation in the Insights Advisor. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. Your cluster is registered on OpenShift Cluster Manager . You are logged in to OpenShift Cluster Manager . Procedure Navigate to Advisor Recommendations on OpenShift Cluster Manager . Filter the recommendations to display the disabled recommendations: From the Status drop-down menu, select Status . From the Filter by status drop-down menu, select Disabled . Optional: Clear the Clusters impacted filter. Locate the recommendation to enable. Click the Options menu , and then click Enable recommendation . 4.5.8. Displaying the Insights status in the web console Insights repeatedly analyzes your cluster and you can display the status of identified potential issues of your cluster in the OpenShift Container Platform web console. This status shows the number of issues in the different categories and, for further details, links to the reports in OpenShift Cluster Manager . Prerequisites Your cluster is registered in OpenShift Cluster Manager . Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console. Procedure Navigate to Home Overview in the OpenShift Container Platform web console. Click Insights on the Status card. The pop-up window lists potential issues grouped by risk. Click the individual categories or View all recommendations in Insights Advisor to display more details. 4.6. Using the Insights Operator The Insights Operator periodically gathers configuration and component failure status and, by default, reports that data every two hours to Red Hat. This information enables Red Hat to assess configuration and deeper failure data than is reported through Telemetry.
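If you want a quick, CLI-level check that the Insights Operator is running and reporting before you look for results in Insights Advisor, the following sketch uses only standard oc commands. This is an illustrative aside, not part of the documented procedures, and the exact wording of the Operator log messages can vary between releases:
oc get clusteroperator insights
oc get pods -n openshift-insights
oc logs -n openshift-insights deployment/insights-operator | grep -i -E 'upload|report'
A healthy Operator reports Available=True on the insights ClusterOperator, and its log typically contains periodic messages about gathering data and uploading the archive.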
Users of OpenShift Container Platform can display the report in the Insights Advisor service on Red Hat Hybrid Cloud Console. Additional resources The Insights Operator is installed and enabled by default. If you need to opt out of remote health reporting, see Opting out of remote health reporting . For more information on using Insights Advisor to identify issues with your cluster, see Using Insights to identify issues with your cluster . 4.6.1. Configuring Insights Operator Insights Operator configuration is a combination of the default Operator configuration and the configuration that is stored in either the insights-config ConfigMap object in the openshift-insights namespace, OR in the support secret in the openshift-config namespace. When a ConfigMap object or support secret exists, the contained attribute values override the default Operator configuration values. If both a ConfigMap object and a support secret exist, the Operator reads the ConfigMap object. The ConfigMap object does not exist by default, so an OpenShift Container Platform cluster administrator must create it. ConfigMap object configuration structure This example of an insights-config ConfigMap object ( config.yaml configuration) shows configuration options using standard YAML formatting. Configurable attributes and default values The table below describes the available configuration attributes: Note The insights-config ConfigMap object follows standard YAML formatting, wherein child values are below the parent attribute and indented two spaces. For the Obfuscation attribute, enter values as bulleted children of the parent attribute. Table 4.1. Insights Operator configurable attributes Attribute name Description Value type Default value Obfuscation: - networking Enables the global obfuscation of IP addresses and the cluster domain name. Boolean false Obfuscation: - workload_names Obfuscate data coming from the Deployment Validation Operator if it is installed. Boolean false sca: interval Specifies the frequency of the simple content access entitlements download. Time interval 8h sca: disabled Disables the simple content access entitlements download. Boolean false alerting: disabled Disables Insights Operator alerts to the cluster Prometheus instance. Boolean false httpProxy , httpsProxy , noProxy Set custom proxy for Insights Operator URL No default 4.6.1.1. Creating the insights-config ConfigMap object This procedure describes how to create the insights-config ConfigMap object for the Insights Operator to set custom configurations. Important Red Hat recommends you consult Red Hat Support before making changes to the default Insights Operator configuration. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with cluster-admin role. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click Create ConfigMap . Select Configure via: YAML view and enter your configuration preferences, for example apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false Optional: Select Form view and enter the necessary information that way. In the ConfigMap Name field, enter insights-config . In the Key field, enter config.yaml . 
For the Value field, either browse for a file to drag and drop into the field or enter your configuration parameters manually. Click Create and you can see the ConfigMap object and configuration information. 4.6.2. Understanding Insights Operator alerts The Insights Operator declares alerts through the Prometheus monitoring system to the Alertmanager. You can view these alerts in the Alerting UI in the OpenShift Container Platform web console by using one of the following methods: In the Administrator perspective, click Observe Alerting . In the Developer perspective, click Observe <project_name> Alerts tab. Currently, Insights Operator sends the following alerts when the conditions are met: Table 4.2. Insights Operator alerts Alert Description InsightsDisabled Insights Operator is disabled. SimpleContentAccessNotAvailable Simple content access is not enabled in Red Hat Subscription Management. InsightsRecommendationActive Insights has an active recommendation for the cluster. 4.6.2.1. Disabling Insights Operator alerts To prevent the Insights Operator from sending alerts to the cluster Prometheus instance, you create or edit the insights-config ConfigMap object. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. If the insights-config ConfigMap object does not exist, you must create it when you first add custom configurations. Note that configurations within the ConfigMap object take precedence over the default settings defined in the config/pod.yaml file. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the alerting attribute to disabled: true . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | alerting: disabled: true # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml alerting attribute is set to disabled: true . After you save the changes, Insights Operator no longer sends alerts to the cluster Prometheus instance. 4.6.2.2. Enabling Insights Operator alerts When alerts are disabled, the Insights Operator no longer sends alerts to the cluster Prometheus instance. You can reenable them. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. 
Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the alerting attribute to disabled: false . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | alerting: disabled: false # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml alerting attribute is set to disabled: false . After you save the changes, Insights Operator again sends alerts to the cluster Prometheus instance. 4.6.3. Downloading your Insights Operator archive Insights Operator stores gathered data in an archive located in the openshift-insights namespace of your cluster. You can download and review the data that is gathered by the Insights Operator. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Find the name of the running pod for the Insights Operator: USD oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running Copy the recent data archives collected by the Insights Operator: USD oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1 1 Replace <insights_operator_pod_name> with the pod name output from the preceding command. The recent Insights Operator archives are now available in the insights-data directory. 4.6.4. Running an Insights Operator gather operation You can run Insights Operator data gather operations on demand. The following procedures describe how to run the default list of gather operations using the OpenShift web console or CLI. You can customize the on demand gather function to exclude any gather operations you choose. Disabling gather operations from the default list degrades Insights Advisor's ability to offer effective recommendations for your cluster. If you have previously disabled Insights Operator gather operations in your cluster, this procedure will override those parameters. Important The DataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note If you enable Technology Preview in your cluster, the Insights Operator runs gather operations in individual pods. This is part of the Technology Preview feature set for the Insights Operator and supports the new data gathering features. 4.6.4.1. Viewing Insights Operator gather durations You can view the time it takes for the Insights Operator to gather the information contained in the archive. This helps you to understand Insights Operator resource usage and issues with Insights Advisor. Prerequisites A recent copy of your Insights Operator archive. Procedure From your archive, open /insights-operator/gathers.json . The file contains a list of Insights Operator gather operations: { "name": "clusterconfig/authentication", "duration_in_ms": 730, 1 "records_count": 1, "errors": null, "panic": null } 1 duration_in_ms is the amount of time in milliseconds for each gather operation. Inspect each gather operation for abnormalities. 4.6.4.2. 
Running an Insights Operator gather operation from the web console To collect data, you can run an Insights Operator gather operation by using the OpenShift Container Platform web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure On the console, select Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, in the Search by name field, find the DataGather resource definition, and then click it. On the CustomResourceDefinition details page, click the Instances tab. Click Create DataGather . To create a new DataGather operation, edit the following configuration file and then save your changes. apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled 1 Under metadata , replace <your_data_gather> with a unique name for the gather operation. 2 Under gatherers , specify any individual gather operations that you intend to disable. In the example provided, workloads is the only data gather operation that is disabled and all of the other default operations are set to run. When the spec parameter is empty, all of the default gather operations run. Important Do not add a prefix of periodic-gathering- to the name of your gather operation because this string is reserved for other administrative operations and might impact the intended gather operation. Verification On the console, select to Workloads Pods . On the Pods page, go to the Project pull-down menu, and then select Show default projects . Select the openshift-insights project from the Project pull-down menu. Check that your new gather operation is prefixed with your chosen name under the list of pods in the openshift-insights project. Upon completion, the Insights Operator automatically uploads the data to Red Hat for processing. 4.6.4.3. Running an Insights Operator gather operation from the OpenShift CLI You can run an Insights Operator gather operation by using the OpenShift Container Platform command line interface. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Enter the following command to run the gather operation: USD oc apply -f <your_datagather_definition>.yaml Replace <your_datagather_definition>.yaml with a configuration file that contains the following parameters: apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled 1 Under metadata , replace <your_data_gather> with a unique name for the gather operation. 2 Under gatherers , specify any individual gather operations that you intend to disable. In the example provided, workloads is the only data gather operation that is disabled and all of the other default operations are set to run. When the spec parameter is empty, all of the default gather operations run. Important Do not add a prefix of periodic-gathering- to the name of your gather operation because this string is reserved for other administrative operations and might impact the intended gather operation. Verification Check that your new gather operation is prefixed with your chosen name under the list of pods in the openshift-insights project. Upon completion, the Insights Operator automatically uploads the data to Red Hat for processing. Additional resources Insights Operator Gathered Data GitHub repository 4.6.4.4. 
Disabling the Insights Operator gather operations You can disable the Insights Operator gather operations. Disabling the gather operations gives you the ability to increase privacy for your organization as Insights Operator will no longer gather and send Insights cluster reports to Red Hat. This will disable Insights analysis and recommendations for your cluster without affecting other core functions that require communication with Red Hat such as cluster transfers. You can view a list of attempted gather operations for your cluster from the /insights-operator/gathers.json file in your Insights Operator archive. Be aware that some gather operations only occur when certain conditions are met and might not appear in your most recent archive. Important The InsightsDataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note If you enable Technology Preview in your cluster, the Insights Operator runs gather operations in individual pods. This is part of the Technology Preview feature set for the Insights Operator and supports the new data gathering features. Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, use the Search by name field to find the InsightsDataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click cluster , and then click the YAML tab. Disable the gather operations by performing one of the following edits to the InsightsDataGather configuration file: To disable all the gather operations, enter all under the disabledGatherers key: apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: .... spec: 1 gatherConfig: disabledGatherers: - all 2 1 The spec parameter specifies gather configurations. 2 The all value disables all gather operations. To disable individual gather operations, enter their values under the disabledGatherers key: spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info 1 Example individual gather operation Click Save . After you save the changes, the Insights Operator gather configurations are updated and the operations will no longer occur. Note Disabling gather operations degrades Insights Advisor's ability to offer effective recommendations for your cluster. 4.6.4.5. Enabling the Insights Operator gather operations You can enable the Insights Operator gather operations, if the gather operations have been disabled. Important The InsightsDataGather custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. Procedure Navigate to Administration CustomResourceDefinitions . On the CustomResourceDefinitions page, use the Search by name field to find the InsightsDataGather resource definition and click it. On the CustomResourceDefinition details page, click the Instances tab. Click cluster , and then click the YAML tab. Enable the gather operations by performing one of the following edits: To enable all disabled gather operations, remove the gatherConfig stanza: apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: .... spec: gatherConfig: 1 disabledGatherers: all 1 Remove the gatherConfig stanza to enable all gather operations. To enable individual gather operations, remove their values under the disabledGatherers key: spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info 1 Remove one or more gather operations. Click Save . After you save the changes, the Insights Operator gather configurations are updated and the affected gather operations start. Note Disabling gather operations degrades Insights Advisor's ability to offer effective recommendations for your cluster. 4.6.5. Obfuscating Deployment Validation Operator data Cluster administrators can configure the Insights Operator to obfuscate data from the Deployment Validation Operator (DVO), if the Operator is installed. When the workload_names value is added to the insights-config ConfigMap object, workload names, rather than UIDs, are displayed in Insights for OpenShift, making them more recognizable for cluster administrators. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console with the cluster-admin role. The insights-config ConfigMap object exists in the openshift-insights namespace. The cluster is self-managed and the Deployment Validation Operator is installed. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the obfuscation attribute with the workload_names value. apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | dataReporting: obfuscation: - workload_names # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml obfuscation attribute is set to - workload_names . 4.7. Using remote health reporting in a restricted network You can manually gather and upload Insights Operator archives to diagnose issues from a restricted network. To use the Insights Operator in a restricted network, you must: Create a copy of your Insights Operator archive. Upload the Insights Operator archive to console.redhat.com . Additionally, you can choose to obfuscate the Insights Operator data before upload. 4.7.1. Running an Insights Operator gather operation You must run a gather operation to create an Insights Operator archive. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin .
Procedure Create a file named gather-job.yaml using this template: apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}] Copy your insights-operator image version: USD oc get -n openshift-insights deployment insights-operator -o yaml Example output apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights # ... spec: template: # ... spec: containers: - args: # ... image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 # ... 1 Specifies your insights-operator image version. Paste your image version in gather-job.yaml : apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job # ... spec: # ... template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts: 1 Replace any existing value with your insights-operator image version. Create the gather job: USD oc apply -n openshift-insights -f gather-job.yaml Find the name of the job pod: USD oc describe -n openshift-insights job/insights-operator-job Example output Name: insights-operator-job Namespace: openshift-insights # ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job> where insights-operator-job-<your_job> is the name of the pod. Verify that the operation has finished: USD oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator Example output I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms Save the created archive: USD oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data Clean up the job: USD oc delete -n openshift-insights job insights-operator-job 4.7.2. Uploading an Insights Operator archive You can manually upload an Insights Operator archive to console.redhat.com to diagnose potential issues. Prerequisites You are logged in to OpenShift Container Platform as cluster-admin . You have a workstation with unrestricted internet access. You have created a copy of the Insights Operator archive. 
Procedure Download the dockerconfig.json file: USD oc extract secret/pull-secret -n openshift-config --to=. Copy your "cloud.openshift.com" "auth" token from the dockerconfig.json file: { "auths": { "cloud.openshift.com": { "auth": " <your_token> ", "email": "[email protected]" } } Upload the archive to console.redhat.com : USD curl -v -H "User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> " -H "Authorization: Bearer <your_token> " -F "upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar" https://console.redhat.com/api/ingress/v1/upload where <cluster_id> is your cluster ID, <your_token> is the token from your pull secret, and <path_to_archive> is the path to the Insights Operator archive. If the operation is successful, the command returns a "request_id" and "account_number" : Example output * Connection #0 to host console.redhat.com left intact {"request_id":"393a7cf1093e434ea8dd4ab3eb28884c","upload":{"account_number":"6274079"}}% Verification steps Log in to https://console.redhat.com/openshift . Click the Cluster List menu in the left pane. To display the details of the cluster, click the cluster name. Open the Insights Advisor tab of the cluster. If the upload was successful, the tab displays one of the following: Your cluster passed all recommendations , if Insights Advisor did not identify any issues. A list of issues that Insights Advisor has detected, prioritized by risk (low, moderate, important, and critical). 4.7.3. Enabling Insights Operator data obfuscation You can enable obfuscation to mask sensitive and identifiable IPv4 addresses and cluster base domains that the Insights Operator sends to console.redhat.com . Warning Although this feature is available, Red Hat recommends keeping obfuscation disabled for a more effective support experience. Obfuscation assigns non-identifying values to cluster IPv4 addresses, and uses a translation table that is retained in memory to change IP addresses to their obfuscated versions throughout the Insights Operator archive before uploading the data to console.redhat.com . For cluster base domains, obfuscation changes the base domain to a hardcoded substring. For example, cluster-api.openshift.example.com becomes cluster-api.<CLUSTER_BASE_DOMAIN> . The following procedure enables obfuscation using the support secret in the openshift-config namespace. Prerequisites You are logged in to the OpenShift Container Platform web console as cluster-admin . Procedure Navigate to Workloads Secrets . Select the openshift-config project. Search for the support secret using the Search by name field. If it does not exist, click Create Key/value secret to create it. Click the Options menu , and then click Edit Secret . Click Add Key/Value . Create a key named enableGlobalObfuscation with a value of true , and click Save . Navigate to Workloads Pods Select the openshift-insights project. Find the insights-operator pod. To restart the insights-operator pod, click the Options menu , and then click Delete Pod . Verification Navigate to Workloads Secrets . Select the openshift-insights project. Search for the obfuscation-translation-table secret using the Search by name field. If the obfuscation-translation-table secret exists, then obfuscation is enabled and working. Alternatively, you can inspect /insights-operator/gathers.json in your Insights Operator archive for the value "is_global_obfuscation_enabled": true . 
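As a rough sketch of that alternative check from a workstation shell, you can unpack a local copy of the archive and search for the flag. The archive file name below is illustrative; use the file that is present in your local archive directory:
mkdir -p ./insights-archive
tar xzf ./insights-data/<archive_name>.tar.gz -C ./insights-archive
grep -o '"is_global_obfuscation_enabled": *[a-z]*' ./insights-archive/insights-operator/gathers.json
If obfuscation is enabled and working, the grep output ends in true.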
Additional resources For more information on how to download your Insights Operator archive, see Showing data collected by the Insights Operator . 4.8. Importing simple content access entitlements with Insights Operator Insights Operator periodically imports your simple content access entitlements from OpenShift Cluster Manager and stores them in the etc-pki-entitlement secret in the openshift-config-managed namespace. Simple content access is a capability in Red Hat subscription tools which simplifies the behavior of the entitlement tooling. This feature makes it easier to consume the content provided by your Red Hat subscriptions without the complexity of configuring subscription tooling. Note Previously, a cluster administrator would create or edit the Insights Operator configuration using a support secret in the openshift-config namespace. Red Hat Insights now supports the creation of a ConfigMap object to configure the Operator. The Operator gives preference to the config map configuration over the support secret if both exist. The Insights Operator imports simple content access entitlements every eight hours, but can be configured or disabled using the insights-config ConfigMap object in the openshift-insights namespace. Note Simple content access must be enabled in Red Hat Subscription Management for the importing to function. Additional resources See About simple content access in the Red Hat Subscription Central documentation, for more information about simple content access. See Using Red Hat subscriptions in builds for more information about using simple content access entitlements in OpenShift Container Platform builds. 4.8.1. Configuring simple content access import interval You can configure how often the Insights Operator imports the simple content access (sca) entitlements by using the insights-config ConfigMap object in the openshift-insights namespace. The entitlement import normally occurs every eight hours, but you can shorten this sca interval if you update your simple content access configuration in the insights-config ConfigMap object. This procedure describes how to update the import interval to two hours (2h). You can specify hours (h) or hours and minutes, for example: 2h30m. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. Set the sca attribute in the file to interval: 2h to import content every two hours. apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: interval: 2h # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to interval: 2h . 4.8.2. Disabling simple content access import You can disable the importing of simple content access entitlements by using the insights-config ConfigMap object in the openshift-insights namespace. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as cluster-admin . The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . 
Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the sca attribute to disabled: true . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: disabled: true # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to disabled: true . 4.8.3. Enabling a previously disabled simple content access import If the importing of simple content access entitlements is disabled, the Insights Operator does not import simple content access entitlements. You can change this behavior. Prerequisites Remote health reporting is enabled, which is the default. You are logged in to the OpenShift Container Platform web console as a user with the cluster-admin role. The insights-config ConfigMap object exists in the openshift-insights namespace. Procedure Go to Workloads ConfigMaps and select Project: openshift-insights . Click on the insights-config ConfigMap object to open it. Click Actions and select Edit ConfigMap . Click the YAML view radio button. In the file, set the sca attribute to disabled: false . apiVersion: v1 kind: ConfigMap # ... data: config.yaml: | sca: disabled: false # ... Click Save . The insights-config config-map details page opens. Verify that the value of the config.yaml sca attribute is set to disabled: false .
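One quick way to confirm from the CLI that the import is working again is to check for the entitlement secret that the Insights Operator maintains. This is a sketch, not part of the procedure above, and it can take up to the configured sca interval after re-enabling before the secret appears or is refreshed:
oc get secret etc-pki-entitlement -n openshift-config-managed
If the secret is listed, the simple content access entitlements have been imported into the cluster.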
|
[
"curl -G -k -H \"Authorization: Bearer USD(oc whoami -t)\" https://USD(oc get route prometheus-k8s-federate -n openshift-monitoring -o jsonpath=\"{.spec.host}\")/federate --data-urlencode 'match[]={__name__=~\"cluster:usage:.*\"}' --data-urlencode 'match[]={__name__=\"count:up0\"}' --data-urlencode 'match[]={__name__=\"count:up1\"}' --data-urlencode 'match[]={__name__=\"cluster_version\"}' --data-urlencode 'match[]={__name__=\"cluster_version_available_updates\"}' --data-urlencode 'match[]={__name__=\"cluster_version_capability\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_up\"}' --data-urlencode 'match[]={__name__=\"cluster_operator_conditions\"}' --data-urlencode 'match[]={__name__=\"cluster_version_payload\"}' --data-urlencode 'match[]={__name__=\"cluster_installer\"}' --data-urlencode 'match[]={__name__=\"cluster_infrastructure_provider\"}' --data-urlencode 'match[]={__name__=\"cluster_feature_set\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_object_counts:sum\"}' --data-urlencode 'match[]={__name__=\"ALERTS\",alertstate=\"firing\"}' --data-urlencode 'match[]={__name__=\"code:apiserver_request_total:rate:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:capacity_memory_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"workload:cpu_usage_cores:sum\"}' --data-urlencode 'match[]={__name__=\"workload:memory_usage_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:virt_platform_nodes:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:node_instance_type_count:sum\"}' --data-urlencode 'match[]={__name__=\"cnv:vmi_status_running:count\"}' --data-urlencode 'match[]={__name__=\"cluster:vmi_request_cpu_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_cores:sum\"}' --data-urlencode 'match[]={__name__=\"node_role_os_version_machine:cpu_capacity_sockets:sum\"}' --data-urlencode 'match[]={__name__=\"subscription_sync_total\"}' --data-urlencode 'match[]={__name__=\"olm_resolution_duration_seconds\"}' --data-urlencode 'match[]={__name__=\"csv_succeeded\"}' --data-urlencode 'match[]={__name__=\"csv_abnormal\"}' --data-urlencode 'match[]={__name__=\"cluster:kube_persistentvolumeclaim_resource_requests_storage_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kubelet_volume_stats_used_bytes:provisioner:sum\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_cluster_total_used_raw_bytes\"}' --data-urlencode 'match[]={__name__=\"ceph_health_status\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_total_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_raw_capacity_used_bytes\"}' --data-urlencode 'match[]={__name__=\"odf_system_health_status\"}' --data-urlencode 'match[]={__name__=\"job:ceph_osd_metadata:count\"}' --data-urlencode 'match[]={__name__=\"job:kube_pv:count\"}' --data-urlencode 'match[]={__name__=\"job:odf_system_pvs:count\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_pools_iops_bytes:total\"}' --data-urlencode 'match[]={__name__=\"job:ceph_versions_running:count\"}' 
--data-urlencode 'match[]={__name__=\"job:noobaa_total_unhealthy_buckets:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_bucket_count:sum\"}' --data-urlencode 'match[]={__name__=\"job:noobaa_total_object_count:sum\"}' --data-urlencode 'match[]={__name__=\"odf_system_bucket_count\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"odf_system_objects_total\", system_type=\"OCS\", system_vendor=\"Red Hat\"}' --data-urlencode 'match[]={__name__=\"noobaa_accounts_num\"}' --data-urlencode 'match[]={__name__=\"noobaa_total_usage\"}' --data-urlencode 'match[]={__name__=\"console_url\"}' --data-urlencode 'match[]={__name__=\"cluster:ovnkube_master_egress_routing_via_host:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_instances:max\"}' --data-urlencode 'match[]={__name__=\"cluster:network_attachment_definition_enabled_instance_up:max\"}' --data-urlencode 'match[]={__name__=\"cluster:ingress_controller_aws_nlb_active:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:min\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:max\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:avg\"}' --data-urlencode 'match[]={__name__=\"cluster:route_metrics_controller_routes_per_shard:median\"}' --data-urlencode 'match[]={__name__=\"cluster:openshift_route_info:tls_termination:sum\"}' --data-urlencode 'match[]={__name__=\"insightsclient_request_send_total\"}' --data-urlencode 'match[]={__name__=\"cam_app_workload_migrations\"}' --data-urlencode 'match[]={__name__=\"cluster:apiserver_current_inflight_requests:sum:max_over_time:2m\"}' --data-urlencode 'match[]={__name__=\"cluster:alertmanager_integrations:max\"}' --data-urlencode 'match[]={__name__=\"cluster:telemetry_selected_series:count\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_series:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:prometheus_tsdb_head_samples_appended_total:sum\"}' --data-urlencode 'match[]={__name__=\"monitoring:container_memory_working_set_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_series_added:topk3_sum1h\"}' --data-urlencode 'match[]={__name__=\"namespace_job:scrape_samples_post_metric_relabeling:topk3\"}' --data-urlencode 'match[]={__name__=\"monitoring:haproxy_server_http_responses_total:sum\"}' --data-urlencode 'match[]={__name__=\"rhmi_status\"}' --data-urlencode 'match[]={__name__=\"status:upgrading:version:rhoam_state:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_critical_alerts:max\"}' --data-urlencode 'match[]={__name__=\"state:rhoam_warning_alerts:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_percentile:max\"}' --data-urlencode 'match[]={__name__=\"rhoam_7d_slo_remaining_error_budget:max\"}' --data-urlencode 'match[]={__name__=\"cluster_legacy_scheduler_policy\"}' --data-urlencode 'match[]={__name__=\"cluster_master_schedulable\"}' --data-urlencode 'match[]={__name__=\"che_workspace_status\"}' --data-urlencode 'match[]={__name__=\"che_workspace_started_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_failure_total\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_sum\"}' --data-urlencode 'match[]={__name__=\"che_workspace_start_time_seconds_count\"}' --data-urlencode 'match[]={__name__=\"cco_credentials_mode\"}' --data-urlencode 
'match[]={__name__=\"cluster:kube_persistentvolume_plugin_type_counts:sum\"}' --data-urlencode 'match[]={__name__=\"visual_web_terminal_sessions_total\"}' --data-urlencode 'match[]={__name__=\"acm_managed_cluster_info\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_vcenter_info:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_esxi_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:vsphere_node_hw_version_total:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:build_by_strategy:sum\"}' --data-urlencode 'match[]={__name__=\"rhods_aggregate_availability\"}' --data-urlencode 'match[]={__name__=\"rhods_total_users\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_wal_fsync_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_network_peer_round_trip_time_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_mvcc_db_total_size_in_use_in_bytes:sum\"}' --data-urlencode 'match[]={__name__=\"instance:etcd_disk_backend_commit_duration_seconds:histogram_quantile\",quantile=\"0.99\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_storage_types\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_strategies\"}' --data-urlencode 'match[]={__name__=\"jaeger_operator_instances_agent_strategies\"}' --data-urlencode 'match[]={__name__=\"appsvcs:cores_by_product:sum\"}' --data-urlencode 'match[]={__name__=\"nto_custom_profiles:count\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_configmap\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_secret\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_failures_total\"}' --data-urlencode 'match[]={__name__=\"openshift_csi_share_mount_requests_total\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_backup_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:velero_restore_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_storage_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_redundancy_policy_info\"}' --data-urlencode 'match[]={__name__=\"eo_es_defined_delete_namespaces_total\"}' --data-urlencode 'match[]={__name__=\"eo_es_misconfigured_memory_resources_info\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_data_nodes_total:max\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_created_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:eo_es_documents_deleted_total:sum\"}' --data-urlencode 'match[]={__name__=\"pod:eo_es_shards_total:max\"}' --data-urlencode 'match[]={__name__=\"eo_es_cluster_management_state_info\"}' --data-urlencode 'match[]={__name__=\"imageregistry:imagestreamtags_count:sum\"}' --data-urlencode 'match[]={__name__=\"imageregistry:operations_count:sum\"}' --data-urlencode 'match[]={__name__=\"log_logging_info\"}' --data-urlencode 'match[]={__name__=\"log_collector_error_count_total\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_pipeline_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_input_info\"}' --data-urlencode 'match[]={__name__=\"log_forwarder_output_info\"}' --data-urlencode 'match[]={__name__=\"cluster:log_collected_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:log_logged_bytes_total:sum\"}' --data-urlencode 'match[]={__name__=\"cluster:kata_monitor_running_shim_count:sum\"}' --data-urlencode 
'match[]={__name__=\"platform:hypershift_hostedclusters:max\"}' --data-urlencode 'match[]={__name__=\"platform:hypershift_nodepools:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_bucket_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_buckets_claims:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_resources:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_unhealthy_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_namespace_buckets:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_accounts:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_usage:max\"}' --data-urlencode 'match[]={__name__=\"namespace:noobaa_system_health_status:max\"}' --data-urlencode 'match[]={__name__=\"ocs_advanced_feature_usage\"}' --data-urlencode 'match[]={__name__=\"os_image_url_override:sum\"}' --data-urlencode 'match[]={__name__=\"openshift:openshift_network_operator_ipsec_state:info\"}'",
"INSIGHTS_OPERATOR_POD=USD(oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running)",
"oc cp openshift-insights/USDINSIGHTS_OPERATOR_POD:/var/lib/insights-operator ./insights-data",
"oc extract secret/pull-secret -n openshift-config --to=.",
"\"cloud.openshift.com\":{\"auth\":\"<hash>\",\"email\":\"<email_address>\"}",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' ><pull_secret_location> 1",
"oc registry login --registry=\"<registry>\" \\ 1 --auth-basic=\"<username>:<password>\" \\ 2 --to=<pull_secret_location> 3",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=<pull_secret_location> 1",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \" <email_address> \" } } }",
"oc get secret/pull-secret -n openshift-config --template='{{index .data \".dockerconfigjson\" | base64decode}}' > pull-secret",
"cp pull-secret pull-secret-backup",
"oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=pull-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: insights-config namespace: openshift-insights data: config.yaml: | dataReporting: obfuscation: - networking - workload_names sca: disabled: false interval: 2h alerting: disabled: false binaryData: {} immutable: false",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | alerting: disabled: false",
"oc get pods --namespace=openshift-insights -o custom-columns=:metadata.name --no-headers --field-selector=status.phase=Running",
"oc cp openshift-insights/<insights_operator_pod_name>:/var/lib/insights-operator ./insights-data 1",
"{ \"name\": \"clusterconfig/authentication\", \"duration_in_ms\": 730, 1 \"records_count\": 1, \"errors\": null, \"panic\": null }",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"oc apply -f <your_datagather_definition>.yaml",
"apiVersion: insights.openshift.io/v1alpha1 kind: DataGather metadata: name: <your_data_gather> 1 spec: gatherers: 2 - name: workloads state: Disabled",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: 1 gatherConfig: disabledGatherers: - all 2",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: config.openshift.io/v1alpha1 kind: InsightsDataGather metadata: . spec: gatherConfig: 1 disabledGatherers: all",
"spec: gatherConfig: disabledGatherers: - clusterconfig/container_images 1 - clusterconfig/host_subnets - workloads/workload_info",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | dataReporting: obfuscation: - workload_names",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job annotations: config.openshift.io/inject-proxy: insights-operator spec: backoffLimit: 6 ttlSecondsAfterFinished: 600 template: spec: restartPolicy: OnFailure serviceAccountName: operator nodeSelector: beta.kubernetes.io/os: linux node-role.kubernetes.io/master: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master operator: Exists - effect: NoExecute key: node.kubernetes.io/unreachable operator: Exists tolerationSeconds: 900 - effect: NoExecute key: node.kubernetes.io/not-ready operator: Exists tolerationSeconds: 900 volumes: - name: snapshots emptyDir: {} - name: service-ca-bundle configMap: name: service-ca-bundle optional: true initContainers: - name: insights-operator image: quay.io/openshift/origin-insights-operator:latest terminationMessagePolicy: FallbackToLogsOnError volumeMounts: - name: snapshots mountPath: /var/lib/insights-operator - name: service-ca-bundle mountPath: /var/run/configmaps/service-ca-bundle readOnly: true ports: - containerPort: 8443 name: https resources: requests: cpu: 10m memory: 70Mi args: - gather - -v=4 - --config=/etc/insights-operator/server.yaml containers: - name: sleepy image: quay.io/openshift/origin-base:latest args: - /bin/sh - -c - sleep 10m volumeMounts: [{name: snapshots, mountPath: /var/lib/insights-operator}]",
"oc get -n openshift-insights deployment insights-operator -o yaml",
"apiVersion: apps/v1 kind: Deployment metadata: name: insights-operator namespace: openshift-insights spec: template: spec: containers: - args: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1",
"apiVersion: batch/v1 kind: Job metadata: name: insights-operator-job spec: template: spec: initContainers: - name: insights-operator image: image: registry.ci.openshift.org/ocp/4.15-2023-10-12-212500@sha256:a0aa581400805ad0... 1 terminationMessagePolicy: FallbackToLogsOnError volumeMounts:",
"oc apply -n openshift-insights -f gather-job.yaml",
"oc describe -n openshift-insights job/insights-operator-job",
"Name: insights-operator-job Namespace: openshift-insights Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 7m18s job-controller Created pod: insights-operator-job-<your_job>",
"oc logs -n openshift-insights insights-operator-job-<your_job> insights-operator",
"I0407 11:55:38.192084 1 diskrecorder.go:34] Wrote 108 records to disk in 33ms",
"oc cp openshift-insights/insights-operator-job- <your_job> :/var/lib/insights-operator ./insights-data",
"oc delete -n openshift-insights job insights-operator-job",
"oc extract secret/pull-secret -n openshift-config --to=.",
"{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \" <your_token> \", \"email\": \"[email protected]\" } }",
"curl -v -H \"User-Agent: insights-operator/one10time200gather184a34f6a168926d93c330 cluster/ <cluster_id> \" -H \"Authorization: Bearer <your_token> \" -F \"upload=@ <path_to_archive> ; type=application/vnd.redhat.openshift.periodic+tar\" https://console.redhat.com/api/ingress/v1/upload",
"* Connection #0 to host console.redhat.com left intact {\"request_id\":\"393a7cf1093e434ea8dd4ab3eb28884c\",\"upload\":{\"account_number\":\"6274079\"}}%",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: interval: 2h",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: true",
"apiVersion: v1 kind: ConfigMap data: config.yaml: | sca: disabled: false"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/support/remote-health-monitoring-with-connected-clusters
|
Chapter 1. Introduction to performance tuning
|
Chapter 1. Introduction to performance tuning This document provides guidelines for tuning Red Hat Satellite for performance and scalability. Although care has been taken to make the content apply to a wide range of use cases, if your use case is not covered, contact Red Hat support for assistance.
| null |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/tuning_performance_of_red_hat_satellite/introduction_to_performance_tuning_performance-tuning
|
1.2. Supported Virtual Machine Operating Systems
|
1.2. Supported Virtual Machine Operating Systems For information on the operating systems that can be virtualized as guest operating systems in Red Hat Virtualization, see https://access.redhat.com/articles/973163 . For information on customizing the operating systems, see Section 4.1, "Configuring Operating Systems with osinfo" .
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/supported_virtual_machines
|
Deploying OpenShift Data Foundation using IBM Cloud
|
Deploying OpenShift Data Foundation using IBM Cloud Red Hat OpenShift Data Foundation 4.17 Instructions on deploying Red Hat OpenShift Data Foundation using IBM Cloud Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on IBM cloud clusters.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_cloud/index
|
Chapter 9. Installing a cluster on Azure into a government region
|
Chapter 9. Installing a cluster on Azure into a government region In OpenShift Container Platform version 4.16, you can install a cluster on Microsoft Azure into a government region. To configure the government region, you modify parameters in the install-config.yaml file before you install the cluster. 9.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated government region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain long-term credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 9.2. Azure government regions OpenShift Container Platform supports deploying a cluster to Microsoft Azure Government (MAG) regions. MAG is specifically designed for US government agencies at the federal, state, and local level, as well as contractors, educational institutions, and other US customers that must run sensitive workloads on Azure. MAG is composed of government-only data center regions, all granted an Impact Level 5 Provisional Authorization . Installing to a MAG region requires manually configuring the Azure Government dedicated cloud instance and region in the install-config.yaml file. You must also update your service principal to reference the appropriate government environment. Note The Azure government region cannot be selected using the guided terminal prompts from the installation program. You must define the region manually in the install-config.yaml file. Remember to also set the dedicated cloud instance, like AzureUSGovernmentCloud , based on the region specified. 9.3. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Your cluster resources might be shared between other clusters on the network. Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 9.3.1. 
Private clusters in Azure To create a private cluster on Microsoft Azure, you must provide an existing private VNet and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. Depending on how your network connects to the private VNet, you might need to use a DNS forwarder to resolve the cluster's private DNS records. The cluster's machines use 168.63.129.16 internally for DNS resolution. For more information, see What is Azure Private DNS? and What is IP address 168.63.129.16? in the Azure documentation. The cluster still requires access to the internet to access the Azure APIs. The following items are not required or created when you install a private cluster: A BaseDomainResourceGroup , since the cluster does not create public records Public IP addresses Public DNS records Public endpoints 9.3.1.1. Limitations Private clusters on Azure are subject to only the limitations that are associated with the use of an existing VNet. 9.3.2. User-defined outbound routing In OpenShift Container Platform, you can choose your own outbound routing for a cluster to connect to the internet. This allows you to skip the creation of public IP addresses and the public load balancer. You can configure user-defined routing by modifying parameters in the install-config.yaml file before installing your cluster. A pre-existing VNet is required to use outbound routing when installing a cluster; the installation program is not responsible for configuring this. When configuring a cluster to use user-defined routing, the installation program does not create the following resources: Outbound rules for access to the internet. Public IPs for the public load balancer. Kubernetes Service object to add the cluster machines to the public load balancer for outbound requests. You must ensure the following items are available before setting user-defined routing: Egress to the internet is possible to pull container images, unless using an OpenShift image registry mirror. The cluster can access Azure APIs. Various allowlist endpoints are configured. You can reference these endpoints in the Configuring your firewall section. There are several pre-existing networking setups that are supported for internet access using user-defined routing. 9.4. About reusing a VNet for your OpenShift Container Platform cluster In OpenShift Container Platform 4.16, you can deploy a cluster into an existing Azure Virtual Network (VNet) in Microsoft Azure. If you do, you must also use existing subnets within the VNet and routing rules. By deploying OpenShift Container Platform into an existing Azure VNet, you might be able to avoid service limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. This is a good option to use if you cannot obtain the infrastructure creation permissions that are required to create the VNet. 9.4.1. Requirements for using your VNet When you deploy a cluster by using an existing VNet, you must perform additional network configuration before you install the cluster. In installer-provisioned infrastructure clusters, the installer usually creates the following components, but it does not create them when you install into an existing VNet: Subnets Route tables VNets Network Security Groups Note The installation program requires that you use the cloud-provided DNS server. 
Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VNet, you must correctly configure it and its subnets for the installation program and the cluster to use. The installation program cannot subdivide network ranges for the cluster to use, set route tables for the subnets, or set VNet options like DHCP, so you must do so before you install the cluster. The cluster must be able to access the resource group that contains the existing VNet and subnets. While all of the resources that the cluster creates are placed in a separate resource group that it creates, some network resources are used from a separate group. Some cluster Operators must be able to access resources in both resource groups. For example, the Machine API controller attaches NICS for the virtual machines that it creates to subnets from the networking resource group. Your VNet must meet the following characteristics: The VNet's CIDR block must contain the Networking.MachineCIDR range, which is the IP address pool for cluster machines. The VNet and its subnets must belong to the same resource group, and the subnets must be configured to use Azure-assigned DHCP IP addresses instead of static IP addresses. You must provide two subnets within your VNet, one for the control plane machines and one for the compute machines. Because Azure distributes machines in different availability zones within the region that you specify, your cluster will have high availability by default. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the specified subnets exist. There are two private subnets, one for the control plane machines and one for the compute machines. The subnet CIDRs belong to the machine CIDR that you specified. Machines are not provisioned in availability zones that you do not provide private subnets for. If required, the installation program creates public load balancers that manage the control plane and worker nodes, and Azure allocates a public IP address to them. Note If you destroy a cluster that uses an existing VNet, the VNet is not deleted. 9.4.1.1. Network security group requirements The network security groups for the subnets that host the compute and control plane machines require specific access to ensure that the cluster communication is correct. You must create rules to allow access to the required cluster communication ports. Important The network security group rules must be in place before you install the cluster. If you attempt to install a cluster without the required access, the installation program cannot reach the Azure APIs, and installation fails. Table 9.1. 
Required ports Port Description Control plane Compute 80 Allows HTTP traffic x 443 Allows HTTPS traffic x 6443 Allows communication to the control plane machines x 22623 Allows internal communication to the machine config server for provisioning machines x If you are using Azure Firewall to restrict the internet access, then you can configure Azure Firewall to allow the Azure APIs . A network security group rule is not needed. Important Currently, there is no supported way to block or restrict the machine config server endpoint. The machine config server must be exposed to the network so that newly-provisioned machines, which have no existing configuration or state, are able to fetch their configuration. In this model, the root of trust is the certificate signing requests (CSR) endpoint, which is where the kubelet sends its certificate signing request for approval to join the cluster. Because of this, machine configs should not be used to distribute sensitive information, such as secrets and certificates. To ensure that the machine config server endpoints, ports 22623 and 22624, are secured in bare metal scenarios, customers must configure proper network policies. Because cluster components do not modify the user-provided network security groups, which the Kubernetes controllers update, a pseudo-network security group is created for the Kubernetes controller to modify without impacting the rest of the environment. Table 9.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If you configure an external NTP time server, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 9.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Additional resources About the OpenShift SDN network plugin Configuring your firewall 9.4.2. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, storage, and load balancers, but not networking-related components such as VNets, subnet, or ingress rules. The Azure credentials that you use when you create your cluster do not need the networking permissions that are required to make VNets and core networking components within the VNet, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as load balancers, security groups, storage accounts, and nodes. 9.4.3. 
Isolation between clusters Because the cluster is unable to modify network security groups in an existing subnet, there is no way to isolate clusters from each other on the VNet. 9.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 9.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. 
SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 9.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 9.8. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. 
Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for Azure 9.8.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 9.4. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 9.8.2. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 9.1. 
Machine types based on 64-bit x86 architecture standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSMediumMemoryv2Family standardMISMediumMemoryv2Family standardMSFamily standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 9.8.3. Enabling trusted launch for Azure VMs You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules . See the Azure documentation about virtual machine sizes to learn what sizes of virtual machines support these features. Important Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 1 Specify controlPlane.platform.azure or compute.platform.azure to enable trusted launch on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to enable trusted launch on all nodes. 2 Enable trusted launch features. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 9.8.4. Enabling confidential VMs You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes. Important Using confidential VMs is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can use confidential VMs with the following VM sizes: DCasv5-series DCadsv5-series ECasv5-series ECadsv5-series Important Confidential VMs are currently not supported on 64-bit ARM architectures. Prerequisites You have created an install-config.yaml file. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add the following stanza: controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5 1 Specify controlPlane.platform.azure or compute.platform.azure to deploy confidential VMs on only control plane or compute nodes respectively. Specify platform.azure.defaultMachinePlatform to deploy confidential VMs on all nodes. 2 Enable confidential VMs. 3 Enable secure boot. For more information, see the Azure documentation about secure boot . 4 Enable the virtualized Trusted Platform Module. For more information, see the Azure documentation about virtualized Trusted Platform Modules . 5 Specify VMGuestStateOnly to encrypt the VM guest state. 9.8.5. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: usgovvirginia resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzureUSGovernmentCloud 20 pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24 1 10 21 Required. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher , offer , sku , and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. 
If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters. 13 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 15 If you use an existing VNet, specify the name of the resource group that contains it. 16 If you use an existing VNet, specify its name. 17 If you use an existing VNet, specify the name of the subnet to host the control plane machines. 18 If you use an existing VNet, specify the name of the subnet to host the compute machines. 19 You can customize your own outbound routing. Configuring user-defined routing prevents exposing external endpoints in your cluster. User-defined routing for egress requires deploying your cluster to an existing VNet. 20 Specify the name of the Azure cloud environment to deploy your cluster to. Set AzureUSGovernmentCloud to deploy to a Microsoft Azure Government (MAG) region. The default value is AzurePublicCloud . 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 How to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster, which cannot be accessed from the internet. The default value is External . 9.8.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 9.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have an Azure subscription ID and tenant ID. If you are installing the cluster using a service principal, you have its application ID and password. 
If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from. If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites: You have its client ID. You have assigned it to the virtual machine that you will run the installation program from. Procedure Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file. Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation. Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values. Enter the following Azure parameter values for your subscription: azure subscription id : Enter the subscription ID to use for the cluster. azure tenant id : Enter the tenant ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id : If you are using a service principal, enter its application ID. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, specify its client ID. Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret : If you are using a service principal, enter its password. If you are using a system-assigned managed identity, leave this value blank. If you are using a user-assigned managed identity, leave this value blank. If not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 9.10. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.16. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.16 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.16 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 9.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 9.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 9.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
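The network security group port requirements and the Azure Government environment described earlier in this chapter can also be prepared with the Azure CLI before you run the installation program. The following shell sketch is illustrative only and is not part of the official procedure: the resource group and NSG names ( vnet_resource_group , control_plane_nsg , compute_nsg ) are hypothetical placeholders, and the rule priorities must be chosen to fit your existing rule set.

# Point the Azure CLI at the Microsoft Azure Government cloud before creating
# the service principal or any network resources (hypothetical resource names below).
az cloud set --name AzureUSGovernment
az login

# Allow the OpenShift API and machine config server ports on the NSG that
# protects the control plane subnet.
az network nsg rule create \
  --resource-group vnet_resource_group \
  --nsg-name control_plane_nsg \
  --name allow-openshift-control-plane \
  --priority 200 --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 6443 22623

# Allow HTTP and HTTPS ingress traffic on the NSG that protects the compute subnet.
az network nsg rule create \
  --resource-group vnet_resource_group \
  --nsg-name compute_nsg \
  --name allow-openshift-ingress \
  --priority 210 --direction Inbound --access Allow --protocol Tcp \
  --destination-port-ranges 80 443

After the rules are in place, running az cloud show --query name is one way to confirm that the CLI is still targeting the Government environment before you run ./openshift-install create cluster.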
|
[
"The cluster is configured so that the Operators do not create public records for the cluster and all cluster machines are placed in the private subnets that you specify.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"controlPlane: 1 platform: azure: settings: securityType: TrustedLaunch 2 trustedLaunch: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4",
"controlPlane: 1 platform: azure: settings: securityType: ConfidentialVM 2 confidentialVM: uefiSettings: secureBoot: Enabled 3 virtualizedTrustedPlatformModule: Enabled 4 osDisk: securityProfile: securityEncryptionType: VMGuestStateOnly 5",
"apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id osImage: publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: osImage: 12 publisher: example_publisher_name offer: example_image_offer sku: example_offer_sku version: example_image_version ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 13 region: usgovvirginia resourceGroupName: existing_resource_group 14 networkResourceGroupName: vnet_resource_group 15 virtualNetwork: vnet 16 controlPlaneSubnet: control_plane_subnet 17 computeSubnet: compute_subnet 18 outboundType: UserDefinedRouting 19 cloudName: AzureUSGovernmentCloud 20 pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 publish: Internal 24",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_azure/installing-azure-government-region
|
2.8.9.2.4. IPTables Match Options
|
2.8.9.2.4. IPTables Match Options Different network protocols provide specialized matching options which can be configured to match a particular packet using that protocol. However, the protocol must first be specified in the iptables command. For example, -p <protocol-name> enables options for the specified protocol. Note that you can also use the protocol ID, instead of the protocol name. Refer to the following examples, each of which has the same effect: Service definitions are provided in the /etc/services file. For readability, it is recommended that you use the service names rather than the port numbers. Warning Secure the /etc/services file to prevent unauthorized editing. If this file is editable, attackers can use it to enable ports on your machine you have otherwise closed. To secure this file, run the following commands as root: This prevents the file from being renamed, deleted or having links made to it. 2.8.9.2.4.1. TCP Protocol These match options are available for the TCP protocol ( -p tcp ): --dport - Sets the destination port for the packet. To configure this option, use a network service name (such as www or smtp); a port number; or a range of port numbers. To specify a range of port numbers, separate the two numbers with a colon ( : ). For example: -p tcp --dport 3000:3200 . The largest acceptable valid range is 0:65535 . Use an exclamation point character ( ! ) after the --dport option to match all packets that do not use that network service or port. To browse the names and aliases of network services and the port numbers they use, view the /etc/services file. The --destination-port match option is synonymous with --dport . --sport - Sets the source port of the packet using the same options as --dport . The --source-port match option is synonymous with --sport . --syn - Applies to all TCP packets designed to initiate communication, commonly called SYN packets . Any packets that carry a data payload are not touched. Use an exclamation point character ( ! ) before the --syn option to match all non-SYN packets. --tcp-flags <tested flag list> <set flag list> - Allows TCP packets that have specific bits (flags) set, to match a rule. The --tcp-flags match option accepts two parameters. The first parameter is the mask; a comma-separated list of flags to be examined in the packet. The second parameter is a comma-separated list of flags that must be set for the rule to match. The possible flags are: ACK FIN PSH RST SYN URG ALL NONE For example, an iptables rule that contains the following specification only matches TCP packets that have the SYN flag set and the ACK and FIN flags not set: --tcp-flags ACK,FIN,SYN SYN Use the exclamation point character ( ! ) after the --tcp-flags to reverse the effect of the match option. --tcp-option - Attempts to match with TCP-specific options that can be set within a particular packet. This match option can also be reversed by using the exclamation point character ( ! ) after the option. 2.8.9.2.4.2. UDP Protocol These match options are available for the UDP protocol ( -p udp ): --dport - Specifies the destination port of the UDP packet, using the service name, port number, or range of port numbers. The --destination-port match option is synonymous with --dport . --sport - Specifies the source port of the UDP packet, using the service name, port number, or range of port numbers. The --source-port match option is synonymous with --sport . 
For the --dport and --sport options, to specify a range of port numbers, separate the two numbers with a colon (:). For example: -p tcp --dport 3000:3200 . The largest acceptable valid range is 0:65535 . 2.8.9.2.4.3. ICMP Protocol The following match option is available for the Internet Control Message Protocol (ICMP) ( -p icmp ): --icmp-type - Sets the name or number of the ICMP type to match with the rule. A list of valid ICMP names can be retrieved by typing the iptables -p icmp -h command. 2.8.9.2.4.4. Additional Match Option Modules Additional match options are available through modules loaded by the iptables command. To use a match option module, load the module by name using the -m <module-name> , where <module-name> is the name of the module. Many modules are available by default. You can also create modules to provide additional functionality. The following is a partial list of the most commonly used modules: limit module - Places limits on how many packets are matched to a particular rule. When used in conjunction with the LOG target, the limit module can prevent a flood of matching packets from filling up the system log with repetitive messages or using up system resources. Refer to Section 2.8.9.2.5, "Target Options" for more information about the LOG target. The limit module enables the following options: --limit - Sets the maximum number of matches for a particular time period, specified as a <value>/<period> pair. For example, using --limit 5/hour allows five rule matches per hour. Periods can be specified in seconds, minutes, hours, or days. If a number and time modifier are not used, the default value of 3/hour is assumed. --limit-burst - Sets a limit on the number of packets able to match a rule at one time. This option is specified as an integer and should be used in conjunction with the --limit option. If no value is specified, the default value of five (5) is assumed. state module - Enables state matching. The state module enables the following options: --state - match a packet with the following connection states: ESTABLISHED - The matching packet is associated with other packets in an established connection. You need to accept this state if you want to maintain a connection between a client and a server. INVALID - The matching packet cannot be tied to a known connection. NEW - The matching packet is either creating a new connection or is part of a two-way connection not previously seen. You need to accept this state if you want to allow new connections to a service. RELATED - The matching packet is starting a new connection related in some way to an existing connection. An example of this is FTP, which uses one connection for control traffic (port 21), and a separate connection for data transfer (port 20). These connection states can be used in combination with one another by separating them with commas, such as -m state --state INVALID,NEW . mac module - Enables hardware MAC address matching. The mac module enables the following option: --mac-source - Matches a MAC address of the network interface card that sent the packet. To exclude a MAC address from a rule, place an exclamation point character ( ! ) after the --mac-source match option. Refer to the iptables man page for more match options available through modules.
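As an illustration of how these options combine, the following two rules are a minimal sketch rather than a recommendation from this guide (the choice of port 22 and the log prefix are assumptions): the first rule uses the tcp protocol match together with the state module to accept new and established SSH connections, and the second uses the limit module to keep log entries for other inbound packets from flooding the system log: ~]# iptables -A INPUT -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT ~]# iptables -A INPUT -m limit --limit 5/minute -j LOG --log-prefix "INPUT packet: "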
|
[
"~]# iptables -A INPUT -p icmp --icmp-type any -j ACCEPT ~]# iptables -A INPUT -p 5813 --icmp-type any -j ACCEPT",
"~]# chown root.root /etc/services ~]# chmod 0644 /etc/services ~]# chattr +i /etc/services"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Command_Options_for_IPTables-IPTables_Match_Options
|
2.8. Firewalls
|
2.8. Firewalls Information security is commonly thought of as a process and not a product. However, standard security implementations usually employ some form of dedicated mechanism to control access privileges and restrict network resources to users who are authorized, identifiable, and traceable. Red Hat Enterprise Linux includes several tools to assist administrators and security engineers with network-level access control issues. Firewalls are one of the core components of a network security implementation. Several vendors market firewall solutions catering to all levels of the marketplace: from home users protecting one PC to data center solutions safeguarding vital enterprise information. Firewalls can be stand-alone hardware solutions, such as firewall appliances by Cisco, Nokia, and Sonicwall. Vendors such as Checkpoint, McAfee, and Symantec have also developed proprietary software firewall solutions for home and business markets. Apart from the differences between hardware and software firewalls, there are also differences in the way firewalls function that separate one solution from another. Table 2.6, "Firewall Types" details three common types of firewalls and how they function: Table 2.6. Firewall Types Method Description Advantages Disadvantages NAT Network Address Translation (NAT) places private IP subnetworks behind one or a small pool of public IP addresses, masquerading all requests to one source rather than several. The Linux kernel has built-in NAT functionality through the Netfilter kernel subsystem. Can be configured transparently to machines on a LAN. Protection of many machines and services behind one or more external IP addresses simplifies administration duties. Restriction of user access to and from the LAN can be configured by opening and closing ports on the NAT firewall/gateway. Cannot prevent malicious activity once users connect to a service outside of the firewall. Packet Filter A packet filtering firewall reads each data packet that passes through a LAN. It can read and process packets by header information and filters the packet based on sets of programmable rules implemented by the firewall administrator. The Linux kernel has built-in packet filtering functionality through the Netfilter kernel subsystem. Customizable through the iptables front-end utility. Does not require any customization on the client side, as all network activity is filtered at the router level rather than the application level. Since packets are not transmitted through a proxy, network performance is faster due to direct connection from client to remote host. Cannot filter packets for content like proxy firewalls. Processes packets at the protocol layer, but cannot filter packets at an application layer. Complex network architectures can make establishing packet filtering rules difficult, especially if coupled with IP masquerading or local subnets and DMZ networks. Proxy Proxy firewalls filter all requests of a certain protocol or type from LAN clients to a proxy machine, which then makes those requests to the Internet on behalf of the local client. A proxy machine acts as a buffer between malicious remote users and the internal network client machines. Gives administrators control over what applications and protocols function outside of the LAN. Some proxy servers can cache frequently-accessed data locally rather than having to use the Internet connection to request it. This helps to reduce bandwidth consumption. 
Proxy services can be logged and monitored closely, allowing tighter control over resource utilization on the network. Proxies are often application-specific (HTTP, Telnet, etc.), or protocol-restricted (most proxies work with TCP-connected services only). Application services cannot run behind a proxy, so your application servers must use a separate form of network security. Proxies can become a network bottleneck, as all requests and transmissions are passed through one source rather than directly from a client to a remote service. 2.8.1. Netfilter and IPTables The Linux kernel features a powerful networking subsystem called Netfilter . The Netfilter subsystem provides stateful or stateless packet filtering as well as NAT and IP masquerading services. Netfilter also has the ability to mangle IP header information for advanced routing and connection state management. Netfilter is controlled using the iptables tool. 2.8.1.1. IPTables Overview The power and flexibility of Netfilter is implemented using the iptables administration tool, a command line tool similar in syntax to its predecessor, ipchains , which Netfilter/iptables replaced in the Linux kernel 2.4 and above. iptables uses the Netfilter subsystem to enhance network connection, inspection, and processing. iptables features advanced logging, pre- and post-routing actions, network address translation, and port forwarding, all in one command line interface. This section provides an overview of iptables . For more detailed information, see Section 2.8.9, "IPTables" .
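To see what the iptables tool looks like in practice before reading the detailed sections, the following commands are a minimal sketch (the eth0 interface name is an assumption and must match your external interface): ~]# iptables -L -n -v lists the currently loaded rules together with packet and byte counters, and ~]# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE appends a typical NAT masquerading rule of the kind described in Table 2.6, "Firewall Types".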
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-firewalls
|
Chapter 2. Evaluate AMQ Streams
|
Chapter 2. Evaluate AMQ Streams The procedures in this chapter provide a quick way to evaluate the functionality of AMQ Streams. Follow the steps in the order provided to install AMQ Streams, and start sending and receiving messages from a topic: Ensure you have the required prerequisites Install AMQ Streams Create a Kafka cluster Enable authentication for secure access to the Kafka cluster Access the Kafka cluster to send and receive messages Ensure you have the prerequisites and then follow the tasks in the order provided in this chapter. 2.1. Prerequisites An OpenShift Container Platform cluster (4.6 and later) on which to deploy AMQ Streams must be running. You need to be able to access the AMQ Streams download site . 2.2. Downloading AMQ Streams A ZIP file contains the resources required for installation of AMQ Streams, along with examples for configuration. Procedure Ensure your subscription has been activated and your system is registered. For more information about using the Customer Portal to activate your Red Hat subscription and register your system for packages, see Appendix A, Using your subscription . Download the amq-streams-x.y.z-ocp-install-examples.zip file from the AMQ Streams download site . Unzip the file to any destination. Windows or Mac: Extract the contents of the ZIP archive by double-clicking the ZIP file. Red Hat Enterprise Linux: Open a terminal window on the target machine and navigate to where the ZIP file was downloaded. Extract the ZIP file with this command: unzip amq-streams-x.y.z-ocp-install-examples.zip 2.3. Installing AMQ Streams You install AMQ Streams with the Custom Resource Definitions (CRDs) required for deployment. In this task you create namespaces in the cluster for your deployment. It is good practice to use namespaces to separate functions. Prerequisites Installation requires a user with cluster-admin role, such as system:admin . Procedure Log in to the OpenShift cluster using an account that has cluster admin privileges. For example: oc login -u system:admin Create a new kafka (project) namespace for the AMQ Streams Kafka Cluster Operator. oc new-project kafka Modify the installation files to reference the new kafka namespace where you will install the AMQ Streams Kafka Cluster Operator. Note By default, the files work in the myproject namespace. On Linux, use: sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml On Mac, use: sed -i '' 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml Deploy the CRDs and role-based access control (RBAC) resources to manage the CRDs. oc project kafka oc apply -f install/cluster-operator/ Create a new my-kafka-project namespace where you will deploy your Kafka cluster. oc new-project my-kafka-project Give access to my-kafka-project to a non-admin user developer . For example: oc adm policy add-role-to-user admin developer -n my-kafka-project Set the value of the STRIMZI_NAMESPACE environment variable to give permission to the Cluster Operator to watch the my-kafka-project namespace. 
oc set env deploy/strimzi-cluster-operator STRIMZI_NAMESPACE=kafka,my-kafka-project -n kafka oc apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n my-kafka-project oc apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n my-kafka-project oc apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n my-kafka-project The commands create role bindings that grant permission for the Cluster Operator to access the Kafka cluster. Create a new cluster role strimzi-admin . oc apply -f install/strimzi-admin Add the role to the non-admin user developer . oc adm policy add-cluster-role-to-user strimzi-admin developer 2.4. Creating a cluster With AMQ Streams installed, you create a Kafka cluster, then a topic within the cluster. When you create a cluster, the Cluster Operator you deployed when installing AMQ Streams watches for new Kafka resources. Prerequisites For the Kafka cluster, ensure a Cluster Operator is deployed. For the topic, you must have a running Kafka cluster. Procedure Log in to the my-kafka-project namespace as user developer . For example: After new users log in to OpenShift Container Platform, an account is created for that user. Create a new my-cluster Kafka cluster with 3 ZooKeeper and 3 broker nodes. Use ephemeral storage Expose the Kafka cluster outside of the OpenShift cluster using an external listener configured to use route . cat << EOF | oc create -f - apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: route tls: true storage: type: ephemeral zookeeper: replicas: 3 storage: type: ephemeral entityOperator: topicOperator: {} EOF Wait for the cluster to be deployed: oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n my-kafka-project When your cluster is ready, create a topic to publish and subscribe from your external client. Create the following my-topic custom resource definition with 3 replicas and 3 partitions in the my-cluster Kafka cluster: cat << EOF | oc create -f - apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: "my-cluster" spec: partitions: 3 replicas: 3 EOF 2.5. Accessing the cluster As route is used for external access to the cluster, a cluster CA certificate is required to enable TLS (Transport Layer Security) encryption between the broker and the client. Prerequisites You need a Kafka cluster running within the OpenShift cluster. The Cluster Operator must also be running. Procedure Find the address of the bootstrap route : oc get routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{"\n"}' Use the address together with port 443 in your Kafka client as the bootstrap address. Extract the public certificate of the broker certification authority: oc extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt Import the trusted certificate to a truststore: keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt You are now ready to start sending and receiving messages. 2.6. Sending and receiving messages from a topic You can test your AMQ Streams installation by sending and receiving messages outside the cluster from my-topic . Use a terminal to run a Kafka producer and consumer on a local machine. 
Prerequisites Ensure AMQ Streams is installed on the OpenShift cluster. ZooKeeper and Kafka must be running to be able to send and receive messages. You need a cluster CA certificate for access to the cluster . You must be able to access the latest version of the Red Hat AMQ Streams archive from the AMQ Streams download site . Procedure Download the latest version of the AMQ Streams archive ( amq-streams-x.y.z-bin.zip ) from the AMQ Streams download site . Unzip the file to any destination. Open a terminal, and start the Kafka console producer with the topic my-topic and the authentication properties for TLS: bin/kafka-console-producer.sh --broker-list ROUTE-ADDRESS :443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=./client.truststore.jks --topic my-topic Type your message into the console where the producer is running. Press Enter to send the message. Open a new terminal tab or window, and start the Kafka console consumer to receive the messages: bin/kafka-console-consumer.sh --bootstrap-server ROUTE-ADDRESS :443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=./client.truststore.jks --topic my-topic --from-beginning Confirm that you see the incoming messages in the consumer console. Press Ctrl+C to exit the Kafka console producer and consumer.
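If the producer or consumer cannot connect, it can help to verify the deployment from the OpenShift side first. The following checks are a minimal sketch and assume the namespace used throughout this chapter: run oc get pods -n my-kafka-project and confirm that the my-cluster-kafka-* , my-cluster-zookeeper-* , and my-cluster-entity-operator-* pods are in the Running state, and run oc get kafkatopic my-topic -n my-kafka-project to confirm that the KafkaTopic resource was created.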
|
[
"unzip amq-streams-x.y.z-ocp-install-examples.zip",
"login -u system:admin",
"new-project kafka",
"sed -i 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: kafka/' install/cluster-operator/*RoleBinding*.yaml",
"project kafka apply -f install/cluster-operator/",
"new-project my-kafka-project",
"adm policy add-role-to-user admin developer -n my-kafka-project",
"set env deploy/strimzi-cluster-operator STRIMZI_NAMESPACE=kafka,my-kafka-project -n kafka",
"apply -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n my-kafka-project",
"apply -f install/cluster-operator/032-RoleBinding-strimzi-cluster-operator-topic-operator-delegation.yaml -n my-kafka-project",
"apply -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n my-kafka-project",
"apply -f install/strimzi-admin",
"adm policy add-cluster-role-to-user strimzi-admin developer",
"login -u developer project my-kafka-project",
"cat << EOF | oc create -f - apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true - name: external port: 9094 type: route tls: true storage: type: ephemeral zookeeper: replicas: 3 storage: type: ephemeral entityOperator: topicOperator: {} EOF",
"wait my-kafka-project/my-cluster --for=condition=Ready --timeout=300s -n kafka",
"cat << EOF | oc create -f - apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaTopic metadata: name: my-topic labels: strimzi.io/cluster: \"my-cluster\" spec: partitions: 3 replicas: 3 EOF",
"get routes my-cluster-kafka-bootstrap -o=jsonpath='{.status.ingress[0].host}{\"\\n\"}'",
"extract secret/my-cluster-cluster-ca-cert --keys=ca.crt --to=- > ca.crt",
"keytool -keystore client.truststore.jks -alias CARoot -import -file ca.crt",
"bin/kafka-console-producer.sh --broker-list ROUTE-ADDRESS :443 --producer-property security.protocol=SSL --producer-property ssl.truststore.password=password --producer-property ssl.truststore.location=./client.truststore.jks --topic my-topic",
"bin/kafka-console-consumer.sh --bootstrap-server ROUTE-ADDRESS :443 --consumer-property security.protocol=SSL --consumer-property ssl.truststore.password=password --consumer-property ssl.truststore.location=./client.truststore.jks --topic my-topic --from-beginning"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/evaluating_amq_streams_on_openshift/assembly-evaluation-str
|
Chapter 5. Advanced topics
|
Chapter 5. Advanced topics This section covers topics that are beyond the scope of the introductory tutorial but are useful in real-world RPM packaging. 5.1. Signing RPM packages You can sign RPM packages to ensure that no third party can alter their content. To add an additional layer of security, use the HTTPS protocol when downloading the package. You can sign a package by using the --addsign option provided by the rpm-sign package. Prerequisites You have created a GNU Privacy Guard (GPG) key as described in Creating a GPG key . 5.1.1. Creating a GPG key Use the following procedure to create a GNU Privacy Guard (GPG) key required for signing packages. Procedure Generate a GPG key pair: Check the generated key pair: Export the public key: Replace <Key_name> with the real key name that you have selected. Import the exported public key into an RPM database: 5.1.2. Configuring RPM to sign a package To be able to sign an RPM package, you need to specify the %_gpg_name RPM macro. The following procedure describes how to configure RPM for signing a package. Procedure Define the %_gpg_name macro in your $HOME/.rpmmacros file as follows: Replace Key ID with the GNU Privacy Guard (GPG) key ID that you will use to sign a package. A valid GPG key ID value is either a full name or email address of the user who created the key. 5.1.3. Adding a signature to an RPM package In the most common case, a package is built without a signature. The signature is added just before the release of the package. To add a signature to an RPM package, use the --addsign option provided by the rpm-sign package. Procedure Add a signature to a package: Replace package-name with the name of an RPM package you want to sign. Note You must enter the password to unlock the secret key for the signature. 5.2. More on macros This section covers selected built-in RPM Macros. For an exhaustive list of such macros, see RPM Documentation . 5.2.1. Defining your own macros The following section describes how to create a custom macro. Procedure Include the following line in the RPM spec file: All whitespace surrounding <body> is removed. A name may be composed of alphanumeric characters and the character _ , and must be at least 3 characters in length. Inclusion of the (opts) field is optional: Simple macros do not contain the (opts) field. In this case, only recursive macro expansion is performed. Parametrized macros contain the (opts) field. The opts string between parentheses is passed to getopt(3) for argc/argv processing at the beginning of a macro invocation. Note Older RPM spec files use the %define <name> <body> macro pattern instead. The differences between %define and %global macros are as follows: %define has local scope. It applies to a specific part of a spec file. The body of a %define macro is expanded when used. %global has global scope. It applies to an entire spec file. The body of a %global macro is expanded at definition time. Important Macros are evaluated even if they are commented out or the name of the macro is given in the %changelog section of the spec file. To comment out a macro, use %% . For example: %%global . Additional resources Macro syntax 5.2.2. Using the %setup macro This section describes how to build packages with source code tarballs using different variants of the %setup macro. Note that the macro variants can be combined. The rpmbuild output illustrates standard behavior of the %setup macro. At the beginning of each phase, the macro outputs Executing(%... ) , as shown in the below example. 
Example 5.1. Example %setup macro output The shell output is set with set -x enabled. To see the content of /var/tmp/rpm-tmp.DhddsG , use the --debug option because rpmbuild deletes temporary files after a successful build. This displays the setup of environment variables followed by for example: The %setup macro: Ensures that we are working in the correct directory. Removes residues of builds. Unpacks the source tarball. Sets up some default privileges. 5.2.2.1. Using the %setup -q macro The -q option limits the verbosity of the %setup macro. Only tar -xof is executed instead of tar -xvvof . Use this option as the first option. 5.2.2.2. Using the %setup -n macro The -n option is used to specify the name of the directory from expanded tarball. This is used in cases when the directory from expanded tarball has a different name from what is expected ( %{name}-%{version} ), which can lead to an error of the %setup macro. For example, if the package name is cello , but the source code is archived in hello-1.0.tgz and contains the hello/ directory, the spec file content needs to be as follows: 5.2.2.3. Using the %setup -c macro The -c option is used if the source code tarball does not contain any subdirectories and after unpacking, files from an archive fills the current directory. The -c option then creates the directory and steps into the archive expansion as shown below: The directory is not changed after archive expansion. 5.2.2.4. Using the %setup -D and %setup -T macros The -D option disables deleting of source code directory, and is particularly useful if the %setup macro is used several times. With the -D option, the following lines are not used: The -T option disables expansion of the source code tarball by removing the following line from the script: 5.2.2.5. Using the %setup -a and %setup -b macros The -a and -b options expand specific sources: The -b option stands for before . This option expands specific sources before entering the working directory. The -a option stands for after . This option expands those sources after entering. Their arguments are source numbers from the spec file preamble. In the following example, the cello-1.0.tar.gz archive contains an empty examples directory. The examples are shipped in a separate examples.tar.gz tarball and they expand into the directory of the same name. In this case, use -a 1 if you want to expand Source1 after entering the working directory: In the following example, examples are provided in a separate cello-1.0-examples.tar.gz tarball, which expands into cello-1.0/examples . In this case, use -b 1 to expand Source1 before entering the working directory: 5.2.3. Common RPM macros in the %files section The following table lists advanced RPM Macros that are needed in the %files section of a spec file. Table 5.1. Advanced RPM Macros in the %files section Macro Definition %license The %license macro identifies the file listed as a LICENSE file and it will be installed and labeled as such by RPM. Example: %license LICENSE . %doc The %doc macro identifies a file listed as documentation and it will be installed and labeled as such by RPM. The %doc macro is used for documentation about the packaged software and also for code examples and various accompanying items. If code examples are included, care must be taken to remove executable mode from the file. Example: %doc README %dir The %dir macro ensures that the path is a directory owned by this RPM. 
This is important so that the RPM file manifest accurately knows what directories to clean up on uninstall. Example: %dir %{_libdir}/%{name} %config(noreplace) The %config(noreplace) macro ensures that the following file is a configuration file and therefore should not be overwritten (or replaced) on a package install or update if the file has been modified from the original installation checksum. If there is a change, the file will be created with .rpmnew appended to the end of the filename upon upgrade or install so that the pre-existing or modified file on the target system is not modified. Example: %config(noreplace) %{_sysconfdir}/%{name}/%{name}.conf 5.2.4. Displaying the built-in macros Red Hat Enterprise Linux provides multiple built-in RPM macros. Procedure To display all built-in RPM macros, run: Note The output is quite sizeable. To narrow the result, use the command above with the grep command. To find information about the RPM macros for your system's version of RPM, run: Note RPM macros are the files titled macros in the output directory structure. 5.2.5. RPM distribution macros Different distributions provide different sets of recommended RPM macros based on the language implementation of the software being packaged or the specific guidelines of the distribution. The sets of recommended RPM macros are often provided as RPM packages, ready to be installed with the dnf package manager. Once installed, the macro files can be found in the /usr/lib/rpm/macros.d/ directory. Procedure To display the raw RPM macro definitions, run: The above output displays the raw RPM macro definitions. To determine what a macro does and how it can be helpful when packaging RPMs, run the rpm --eval command with the name of the macro used as its argument: Additional resources rpm man page 5.2.6. Creating custom macros You can override the distribution macros in the ~/.rpmmacros file with your custom macros. Any changes that you make affect every build on your machine. Warning Defining any new macros in the ~/.rpmmacros file is not recommended. Such macros would not be present on other machines, where users may want to try to rebuild your package. Procedure To override a macro, run: You can create the directory from the example above, including all subdirectories through the rpmdev-setuptree utility. The value of this macro is by default ~/rpmbuild . The macro above is often passed to Makefile, for example make %{?_smp_mflags} , and is used to set the number of concurrent processes during the build phase. By default, it is set to -jX , where X is the number of cores. If you alter the number of cores, you can speed up or slow down a build of packages. 5.3. Epoch, Scriptlets and Triggers This section covers Epoch , Scriptlets , and Triggers , which represent advanced directives for RPM spec files. All these directives influence not only the spec file, but also the end machine on which the resulting RPM is installed. 5.3.1. The Epoch directive The Epoch directive enables you to define weighted dependencies based on version numbers. If this directive is not listed in the RPM spec file, the Epoch directive is not set at all. This is contrary to common belief that not setting Epoch results in an Epoch of 0. However, the dnf utility treats an unset Epoch as the same as an Epoch of 0 for the purposes of depsolving. However, listing Epoch in a spec file is usually omitted because in the majority of cases introducing an Epoch value skews the expected RPM behavior when comparing versions of packages. Example 5.2. 
Using Epoch If you install the foobar package with Epoch: 1 and Version: 1.0 , and someone else packages foobar with Version: 2.0 but without the Epoch directive, the new version will never be considered an update. This is because the Epoch version is preferred over the traditional Name-Version-Release marker that signifies versioning for RPM Packages. Use of Epoch is thus quite rare. However, Epoch is typically used to resolve an upgrade ordering issue. The issue can appear as a side effect of an upstream change in software version number schemes or versions incorporating alphabetical characters that cannot always be compared reliably based on encoding. 5.3.2. Scriptlets directives Scriptlets are a series of RPM directives that are executed before or after packages are installed or deleted. Use Scriptlets only for tasks that cannot be done at build time or in a start-up script. A set of common Scriptlet directives exists. They are similar to the spec file section headers, such as %build or %install . They are defined by multi-line segments of code, which are often written as a standard POSIX shell script. However, they can also be written in other programming languages that RPM for the target machine's distribution accepts. RPM Documentation includes an exhaustive list of available languages. The following table includes Scriptlet directives listed in their execution order. Note that a package containing the scripts is installed between the %pre and %post directives, and it is uninstalled between the %preun and %postun directives. Table 5.2. Scriptlet directives Directive Definition %pretrans Scriptlet that is executed just before installing or removing any package. %pre Scriptlet that is executed just before installing the package on the target system. %post Scriptlet that is executed just after the package was installed on the target system. %preun Scriptlet that is executed just before uninstalling the package from the target system. %postun Scriptlet that is executed just after the package was uninstalled from the target system. %posttrans Scriptlet that is executed at the end of the transaction. 5.3.3. Turning off a scriptlet execution The following procedure describes how to turn off the execution of any scriptlet using the rpm command together with the --no_scriptlet_name_ option. Procedure For example, to turn off the execution of the %pretrans scriptlets, run: You can also use the --noscripts option, which is equivalent to all of the following: --nopre --nopost --nopreun --nopostun --nopretrans --noposttrans Additional resources rpm(8) man page. 5.3.4. Scriptlets macros The Scriptlets directives also work with RPM macros. The following example shows the use of the systemd scriptlet macro, which ensures that systemd is notified about a new unit file. 5.3.5. The Triggers directives Triggers are RPM directives which provide a method for interaction during package installation and uninstallation. Warning Triggers may be executed at an unexpected time, for example on update of the containing package. Triggers are difficult to debug, therefore they need to be implemented in a robust way so that they do not break anything when executed unexpectedly. For these reasons, Red Hat recommends minimizing the use of Triggers . The order of execution on a single package upgrade and the details for each existing Trigger are listed below: The above items are found in the /usr/share/doc/rpm-4.*/triggers file. 5.3.6. 
Using non-shell scripts in a spec file The -p scriptlet option in a spec file enables the user to invoke a specific interpreter instead of the default shell scripts interpreter ( -p /bin/sh ). The following procedure describes how to create a script, which prints out a message after installation of the pello.py program: Procedure Open the pello.spec file. Find the following line: Under the above line, insert: Build your package as described in Building RPMs . Install your package: Check the output message after the installation: Note To use a Python 3 script, include the following line under install -m in a spec file: To use a Lua script, include the following line under install -m in a SPEC file: This way, you can specify any interpreter in a spec file. 5.4. RPM conditionals RPM Conditionals enable conditional inclusion of various sections of the spec file. Conditional inclusions usually deal with: Architecture-specific sections Operating system-specific sections Compatibility issues between various versions of operating systems Existence and definition of macros 5.4.1. RPM conditionals syntax RPM conditionals use the following syntax: If expression is true, then do some action: If expression is true, then do some action, in other case, do another action: 5.4.2. The %if conditionals The following examples shows the usage of %if RPM conditionals. Example 5.3. Using the %if conditional to handle compatibility between Red Hat Enterprise Linux 8 and other operating systems This conditional handles compatibility between RHEL 8 and other operating systems in terms of support of the AS_FUNCTION_DESCRIBE macro. If the package is built for RHEL, the %rhel macro is defined, and it is expanded to RHEL version. If its value is 8, meaning the package is build for RHEL 8, then the references to AS_FUNCTION_DESCRIBE, which is not supported by RHEL 8, are deleted from autoconfig scripts. Example 5.4. Using the %if conditional to handle definition of macros This conditional handles definition of macros. If the %milestone or the %revision macros are set, the %ruby_archive macro, which defines the name of the upstream tarball, is redefined. 5.4.3. Specialized variants of %if conditionals The %ifarch conditional, %ifnarch conditional and %ifos conditional are specialized variants of the %if conditionals. These variants are commonly used, hence they have their own macros. The %ifarch conditional The %ifarch conditional is used to begin a block of the spec file that is architecture-specific. It is followed by one or more architecture specifiers, each separated by commas or whitespace. Example 5.5. An example use of the %ifarch conditional All the contents of the spec file between %ifarch and %endif are processed only on the 32-bit AMD and Intel architectures or Sun SPARC-based systems. The %ifnarch conditional The %ifnarch conditional has a reverse logic than %ifarch conditional. Example 5.6. An example use of the %ifnarch conditional All the contents of the spec file between %ifnarch and %endif are processed only if not done on a Digital Alpha/AXP-based system. The %ifos conditional The %ifos conditional is used to control processing based on the operating system of the build. It can be followed by one or more operating system names. Example 5.7. An example use of the %ifos conditional All the contents of the spec file between %ifos and %endif are processed only if the build was done on a Linux system. 5.5. 
Packaging Python 3 RPMs You can install Python packages on your system either from the upstream PyPI repository using the pip installer, or using the DNF package manager. DNF uses the RPM package format, which offers more downstream control over the software. The packaging format of native Python packages is defined by Python Packaging Authority (PyPA) Specifications . Most Python projects use the distutils or setuptools utilities for packaging, and defined package information in the setup.py file. However, possibilities of creating native Python packages have evolved over time. For more information about emerging packaging standards, see pyproject-rpm-macros . This chapter describes how to package a Python project that uses setup.py into an RPM package. This approach provides the following advantages compared to native Python packages: Dependencies on Python and non-Python packages are possible and strictly enforced by the DNF package manager. You can cryptographically sign the packages. With cryptographic signing, you can verify, integrate, and test content of RPM packages with the rest of the operating system. You can execute tests during the build process. 5.5.1. SPEC file description for a Python package A SPEC file contains instructions that the rpmbuild utility uses to build an RPM. The instructions are included in a series of sections. A SPEC file has two main parts in which the sections are defined: Preamble (contains a series of metadata items that are used in the Body) Body (contains the main part of the instructions) An RPM SPEC file for Python projects has some specifics compared to non-Python RPM SPEC files. Important A name of any RPM package of a Python library must always include the python3- , python3.11- , or python3.12- prefix. Other specifics are shown in the following SPEC file example for the python3*-pello package. For description of such specifics, see the notes below the example. An example spec file for the pello program written in Python %global python3_pkgversion 3.11 1 Name: python-pello 2 Version: 1.0.2 Release: 1%{?dist} Summary: Example Python library License: MIT URL: https://github.com/fedora-python/Pello Source: %{url}/archive/v%{version}/Pello-%{version}.tar.gz BuildArch: noarch BuildRequires: python%{python3_pkgversion}-devel 3 # Build dependencies needed to be specified manually BuildRequires: python%{python3_pkgversion}-setuptools # Test dependencies needed to be specified manually # Also runtime dependencies need to be BuildRequired manually to run tests during build BuildRequires: python%{python3_pkgversion}-pytest >= 3 %global _description %{expand: Pello is an example package with an executable that prints Hello World! on the command line.} %description %_description %package -n python%{python3_pkgversion}-pello 4 Summary: %{summary} %description -n python%{python3_pkgversion}-pello %_description %prep %autosetup -p1 -n Pello-%{version} %build # The macro only supported projects with setup.py %py3_build 5 %install # The macro only supported projects with setup.py %py3_install %check 6 %{pytest} # Note that there is no %%files section for the unversioned python module %files -n python%{python3_pkgversion}-pello %doc README.md %license LICENSE.txt %{_bindir}/pello_greeting # The library files needed to be listed manually %{python3_sitelib}/pello/ # The metadata files needed to be listed manually %{python3_sitelib}/Pello-*.egg-info/ 1 By defining the python3_pkgversion macro, you set which Python version this package will be built for. 
To build for the default Python version 3.9, either set the macro to its default value 3 or remove the line entirely. 2 When packaging a Python project into RPM, always add the python- prefix to the original name of the project. The original name here is pello and, therefore, the name of the Source RPM (SRPM) is python-pello . 3 The BuildRequires directive specifies what packages are required to build and test this package. In BuildRequires , always include items providing tools necessary for building Python packages: python3-devel (or python3.11-devel or python3.12-devel ) and the relevant projects needed by the specific software that you package, for example, python3-setuptools (or python3.11-setuptools or python3.12-setuptools ) or the runtime and testing dependencies needed to run the tests in the %check section. 4 When choosing a name for the binary RPM (the package that users will be able to install), add a versioned Python prefix. Use the python3- prefix for the default Python 3.9, the python3.11- prefix for Python 3.11, or the python3.12- prefix for Python 3.12. You can use the %{python3_pkgversion} macro, which evaluates to 3 for the default Python version 3.9 unless you set it to an explicit version, for example, 3.11 (see footnote 1). 5 The %py3_build and %py3_install macros run the setup.py build and setup.py install commands, respectively, with additional arguments to specify installation locations, the interpreter to use, and other details. 6 The %check section should run the tests of the packaged project. The exact command depends on the project itself, but it is possible to use the %pytest macro to run the pytest command in an RPM-friendly way. 5.5.2. Common macros for Python 3 RPMs In a SPEC file, always use the macros that are described in the following Macros for Python 3 RPMs table rather than hardcoding their values. You can redefine which Python 3 version is used in these macros by defining the python3_pkgversion macro on top of your SPEC file (see Section 5.5.1, "SPEC file description for a Python package" ). If you define the python3_pkgversion macro, the values of the macros described in the following table will reflect the specified Python 3 version. Table 5.3. Macros for Python 3 RPMs Macro Normal Definition Description %{python3_pkgversion} 3 The Python version that is used by all other macros. Can be redefined to 3.11 to use Python 3.11, or to 3.12 to use Python 3.12 %{python3} /usr/bin/python3 The Python 3 interpreter %{python3_version} 3.9 The major.minor version of the Python 3 interpreter %{python3_sitelib} /usr/lib/python3.9/site-packages The location where pure-Python modules are installed %{python3_sitearch} /usr/lib64/python3.9/site-packages The location where modules containing architecture-specific extension modules are installed %py3_build Runs the setup.py build command with arguments suitable for an RPM package %py3_install Runs the setup.py install command with arguments suitable for an RPM package %{py3_shebang_flags} s The default set of flags for the Python interpreter directives macro, %py3_shebang_fix %py3_shebang_fix Changes Python interpreter directives to #! %{python3} , preserves any existing flags (if found), and adds flags defined in the %{py3_shebang_flags} macro Additional resources Python macros in upstream documentation 5.5.3. Using automatically generated dependencies for Python RPMs The following procedure describes how to use automatically generated dependencies when packaging a Python project as an RPM. 
Prerequisites A SPEC file for the RPM exists. For more information, see SPEC file description for a Python package . Procedure Make sure that one of the following directories containing upstream-provided metadata is included in the resulting RPM: .dist-info .egg-info The RPM build process automatically generates virtual pythonX.Ydist provides from these directories, for example: The Python dependency generator then reads the upstream metadata and generates runtime requirements for each RPM package using the generated pythonX.Ydist virtual provides. For example, a generated requirements tag might look as follows: Inspect the generated requires. To remove some of the generated requires, use one of the following approaches: Modify the upstream-provided metadata in the %prep section of the SPEC file. Use automatic filtering of dependencies described in the upstream documentation . To disable the automatic dependency generator, include the %{?python_disable_dependency_generator} macro above the main package's %description declaration. Additional resources Automatically generated dependencies 5.6. Handling interpreter directives in Python scripts In Red Hat Enterprise Linux 9, executable Python scripts are expected to use interpreter directives (also known as hashbangs or shebangs) that explicitly specify at a minimum the major Python version. For example: The /usr/lib/rpm/redhat/brp-mangle-shebangs buildroot policy (BRP) script is run automatically when building any RPM package, and attempts to correct interpreter directives in all executable files. The BRP script generates errors when encountering a Python script with an ambiguous interpreter directive, such as: or 5.6.1. Modifying interpreter directives in Python scripts Use the following procedure to modify interpreter directives in Python scripts that cause build errors at RPM build time. Prerequisites Some of the interpreter directives in your Python scripts cause a build error. Procedure To modify interpreter directives, complete one of the following tasks: Use the following macro in the %prep section of your SPEC file: SCRIPTNAME can be any file, directory, or a list of files and directories. As a result, all listed files and all .py files in listed directories will have their interpreter directives modified to point to %{python3} . Existing flags from the original interpreter directive will be preserved and additional flags defined in the %{py3_shebang_flags} macro will be added. You can redefine the %{py3_shebang_flags} macro in your SPEC file to change the flags that will be added. Apply the pathfix.py script from the python3-devel package: You can specify multiple paths. If a PATH is a directory, pathfix.py recursively scans for any Python scripts matching the pattern ^[a-zA-Z0-9_]+\.pyUSD , not only those with an ambiguous interpreter directive. Add the command above to the %prep section or at the end of the %install section. Modify the packaged Python scripts so that they conform to the expected format. For this purpose, you can use the pathfix.py script outside the RPM build process, too. When running pathfix.py outside an RPM build, replace %{python3} from the preceding example with a path for the interpreter directive, such as /usr/bin/python3 or /usr/bin/python3.11 . Additional resources Interpreter invocation 5.7. RubyGems packages This section explains what RubyGems packages are, and how to re-package them into RPM. 5.7.1. 
What RubyGems are Ruby is a dynamic, interpreted, reflective, object-oriented, general-purpose programming language. Programs written in Ruby are typically packaged using the RubyGems project, which provides a specific Ruby packaging format. Packages created by RubyGems are called gems, and they can be re-packaged into RPM as well. Note This documentation refers to terms related to the RubyGems concept with the gem prefix, for example .gemspec is used for the gem specification , and terms related to RPM are unqualified. 5.7.2. How RubyGems relate to RPM RubyGems represent Ruby's own packaging format. However, RubyGems contain metadata similar to those needed by RPM, which enables the conversion from RubyGems to RPM. According to Ruby Packaging Guidelines , it is possible to re-package RubyGems packages into RPM in this way: Such RPMs fit with the rest of the distribution. End users are able to satisfy dependencies of a gem by installing the appropriate RPM-packaged gem. RubyGems use similar terminology to RPM, such as spec files, package names, dependencies and other items. To fit into the rest of the RHEL RPM distribution, packages created by RubyGems must follow the conventions listed below: Names of gems must follow this pattern: To implement a shebang line, the following string must be used: 5.7.3. Creating RPM packages from RubyGems packages To create a source RPM for a RubyGems package, the following files are needed: A gem file An RPM spec file The following sections describe how to create RPM packages from packages created by RubyGems. 5.7.3.1. RubyGems spec file conventions A RubyGems spec file must meet the following conventions: Contain a definition of %{gem_name} , which is the name from the gem's specification. The source of the package must be the full URL to the released gem archive; the version of the package must be the gem's version. Contain the BuildRequires: directive defined as follows to be able to pull in the macros needed to build. Not contain any RubyGems Requires or Provides , because those are autogenerated. Not contain the BuildRequires: directive defined as follows, unless you want to explicitly specify Ruby version compatibility: The automatically generated dependency on RubyGems ( Requires: ruby(rubygems) ) is sufficient. 5.7.3.2. RubyGems macros The following table lists macros useful for packages created by RubyGems. These macros are provided by the rubygems-devel package. Table 5.4. RubyGems' macros Macro name Extended path Usage %{gem_dir} /usr/share/gems Top directory for the gem structure. %{gem_instdir} %{gem_dir}/gems/%{gem_name}-%{version} Directory with the actual content of the gem. %{gem_libdir} %{gem_instdir}/lib The library directory of the gem. %{gem_cache} %{gem_dir}/cache/%{gem_name}-%{version}.gem The cached gem. %{gem_spec} %{gem_dir}/specifications/%{gem_name}-%{version}.gemspec The gem specification file. %{gem_docdir} %{gem_dir}/doc/%{gem_name}-%{version} The RDoc documentation of the gem. %{gem_extdir_mri} %{_libdir}/gems/ruby/%{gem_name}-%{version} The directory for gem extensions. 5.7.3.3. RubyGems spec file example An example spec file for building gems, together with an explanation of its particular sections, follows. An example RubyGems spec file The following table explains the specifics of particular items in a RubyGems spec file: Table 5.5. RubyGems' spec directives specifics Directive RubyGems specifics %prep RPM can directly unpack gem archives, so you can run the gem unpack command to extract the source from the gem. 
The %setup -n %{gem_name}-%{version} macro provides the directory into which the gem has been unpacked. At the same directory level, the %{gem_name}-%{version}.gemspec file is automatically created, which can be used to rebuild the gem later, to modify the .gemspec , or to apply patches to the code. %build This directive includes commands or series of commands for building the software into machine code. The %gem_install macro operates only on gem archives, and the gem is recreated with the gem build. The gem file that is created is then used by %gem_install to build and install the code into the temporary directory, which is ./%{gem_dir} by default. The %gem_install macro both builds and installs the code in one step. Before being installed, the built sources are placed into a temporary directory that is created automatically. The %gem_install macro accepts two additional options: -n <gem_file> , which allows to override gem used for installation, and -d <install_dir> , which might override the gem installation destination; using this option is not recommended. The %gem_install macro must not be used to install into the %{buildroot} . %install The installation is performed into the %{buildroot} hierarchy. You can create the directories that you need and then copy what was installed in the temporary directories into the %{buildroot} hierarchy. If this gem creates shared objects, they are moved into the architecture-specific %{gem_extdir_mri} path. Additional resources Ruby Packaging Guidelines 5.7.3.4. Converting RubyGems packages to RPM spec files with gem2rpm The gem2rpm utility converts RubyGems packages to RPM spec files. The following sections describe how to: Install the gem2rpm utility Display all gem2rpm options Use gem2rpm to convert RubyGems packages to RPM spec files Edit gem2rpm templates 5.7.3.4.1. Installing gem2rpm The following procedure describes how to install the gem2rpm utility. Procedure To install gem2rpm from RubyGems.org , run: 5.7.3.4.2. Displaying all options of gem2rpm The following procedure describes how to display all options of the gem2rpm utility. Procedure To see all options of gem2rpm , run: 5.7.3.4.3. Using gem2rpm to convert RubyGems packages to RPM spec files The following procedure describes how to use the gem2rpm utility to convert RubyGems packages to RPM spec files. Procedure Download a gem in its latest version, and generate the RPM spec file for this gem: The described procedure creates an RPM spec file based on the information provided in the gem's metadata. However, the gem misses some important information that is usually provided in RPMs, such as the license and the changelog. The generated spec file thus needs to be edited. 5.7.3.4.4. gem2rpm templates The gem2rpm template is a standard Embedded Ruby (ERB) file, which includes variables listed in the following table. Table 5.6. Variables in the gem2rpm template Variable Explanation package The Gem::Package variable for the gem. spec The Gem::Specification variable for the gem (the same as format.spec). config The Gem2Rpm::Configuration variable that can redefine default macros or rules used in spec template helpers. runtime_dependencies The Gem2Rpm::RpmDependencyList variable providing a list of package runtime dependencies. development_dependencies The Gem2Rpm::RpmDependencyList variable providing a list of package development dependencies. tests The Gem2Rpm::TestSuite variable providing a list of test frameworks allowing their execution. 
files The Gem2Rpm::RpmFileList variable providing an unfiltered list of files in a package. main_files The Gem2Rpm::RpmFileList variable providing a list of files suitable for the main package. doc_files The Gem2Rpm::RpmFileList variable providing a list of files suitable for the -doc subpackage. format The Gem::Format variable for the gem. Note that this variable is now deprecated. 5.7.3.4.5. Listing available gem2rpm templates Use the following procedure to list all available gem2rpm templates. Procedure To see all available templates, run: 5.7.3.4.6. Editing gem2rpm templates You can edit the template from which the RPM spec file is generated instead of editing the generated spec file. Use the following procedure to edit the gem2rpm templates. Procedure Save the default template: Edit the template as needed. Generate the spec file by using the edited template: You can now build an RPM package by using the edited template as described in Building RPMs . 5.8. How to handle RPM packages with Perl scripts Since RHEL 8, the Perl programming language is not included in the default buildroot. Therefore, the RPM packages that include Perl scripts must explicitly indicate the dependency on Perl using the BuildRequires: directive in the RPM spec file. 5.8.1. Common Perl-related dependencies The most frequently occurring Perl-related build dependencies used in BuildRequires: are: perl-generators Automatically generates run-time Requires and Provides for installed Perl files. If your package installs a Perl script or a Perl module, you must include a build dependency on this package. perl-interpreter The Perl interpreter must be listed as a build dependency if it is called in any way, either explicitly via the perl package or the %__perl macro, or as a part of your package's build system. perl-devel Provides Perl header files. If building architecture-specific code which links to the libperl.so library, such as an XS Perl module, you must include BuildRequires: perl-devel . 5.8.2. Using a specific Perl module If a specific Perl module is required at build time, use the following procedure: Procedure Apply the following syntax in your RPM spec file: Note Apply this syntax to Perl core modules as well, because they can move in and out of the perl package over time. 5.8.3. Limiting a package to a specific Perl version To limit your package to a specific Perl version, follow this procedure: Procedure Use the perl(:VERSION) dependency with the desired version constraint in your RPM spec file: For example, to limit a package to Perl version 5.30 and higher, use: Warning Do not use a comparison against the version of the perl package because it includes an epoch number. 5.8.4. Ensuring that a package uses the correct Perl interpreter Red Hat provides multiple Perl interpreters, which are not fully compatible. Therefore, any package that delivers a Perl module must use at run time the same Perl interpreter that was used at build time. To ensure this, follow the procedure below: Procedure Include versioned MODULE_COMPAT Requires in the RPM spec file for any package that delivers a Perl module:
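A commonly used form of this requirement is shown below as a sketch; the exact macro recommended for your RHEL release may differ, so verify it against the current Perl packaging guidelines: Requires: perl(:MODULE_COMPAT_%(eval "`perl -V:version`"; echo $version)) The macro expands at build time to the version of the Perl interpreter found in the buildroot, so the resulting binary RPM can be installed only together with a matching interpreter.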
|
[
"gpg --gen-key",
"gpg --list-keys",
"gpg --export -a '<Key_name>' > RPM-GPG-KEY-pmanager",
"rpm --import RPM-GPG-KEY-pmanager",
"%_gpg_name Key ID",
"rpm --addsign package-name .rpm",
"%global <name>[(opts)] <body>",
"Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.DhddsG",
"cd '/builddir/build/BUILD' rm -rf 'cello-1.0' /usr/bin/gzip -dc '/builddir/build/SOURCES/cello-1.0.tar.gz' | /usr/bin/tar -xof - STATUS=USD? if [ USDSTATUS -ne 0 ]; then exit USDSTATUS fi cd 'cello-1.0' /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w .",
"Name: cello Source0: https://example.com/%{name}/release/hello-%{version}.tar.gz ... %prep %setup -n hello",
"/usr/bin/mkdir -p cello-1.0 cd 'cello-1.0'",
"rm -rf 'cello-1.0'",
"/usr/bin/gzip -dc '/builddir/build/SOURCES/cello-1.0.tar.gz' | /usr/bin/tar -xvvof -",
"Source0: https://example.com/%{name}/release/%{name}-%{version}.tar.gz Source1: examples.tar.gz ... %prep %setup -a 1",
"Source0: https://example.com/%{name}/release/%{name}-%{version}.tar.gz Source1: %{name}-%{version}-examples.tar.gz ... %prep %setup -b 1",
"--showrc",
"-ql rpm",
"--showrc",
"--eval %{_MACRO}",
"%_topdir /opt/some/working/directory/rpmbuild",
"%_smp_mflags -l3",
"rpm --nopretrans",
"rpm --showrc | grep systemd -14: __transaction_systemd_inhibit %{__plugindir}/systemd_inhibit.so -14: _journalcatalogdir /usr/lib/systemd/catalog -14: _presetdir /usr/lib/systemd/system-preset -14: _unitdir /usr/lib/systemd/system -14: _userunitdir /usr/lib/systemd/user /usr/lib/systemd/systemd-binfmt %{?*} >/dev/null 2>&1 || : /usr/lib/systemd/systemd-sysctl %{?*} >/dev/null 2>&1 || : -14: systemd_post -14: systemd_postun -14: systemd_postun_with_restart -14: systemd_preun -14: systemd_requires Requires(post): systemd Requires(preun): systemd Requires(postun): systemd -14: systemd_user_post %systemd_post --user --global %{?*} -14: systemd_user_postun %{nil} -14: systemd_user_postun_with_restart %{nil} -14: systemd_user_preun systemd-sysusers %{?*} >/dev/null 2>&1 || : echo %{?*} | systemd-sysusers - >/dev/null 2>&1 || : systemd-tmpfiles --create %{?*} >/dev/null 2>&1 || : rpm --eval %{systemd_post} if [ USD1 -eq 1 ] ; then # Initial installation systemctl preset >/dev/null 2>&1 || : fi rpm --eval %{systemd_postun} systemctl daemon-reload >/dev/null 2>&1 || : rpm --eval %{systemd_preun} if [ USD1 -eq 0 ] ; then # Package removal, not upgrade systemctl --no-reload disable > /dev/null 2>&1 || : systemctl stop > /dev/null 2>&1 || : fi",
"all-%pretrans ... any-%triggerprein (%triggerprein from other packages set off by new install) new-%triggerprein new-%pre for new version of package being installed ... (all new files are installed) new-%post for new version of package being installed any-%triggerin (%triggerin from other packages set off by new install) new-%triggerin old-%triggerun any-%triggerun (%triggerun from other packages set off by old uninstall) old-%preun for old version of package being removed ... (all old files are removed) old-%postun for old version of package being removed old-%triggerpostun any-%triggerpostun (%triggerpostun from other packages set off by old un install) ... all-%posttrans",
"install -m 0644 %{name}.py* %{buildroot}/usr/lib/%{name}/",
"%post -p /usr/bin/python3 print(\"This is {} code\".format(\"python\"))",
"dnf install /home/<username>/rpmbuild/RPMS/noarch/pello-0.1.2-1.el8.noarch.rpm",
"Installing : pello-0.1.2-1.el8.noarch 1/1 Running scriptlet: pello-0.1.2-1.el8.noarch 1/1 This is python code",
"%post -p /usr/bin/python3",
"%post -p <lua>",
"%if expression ... %endif",
"%if expression ... %else ... %endif",
"%if 0%{?rhel} == 8 sed -i '/AS_FUNCTION_DESCRIBE/ s/^/#/' configure.in sed -i '/AS_FUNCTION_DESCRIBE/ s/^/#/' acinclude.m4 %endif",
"%define ruby_archive %{name}-%{ruby_version} %if 0%{?milestone:1}%{?revision:1} != 0 %define ruby_archive %{ruby_archive}-%{?milestone}%{?!milestone:%{?revision:r%{revision}}} %endif",
"%ifarch i386 sparc ... %endif",
"%ifnarch alpha ... %endif",
"%ifos linux ... %endif",
"%global python3_pkgversion 3.11 1 Name: python-pello 2 Version: 1.0.2 Release: 1%{?dist} Summary: Example Python library License: MIT URL: https://github.com/fedora-python/Pello Source: %{url}/archive/v%{version}/Pello-%{version}.tar.gz BuildArch: noarch BuildRequires: python%{python3_pkgversion}-devel 3 Build dependencies needed to be specified manually BuildRequires: python%{python3_pkgversion}-setuptools Test dependencies needed to be specified manually Also runtime dependencies need to be BuildRequired manually to run tests during build BuildRequires: python%{python3_pkgversion}-pytest >= 3 %global _description %{expand: Pello is an example package with an executable that prints Hello World! on the command line.} %description %_description %package -n python%{python3_pkgversion}-pello 4 Summary: %{summary} %description -n python%{python3_pkgversion}-pello %_description %prep %autosetup -p1 -n Pello-%{version} %build The macro only supported projects with setup.py %py3_build 5 %install The macro only supported projects with setup.py %py3_install %check 6 %{pytest} Note that there is no %%files section for the unversioned python module %files -n python%{python3_pkgversion}-pello %doc README.md %license LICENSE.txt %{_bindir}/pello_greeting The library files needed to be listed manually %{python3_sitelib}/pello/ The metadata files needed to be listed manually %{python3_sitelib}/Pello-*.egg-info/",
"python3.9dist(pello)",
"Requires: python3.9dist(requests)",
"#!/usr/bin/python3 #!/usr/bin/python3.9 #!/usr/bin/python3.11 #!/usr/bin/python3.12",
"#!/usr/bin/python",
"#!/usr/bin/env python",
"%py3_shebang_fix SCRIPTNAME ...",
"pathfix.py -pn -i %{python3} PATH ...",
"rubygem-%{gem_name}",
"#!/usr/bin/ruby",
"BuildRequires:rubygems-devel",
"Requires: ruby(release)",
"%prep %setup -q -n %{gem_name}-%{version} Modify the gemspec if necessary Also apply patches to code if necessary %patch0 -p1 %build Create the gem as gem install only works on a gem file gem build ../%{gem_name}-%{version}.gemspec %%gem_install compiles any C extensions and installs the gem into ./%%gem_dir by default, so that we can move it into the buildroot in %%install %gem_install %install mkdir -p %{buildroot}%{gem_dir} cp -a ./%{gem_dir}/* %{buildroot}%{gem_dir}/ If there were programs installed: mkdir -p %{buildroot}%{_bindir} cp -a ./%{_bindir}/* %{buildroot}%{_bindir} If there are C extensions, copy them to the extdir. mkdir -p %{buildroot}%{gem_extdir_mri} cp -a .%{gem_extdir_mri}/{gem.build_complete,*.so} %{buildroot}%{gem_extdir_mri}/",
"gem install gem2rpm",
"gem2rpm --help",
"gem2rpm --fetch <gem_name> > <gem_name>.spec",
"gem2rpm --templates",
"gem2rpm -T > rubygem-<gem_name>.spec.template",
"gem2rpm -t rubygem-<gem_name>.spec.template <gem_name>-<latest_version.gem > <gem_name>-GEM.spec",
"BuildRequires: perl(MODULE)",
"BuildRequires: perl(:VERSION) >= 5.30",
"Requires: perl(:MODULE_COMPAT_%(eval `perl -V:version`; echo USDversion))"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/packaging_and_distributing_software/advanced-topics
|
Chapter 2. Acknowledgments
|
Chapter 2. Acknowledgments The Red Hat Ceph Storage 5 project is seeing amazing growth in the quality and quantity of contributions from individuals and organizations in the Ceph community. We would like to thank all members of the Red Hat Ceph Storage team, all of the individual contributors in the Ceph community, and the contributions from organizations such as (but not limited to): Intel(R) Fujitsu(R) UnitedStack Yahoo(TM) Ubuntu Kylin Mellanox(R) CERN(TM) Deutsche Telekom Mirantis(R) SanDisk(TM) SUSE
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.2_release_notes/acknowledgments
|
Chapter 43. Writing Handlers
|
Chapter 43. Writing Handlers Abstract JAX-WS provides a flexible plug-in framework for adding message processing modules to an application. These modules, known as handlers, are independent of the application level code and can provide low-level message processing capabilities. 43.1. Handlers: An Introduction Overview When a service proxy invokes an operation on a service, the operation's parameters are passed to Apache CXF where they are built into a message and placed on the wire. When the message is received by the service, Apache CXF reads the message from the wire, reconstructs the message, and then passes the operation parameters to the application code responsible for implementing the operation. When the application code is finished processing the request, the reply message undergoes a similar chain of events on its trip to the service proxy that originated the request. This is shown in Figure 43.1, "Message Exchange Path" . Figure 43.1. Message Exchange Path JAX-WS defines a mechanism for manipulating the message data between the application level code and the network. For example, you might want the message data passed over the open network to be encrypted using a proprietary encryption mechanism. You could write a JAX-WS handler that encrypted and decrypted the data. Then you could insert the handler into the message processing chains of all clients and servers. As shown in Figure 43.2, "Message Exchange Path with Handlers" , the handlers are placed in a chain that is traversed between the application level code and the transport code that places the message onto the network. Figure 43.2. Message Exchange Path with Handlers Handler types The JAX-WS specification defines two basic handler types: Logical Handler Logical handlers can process the message payload and the properties stored in the message context. For example, if the application uses pure XML messages, the logical handlers have access to the entire message. If the application uses SOAP messages, the logical handlers have access to the contents of the SOAP body. They do not have access to either the SOAP headers or any attachments unless they were placed into the message context. Logical handlers are placed closest to the application code on the handler chain. This means that they are executed first when a message is passed from the application code to the transport. When a message is received from the network and passed back to the application code, the logical handlers are executed last. Protocol Handler Protocol handlers can process the entire message received from the network and the properties stored in the message context. For example, if the application uses SOAP messages, the protocol handlers would have access to the contents of the SOAP body, the SOAP headers, and any attachments. Protocol handlers are placed closest to the transport on the handler chain. This means that they are executed first when a message is received from the network. When a message is sent to the network from the application code, the protocol handlers are executed last. Note The only protocol handler supported by Apache CXF is specific to SOAP. Implementation of handlers The differences between the two handler types are very subtle and they share a common base interface. Because of their common parentage, logical handlers and protocol handlers share a number of methods that must be implemented, including: handleMessage() The handleMessage() method is the central method in any handler. 
It is the method responsible for processing normal messages. handleFault() handleFault() is the method responsible for processing fault messages. close() close() is called on all executed handlers in a handler chain when a message has reached the end of the chain. It is used to clean up any resources consumed during message processing. The differences between the implementation of a logical handler and the implementation of a protocol handler revolve around the following: The specific interface that is implemented All handlers implement an interface that derives from the Handler interface. Logical handlers implement the LogicalHandler interface. Protocol handlers implement protocol specific extensions of the Handler interface. For example, SOAP handlers implement the SOAPHandler interface. The amount of information available to the handler Protocol handlers have access to the contents of messages and all of the protocol specific information that is packaged with the message content. Logical handlers can only access the contents of the message. Logical handlers have no knowledge of protocol details. Adding handlers to an application To add a handler to an application you must do the following: Determine whether the handler is going to be used on the service providers, the consumers, or both. Determine which type of handler is the most appropriate for the job. Implement the proper interface. To implement a logical handler see Section 43.2, "Implementing a Logical Handler" . To implement a protocol handler see Section 43.4, "Implementing a Protocol Handler" . Configure your endpoint(s) to use the handlers. See Section 43.10, "Configuring Endpoints to Use Handlers" . 43.2. Implementing a Logical Handler Overview Logical handlers implement the javax.xml.ws.handler.LogicalHandler interface. The LogicalHandler interface, shown in Example 43.1, "LogicalHandler Synopsis" , passes a LogicalMessageContext object to the handleMessage() method and the handleFault() method. The context object provides access to the body of the message and to any properties set into the message exchange's context. Example 43.1. LogicalHandler Synopsis Procedure To implement a logical handler you do the following: Implement any Section 43.6, "Initializing a Handler" logic required by the handler. Implement the Section 43.3, "Handling Messages in a Logical Handler" logic. Implement the Section 43.7, "Handling Fault Messages" logic. Implement the logic for Section 43.8, "Closing a Handler" the handler when it is finished. Implement any logic for Section 43.9, "Releasing a Handler" the handler's resources before it is destroyed. 43.3. Handling Messages in a Logical Handler Overview Normal message processing is handled by the handleMessage() method. The handleMessage() method receives a LogicalMessageContext object that provides access to the message body and any properties stored in the message context. The handleMessage() method returns either true or false depending on how message processing is to continue. It can also throw an exception. Getting the message data The LogicalMessageContext object passed into logical message handlers allows access to the message body using the context's getMessage() method. The getMessage() method, shown in Example 43.2, "Method for Getting the Message Payload in a Logical Handler" , returns the message payload as a LogicalMessage object. Once you have the LogicalMessage object, you can use it to manipulate the message body. The LogicalMessage interface, shown in Example 43.3, "Logical Message Holder" , has getters and setters for working with the actual message body. Example 43.2.
Method for Getting the Message Payload in a Logical Handler LogicalMessage getMessage Once you have the LogicalMessage object, you can use it to manipulate the message body. The LogicalMessage interface, shown in Example 43.3, "Logical Message Holder" , has getters and setters for working with the actual message body. Example 43.3. Logical Message Holder LogicalMessage Source getPayload Object getPayload JAXBContext context setPayload Object payload JAXBContext context setPayload Source payload Important The contents of the message payload are determined by the type of binding in use. The SOAP binding only allows access to the SOAP body of the message. The XML binding allows access to the entire message body. Working with the message body as an XML object One pair of getters and setters of the logical message work with the message payload as a javax.xml.transform.dom.DOMSource object. The getPayload() method that has no parameters returns the message payload as a DOMSource object. The returned object is the actual message payload. Any changes made to the returned object change the message body immediately. You can replace the body of the message with a DOMSource object using the setPayload() method that takes the single Source object. Working with the message body as a JAXB object The other pair of getters and setters allow you to work with the message payload as a JAXB object. They use a JAXBContext object to transform the message payload into JAXB objects. To use the JAXB objects you do the following: Get a JAXBContext object that can manage the data types in the message body. For information on creating a JAXBContext object see Chapter 39, Using A JAXBContext Object . Get the message body as shown in Example 43.4, "Getting the Message Body as a JAXB Object" . Example 43.4. Getting the Message Body as a JAXB Object Cast the returned object to the proper type. Manipulate the message body as needed. Put the updated message body back into the context as shown in Example 43.5, "Updating the Message Body Using a JAXB Object" . Example 43.5. Updating the Message Body Using a JAXB Object Working with context properties The logical message context passed into a logical handler is an instance of the application's message context and can access all of the properties stored in it. Handlers have access to properties at both the APPLICATION scope and the HANDLER scope. Like the application's message context, the logical message context is a subclass of Java Map. To access the properties stored in the context, you use the get() method and put() method inherited from the Map interface. By default, any properties you set in the message context from inside a logical handler are assigned a scope of HANDLER . If you want the application code to be able to access the property you need to use the context's setScope() method to explicitly set the property's scope to APPLICATION. For more information on working with properties in the message context see Section 42.1, "Understanding Contexts" . Determining the direction of the message It is often important to know the direction a message is passing through the handler chain. For example, you would want to retrieve a security token from incoming requests and attach a security token to an outgoing response. The direction of the message is stored in the message context's outbound message property. 
You retrieve the outbound message property from the message context using the MessageContext.MESSAGE_OUTBOUND_PROPERTY key as shown in Example 43.6, "Getting the Message's Direction from the SOAP Message Context" . Example 43.6. Getting the Message's Direction from the SOAP Message Context The property is stored as a Boolean object. You can use the object's booleanValue() method to determine the property's value. If the property is set to true, the message is outbound. If the property is set to false, the message is inbound. Determining the return value How the handleMessage() method completes its message processing has a direct impact on how message processing proceeds. It can complete by doing one of the following actions: Return true: Returning true signals to the Apache CXF runtime that message processing should continue normally. The next handler in the chain, if any, has its handleMessage() method invoked. Return false: Returning false signals to the Apache CXF runtime that normal message processing must stop. How the runtime proceeds depends on the message exchange pattern in use for the current message. For request-response message exchanges the following happens: The direction of message processing is reversed. For example, if a request is being processed by a service provider, the message stops progressing toward the service's implementation object. Instead, it is sent back towards the binding for return to the consumer that originated the request. Any message handlers that reside along the handler chain in the new processing direction have their handleMessage() method invoked in the order in which they reside in the chain. When the message reaches the end of the handler chain it is dispatched. For one-way message exchanges the following happens: Message processing stops. All previously invoked message handlers have their close() method invoked. The message is dispatched. Throw a ProtocolException exception: Throwing a ProtocolException exception, or a subclass of this exception, signals the Apache CXF runtime that fault message processing is beginning. How the runtime proceeds depends on the message exchange pattern in use for the current message. For request-response message exchanges the following happens: If the handler has not already created a fault message, the runtime wraps the message in a fault message. The direction of message processing is reversed. For example, if a request is being processed by a service provider, the message stops progressing toward the service's implementation object. Instead, it is sent back towards the binding for return to the consumer that originated the request. Any message handlers that reside along the handler chain in the new processing direction have their handleFault() method invoked in the order in which they reside in the chain. When the fault message reaches the end of the handler chain it is dispatched. For one-way message exchanges the following happens: If the handler has not already created a fault message, the runtime wraps the message in a fault message. Message processing stops. All previously invoked message handlers have their close() method invoked. The fault message is dispatched. Throw any other runtime exception: Throwing a runtime exception other than a ProtocolException exception signals the Apache CXF runtime that message processing is to stop. All previously invoked message handlers have their close() method invoked and the exception is dispatched.
If the message is part of a request-response message exchange, the exception is dispatched so that it is returned to the consumer that originated the request. Example Example 43.7, "Logical Message Handler Message Processing" shows an implementation of the handleMessage() method for a logical message handler that is used by a service consumer. It processes requests before they are sent to the service provider. Example 43.7. Logical Message Handler Message Processing The code in Example 43.7, "Logical Message Handler Message Processing" does the following: Checks if the message is an outbound request. If the message is an outbound request, the handler does additional message processing. Gets the LogicalMessage representation of the message payload from the message context. Gets the actual message payload as a JAXB object. Checks to make sure the request is of the correct type. If it is, the handler continues processing the message. Checks the value of the sum. If it is less than the threshold of 20 then it builds a response and returns it to the client. Builds the response. Returns false to stop message processing and return the response to the client. Throws a runtime exception if the message is not of the correct type. This exception is returned to the client. Returns true if the message is an inbound response or the sum does not meet the threshold. Message processing continues normally. Throws a ProtocolException if a JAXB marshalling error is encountered. The exception is passed back to the client after it is processed by the handleFault() method of the handlers between the current handler and the client. 43.4. Implementing a Protocol Handler Overview Protocol handlers are specific to the protocol in use. Apache CXF provides the SOAP protocol handler as specified by JAX-WS. A SOAP protocol handler implements the javax.xml.ws.handler.soap.SOAPHandler interface. The SOAPHandler interface, shown in Example 43.8, "SOAPHandler Synopsis" , uses a SOAP specific message context that provides access to the message as a SOAPMessage object. It also allows you to access the SOAP headers. Example 43.8. SOAPHandler Synopsis In addition to using a SOAP specific message context, SOAP protocol handlers require that you implement an additional method called getHeaders() . This additional method returns the QNames of the header blocks the handler can process. Procedure To implement a protocol handler you do the following: Implement any Section 43.6, "Initializing a Handler" logic required by the handler. Implement the Section 43.5, "Handling Messages in a SOAP Handler" logic. Implement the Section 43.7, "Handling Fault Messages" logic. Implement the getHeaders() method. Implement the logic for Section 43.8, "Closing a Handler" the handler when it is finished. Implement any logic for Section 43.9, "Releasing a Handler" the handler's resources before it is destroyed. Implementing the getHeaders() method The getHeaders() method, shown in Example 43.9, "The SOAPHandler.getHeaders() Method" , informs the Apache CXF runtime what SOAP headers the handler is responsible for processing. It returns the QNames of the outer element of each SOAP header the handler understands. Example 43.9. The SOAPHandler.getHeaders() Method Set<QName> getHeaders For many cases simply returning null is sufficient. However, if the application uses the mustUnderstand attribute of any of the SOAP headers, then it is important to specify the headers understood by the application's SOAP handlers.
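As a concrete illustration of getHeaders() , the sketch below shows a handler declaring a single understood header; the namespace URI and element name are hypothetical placeholders rather than values taken from the examples in this chapter:
import java.util.Collections;
import java.util.Set;
import javax.xml.namespace.QName;

// Inside a class implementing SOAPHandler<SOAPMessageContext>.
// The QName below is an assumed header that this handler claims to understand,
// so the runtime will not reject messages that flag it with mustUnderstand.
public Set<QName> getHeaders() {
    QName understoodHeader = new QName("http://example.com/headers", "SecurityToken");
    return Collections.singleton(understoodHeader);
}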
The runtime checks the set of SOAP headers that all of the registered handlers understand against the list of headers with the mustUnderstand attribute set to true. If any of the flagged headers are not in the list of understood headers, the runtime rejects the message and throws a SOAP must understand exception. 43.5. Handling Messages in a SOAP Handler Overview Normal message processing is handled by the handleMessage() method. The handleMessage() method receives a SOAPMessageContext object that provides access to the message body as a SOAPMessage object and the SOAP headers associated with the message. In addition, the context provides access to any properties stored in the message context. The handleMessage() method returns either true or false depending on how message processing is to continue. It can also throw an exception. Working with the message body You can get the SOAP message using the SOAP message context's getMessage() method. It returns the message as a live SOAPMessage object. Any changes to the message in the handler are automatically reflected in the message stored in the context. If you wish to replace the existing message with a new one, you can use the context's setMessage() method. The setMessage() method takes a SOAPMessage object. Getting the SOAP headers You can access the SOAP message's headers using the SOAPMessage object's getHeader() method. This will return the SOAP header as a SOAPHeader object that you will need to inspect to find the header elements you wish to process. The SOAP message context provides a getHeaders() method, shown in Example 43.10, "The SOAPMessageContext.getHeaders() Method" , that will return an array containing JAXB objects for the specified SOAP headers. Example 43.10. The SOAPMessageContext.getHeaders() Method Object[] getHeaders QName header JAXBContext context boolean allRoles You specify the headers using the QName of their element. You can further limit the headers that are returned by setting the allRoles parameter to false. That instructs the runtime to only return the SOAP headers that are applicable to the active SOAP roles. If no headers are found, the method returns an empty array. For more information about instantiating a JAXBContext object see Chapter 39, Using A JAXBContext Object . Working with context properties The SOAP message context passed into a SOAP handler is an instance of the application's message context and can access all of the properties stored in it. Handlers have access to properties at both the APPLICATION scope and the HANDLER scope. Like the application's message context, the SOAP message context is a subclass of Java Map. To access the properties stored in the context, you use the get() method and put() method inherited from the Map interface. By default, any properties you set in the context from inside a SOAP handler will be assigned a scope of HANDLER . If you want the application code to be able to access the property, you need to use the context's setScope() method to explicitly set the property's scope to APPLICATION. For more information on working with properties in the message context see Section 42.1, "Understanding Contexts" . Determining the direction of the message It is often important to know the direction a message is passing through the handler chain. For example, you would want to add headers to an outgoing message and strip headers from an incoming message. The direction of the message is stored in the message context's outbound message property.
You retrieve the outbound message property from the message context using the MessageContext.MESSAGE_OUTBOUND_PROPERTY key as shown in Example 43.11, "Getting the Message's Direction from the SOAP Message Context" . Example 43.11. Getting the Message's Direction from the SOAP Message Context The property is stored as a Boolean object. You can use the object's booleanValue() method to determine the property's value. If the property is set to true, the message is outbound. If the property is set to false, the message is inbound. Determining the return value How the handleMessage() method completes its message processing has a direct impact on how message processing proceeds. It can complete by doing one of the following actions: Return true: Returning true signals to the Apache CXF runtime that message processing should continue normally. The next handler in the chain, if any, has its handleMessage() method invoked. Return false: Returning false signals to the Apache CXF runtime that normal message processing is to stop. How the runtime proceeds depends on the message exchange pattern in use for the current message. For request-response message exchanges the following happens: The direction of message processing is reversed. For example, if a request is being processed by a service provider, the message will stop progressing toward the service's implementation object. It will instead be sent back towards the binding for return to the consumer that originated the request. Any message handlers that reside along the handler chain in the new processing direction have their handleMessage() method invoked in the order in which they reside in the chain. When the message reaches the end of the handler chain it is dispatched. For one-way message exchanges the following happens: Message processing stops. All previously invoked message handlers have their close() method invoked. The message is dispatched. Throw a ProtocolException exception: Throwing a ProtocolException exception, or a subclass of this exception, signals the Apache CXF runtime that fault message processing is to start. How the runtime proceeds depends on the message exchange pattern in use for the current message. For request-response message exchanges the following happens: If the handler has not already created a fault message, the runtime wraps the message in a fault message. The direction of message processing is reversed. For example, if a request is being processed by a service provider, the message will stop progressing toward the service's implementation object. It will be sent back towards the binding for return to the consumer that originated the request. Any message handlers that reside along the handler chain in the new processing direction have their handleFault() method invoked in the order in which they reside in the chain. When the fault message reaches the end of the handler chain it is dispatched. For one-way message exchanges the following happens: If the handler has not already created a fault message, the runtime wraps the message in a fault message. Message processing stops. All previously invoked message handlers have their close() method invoked. The fault message is dispatched. Throw any other runtime exception: Throwing a runtime exception other than a ProtocolException exception signals the Apache CXF runtime that message processing is to stop. All previously invoked message handlers have their close() method invoked and the exception is dispatched.
If the message is part of a request-response message exchange, the exception is dispatched so that it is returned to the consumer that originated the request. Example Example 43.12, "Handling a Message in a SOAP Handler" shows a handleMessage() implementation that prints the SOAP message to the screen. Example 43.12. Handling a Message in a SOAP Handler The code in Example 43.12, "Handling a Message in a SOAP Handler" does the following: Retrieves the outbound property from the message context. Tests the message's direction and prints the appropriate message. Retrieves the SOAP message from the context. Prints the message to the console. 43.6. Initializing a Handler Overview When the runtime creates an instance of a handler, it creates all of the resources the handler needs to process messages. While you can place all of the logic for doing this in the handler's constructor, it may not be the most appropriate place. The handler framework performs a number of optional steps when it instantiates a handler. You can add resource injection and other initialization logic that will be executed during the optional steps. You do not have to provide any initialization methods for a handler. Order of initialization The Apache CXF runtime initializes a handler in the following manner: The handler's constructor is called. Any resources that are specified by the @Resource annotation are injected. The method decorated with the @PostConstruct annotation, if it is present, is called. Note Methods decorated with the @PostConstruct annotation must have a void return type and have no parameters. The handler is placed in the Ready state. 43.7. Handling Fault Messages Overview Handlers use the handleFault() method for processing fault messages when a ProtocolException exception is thrown during message processing. The handleFault() method receives either a LogicalMessageContext object or SOAPMessageContext object depending on the type of handler. The received context gives the handler's implementation access to the message payload. The handleFault() method returns either true or false, depending on how fault message processing is to proceed. It can also throw an exception. Getting the message payload The context object received by the handleFault() method is similar to the one received by the handleMessage() method. You use the context's getMessage() method to access the message payload in the same way. The only difference is the payload contained in the context. For more information on working with a LogicalMessageContext see Section 43.3, "Handling Messages in a Logical Handler" . For more information on working with a SOAPMessageContext see Section 43.5, "Handling Messages in a SOAP Handler" . Determining the return value How the handleFault() method completes its message processing has a direct impact on how message processing proceeds. It completes by performing one of the following actions: Return true Returning true signals that fault processing should continue normally. The handleFault() method of the next handler in the chain will be invoked. Return false Returning false signals that fault processing stops. The close() methods of the handlers that were invoked in processing the current message are invoked and the fault message is dispatched. Throw an exception Throwing an exception stops fault message processing. The close() methods of the handlers that were invoked in processing the current message are invoked and the exception is dispatched.
Example Example 43.13, "Handling a Fault in a Message Handler" shows an implementation of handleFault() that prints the message body to the screen. Example 43.13. Handling a Fault in a Message Handler 43.8. Closing a Handler When a handler chain is finished processing a message, the runtime calls each executed handler's close() method. This is the appropriate place to clean up any resources that were used by the handler during message processing or to reset any properties to a default state. If a resource needs to persist beyond a single message exchange, you should not clean it up in the handler's close() method. 43.9. Releasing a Handler Overview The runtime releases a handler when the service or service proxy to which the handler is bound is shut down. The runtime will invoke an optional release method before invoking the handler's destructor. This optional release method can be used to release any resources used by the handler or perform other actions that would not be appropriate in the handler's destructor. You do not have to provide any clean-up methods for a handler. Order of release The following happens when the handler is released: The handler finishes processing any active messages. The runtime invokes the method decorated with the @PreDestroy annotation. This method should clean up any resources used by the handler. The handler's destructor is called. 43.10. Configuring Endpoints to Use Handlers 43.10.1. Programmatic Configuration 43.10.1.1. Adding a Handler Chain to a Consumer Overview Adding a handler chain to a consumer involves explicitly building the chain of handlers. Then you set the handler chain directly on the service proxy's Binding object. Important Any handler chains configured using the Spring configuration override the handler chains configured programmatically. Procedure To add a handler chain to a consumer you do the following: Create a List<Handler> object to hold the handler chain. Create an instance of each handler that will be added to the chain. Add each of the instantiated handler objects to the list in the order they are to be invoked by the runtime. Get the Binding object from the service proxy. Apache CXF provides an implementation of the Binding interface called org.apache.cxf.jaxws.binding.DefaultBindingImpl . Set the handler chain on the proxy using the Binding object's setHandlerChain() method. Example Example 43.14, "Adding a Handler Chain to a Consumer" shows code for adding a handler chain to a consumer. Example 43.14. Adding a Handler Chain to a Consumer The code in Example 43.14, "Adding a Handler Chain to a Consumer" does the following: Instantiates a handler. Creates a List object to hold the chain. Adds the handler to the chain. Gets the Binding object from the proxy as a DefaultBindingImpl object. Assigns the handler chain to the proxy's binding. 43.10.1.2. Adding a Handler Chain to a Service Provider Overview You add a handler chain to a service provider by decorating either the SEI or the implementation class with the @HandlerChain annotation. The annotation points to a meta-data file defining the handler chain used by the service provider. Procedure To add a handler chain to a service provider you do the following: Decorate the provider's implementation class with the @HandlerChain annotation. Create a handler configuration file that defines the handler chain. The @HandlerChain annotation The javax.jws.HandlerChain annotation decorates the service provider's implementation class.
It instructs the runtime to load the handler chain configuration file specified by its file property. The annotation's file property supports two methods for identifying the handler configuration file to load: a URL a relative path name Example 43.15, "Service Implementation that Loads a Handler Chain" shows a service provider implementation that will use the handler chain defined in a file called handlers.xml . handlers.xml must be located in the directory from which the service provider is run. Example 43.15. Service Implementation that Loads a Handler Chain Handler configuration file The handler configuration file defines a handler chain using the XML grammar that accompanies JSR 109 (Web Services for Java EE, Version 1.2). This grammar is defined in the http://java.sun.com/xml/ns/javaee namespace. The root element of the handler configuration file is the handler-chains element. The handler-chains element has one or more handler-chain elements. The handler-chain element defines a handler chain. Table 43.1, "Elements Used to Define a Server-Side Handler Chain" describes the handler-chain element's children. Table 43.1. Elements Used to Define a Server-Side Handler Chain Element Description handler Contains the elements that describe a handler. service-name-pattern Specifies the QName of the WSDL service element defining the service to which the handler chain is bound. You can use * as a wildcard when defining the QName. port-name-pattern Specifies the QName of the WSDL port element defining the endpoint to which the handler chain is bound. You can use * as a wildcard when defining the QName. protocol-binding Specifies the message binding for which the handler chain is used. The binding is specified as a URI or using one of the following aliases: ##SOAP11_HTTP , ##SOAP11_HTTP_MTOM , ##SOAP12_HTTP , ##SOAP12_HTTP_MTOM , or ##XML_HTTP . For more information about message binding URIs see Chapter 23, Apache CXF Binding IDs . The handler-chain element is only required to have a single handler element as a child. It can, however, support as many handler elements as needed to define the complete handler chain. The handlers in the chain are executed in the order they are specified in the handler chain definition. Important The final order of execution will be determined by sorting the specified handlers into logical handlers and protocol handlers. Within the groupings, the order specified in the configuration will be used. The other children, such as protocol-binding , are used to limit the scope of the defined handler chain. For example, if you use the service-name-pattern element, the handler chain will only be attached to service providers whose WSDL port element is a child of the specified WSDL service element. You can only use one of these limiting children in a handler element. The handler element defines an individual handler in a handler chain. Its handler-class child element specifies the fully qualified name of the class implementing the handler. The handler element can also have an optional handler-name element that specifies a unique name for the handler. Example 43.16, "Handler Configuration File" shows a handler configuration file that defines a single handler chain. The chain is made up of two handlers. Example 43.16. Handler Configuration File 43.10.2. Spring Configuration Overview The easiest way to configure an endpoint to use a handler chain is to define the chain in the endpoint's configuration. This is done by adding a jaxws:handlers child to the element configuring the endpoint.
Important A handler chain added through the configuration file takes precedence over a handler chain configured programmatically. Procedure To configure an endpoint to load a handler chain you do the following: If the endpoint does not already have a configuration element, add one. For more information on configuring Apache CXF endpoints see Chapter 17, Configuring JAX-WS Endpoints . Add a jaxws:handlers child element to the endpoint's configuration element. For each handler in the chain, add a bean element specifying the class that implements the handler. If your handler implementation is used in more than one place, you can reference a bean element using the ref element. The handlers element The jaxws:handlers element defines a handler chain in an endpoint's configuration. It can appear as a child to all of the JAX-WS endpoint configuration elements. These are: jaxws:endpoint configures a service provider. jaxws:server also configures a service provider. jaxws:client configures a service consumer. You add handlers to the handler chain in one of two ways: add a bean element defining the implementation class use a ref element to refer to a named bean element from elsewhere in the configuration file The order in which the handlers are defined in the configuration is the order in which they will be executed. The order may be modified if you mix logical handlers and protocol handlers. The runtime will sort them into the proper order while maintaining the basic order specified in the configuration. Example Example 43.17, "Configuring an Endpoint to Use a Handler Chain In Spring" shows the configuration for a service provider that loads a handler chain. Example 43.17. Configuring an Endpoint to Use a Handler Chain In Spring
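To round out Section 43.6, "Initializing a Handler" and Section 43.9, "Releasing a Handler" , the following is a hedged sketch of a handler that uses the lifecycle annotations described there; the injected WebServiceContext and the audit buffer are illustrative assumptions, not part of the examples in this chapter:
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.annotation.Resource;
import javax.xml.ws.WebServiceContext;
import javax.xml.ws.handler.LogicalHandler;
import javax.xml.ws.handler.LogicalMessageContext;
import javax.xml.ws.handler.MessageContext;

// Minimal lifecycle-aware logical handler; the injected resource and the
// buffer are hypothetical and serve only to illustrate the callback order.
public class LifecycleAwareHandler implements LogicalHandler<LogicalMessageContext> {

    @Resource
    private WebServiceContext wsContext; // injected after the constructor runs

    private StringBuilder auditBuffer;

    @PostConstruct
    public void init() {
        // Called after resource injection; must return void and take no parameters.
        auditBuffer = new StringBuilder();
    }

    public boolean handleMessage(LogicalMessageContext context) {
        auditBuffer.append("message processed\n");
        return true; // continue normal message processing
    }

    public boolean handleFault(LogicalMessageContext context) {
        return true; // continue fault processing
    }

    public void close(MessageContext context) {
        // Per-exchange cleanup happens here, at the end of the handler chain.
    }

    @PreDestroy
    public void release() {
        // Called when the endpoint or proxy that owns the handler is shut down.
        auditBuffer = null;
    }
}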
|
[
"public interface LogicalHandler extends Handler { boolean handleMessage(LogicalMessageContext context); boolean handleFault(LogicalMessageContext context); void close(LogicalMessageContext context); }",
"JAXBContext jaxbc = JAXBContext(myObjectFactory.class); Object body = message.getPayload(jaxbc);",
"message.setPayload(body, jaxbc);",
"Boolean outbound; outbound = (Boolean)smc.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);",
"public class SmallNumberHandler implements LogicalHandler<LogicalMessageContext> { public final boolean handleMessage(LogicalMessageContext messageContext) { try { boolean outbound = (Boolean)messageContext.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY); if (outbound) { LogicalMessage msg = messageContext.getMessage(); JAXBContext jaxbContext = JAXBContext.newInstance(ObjectFactory.class); Object payload = msg.getPayload(jaxbContext); if (payload instanceof JAXBElement) { payload = ((JAXBElement)payload).getValue(); } if (payload instanceof AddNumbers) { AddNumbers req = (AddNumbers)payload; int a = req.getArg0(); int b = req.getArg1(); int answer = a + b; if (answer < 20) { AddNumbersResponse resp = new AddNumbersResponse(); resp.setReturn(answer); msg.setPayload(new ObjectFactory().createAddNumbersResponse(resp), jaxbContext); return false; } } else { throw new WebServiceException(\"Bad Request\"); } } return true; } catch (JAXBException ex) { throw new ProtocolException(ex); } } }",
"public interface SOAPHandler extends Handler { boolean handleMessage(SOAPMessageContext context); boolean handleFault(SOAPMessageContext context); void close(SOAPMessageContext context); Set<QName> getHeaders() }",
"Boolean outbound; outbound = (Boolean)smc.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);",
"public boolean handleMessage(SOAPMessageContext smc) { PrintStream out; Boolean outbound = (Boolean)smc.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY); if (outbound.booleanValue()) { out.println(\"\\nOutbound message:\"); } else { out.println(\"\\nInbound message:\"); } SOAPMessage message = smc.getMessage(); message.writeTo(out); out.println(); return true; }",
"public final boolean handleFault(LogicalMessageContext messageContext) { System.out.println(\"handleFault() called with message:\"); LogicalMessage msg=messageContext.getMessage(); System.out.println(msg.getPayload()); return true; }",
"import javax.xml.ws.BindingProvider; import javax.xml.ws.handler.Handler; import java.util.ArrayList; import java.util.List; import org.apache.cxf.jaxws.binding.DefaultBindingImpl; SmallNumberHandler sh = new SmallNumberHandler(); List<Handler> handlerChain = new ArrayList<Handler>(); handlerChain.add(sh); DefaultBindingImpl binding = ((BindingProvider)proxy).getBinding(); binding.getBinding().setHandlerChain(handlerChain);",
"import javax.jws.HandlerChain; import javax.jws.WebService; @WebService(name = \"AddNumbers\", targetNamespace = \"http://apache.org/handlers\", portName = \"AddNumbersPort\", endpointInterface = \"org.apache.handlers.AddNumbers\", serviceName = \"AddNumbersService\") @HandlerChain(file = \"handlers.xml\") public class AddNumbersImpl implements AddNumbers { }",
"<handler-chains xmlns=\"http://java.sun.com/xml/ns/javaee\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://java.sun.com/xml/ns/javaee\"> <handler-chain> <handler> <handler-name>LoggingHandler</handler-name> <handler-class>demo.handlers.common.LoggingHandler</handler-class> </handler> <handler> <handler-name>AddHeaderHandler</handler-name> <handler-class>demo.handlers.common.AddHeaderHandler</handler-class> </handler> </handler-chain> </handler-chains>",
"<beans xmlns:jaxws=\"http://cxf.apache.org/jaxws\" schemaLocation=\" http://cxf.apache.org/jaxws http://cxf.apache.org/schemas/jaxws.xsd ...\"> <jaxws:endpoint id=\"HandlerExample\" implementor=\"org.apache.cxf.example.DemoImpl\" address=\"http://localhost:8080/demo\"> <jaxws:handlers> <bean class=\"demo.handlers.common.LoggingHandler\" /> <bean class=\"demo.handlers.common.AddHeaderHandler\" /> </jaxws:handlers> </jaws:endpoint> </beans>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwshandlers
|
1.5. Devices
|
1.5. Devices Brocade BFA driver The Brocade BFA driver is considered a Technology Preview feature in Red Hat Enterprise Linux 6. The BFA driver supports Brocade FibreChannel and FCoE mass storage adapters. SR-IOV on the be2net driver, BZ# 602451 The SR-IOV functionality of the Emulex be2net driver is considered a Technology Preview in Red Hat Enterprise Linux 6.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/devices_tp
|
Chapter 40. NodeService
|
Chapter 40. NodeService 40.1. ExportNodes GET /v1/export/nodes 40.1.1. Description 40.1.2. Parameters 40.1.2.1. Query Parameters Name Description Required Default Pattern timeout - null query - null 40.1.3. Return Type Stream_result_of_v1ExportNodeResponse 40.1.4. Content Type application/json 40.1.5. Responses Table 40.1. HTTP Response Codes Code Message Datatype 200 A successful response.(streaming responses) Stream_result_of_v1ExportNodeResponse 0 An unexpected error response. GooglerpcStatus 40.1.6. Samples 40.1.7. Common object reference 40.1.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 40.1.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 40.1.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 40.1.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 40.1.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 40.1.7.6. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 40.1.7.7. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 40.1.7.8. NodeScanScanner Enum Values SCANNER SCANNER_V4 40.1.7.9. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 40.1.7.9.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) 
Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 40.1.7.10. StorageCVEInfo Field Name Required Nullable Type Description Format cve String summary String link String publishedOn Date This indicates the timestamp when the cve was first published in the cve feeds. date-time createdAt Date Time when the CVE was first seen in the system. date-time lastModified Date date-time scoreVersion StorageCVEInfoScoreVersion V2, V3, UNKNOWN, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 references List of StorageCVEInfoReference cvssMetrics List of StorageCVSSScore 40.1.7.11. StorageCVEInfoReference Field Name Required Nullable Type Description Format URI String tags List of string 40.1.7.12. StorageCVEInfoScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 UNKNOWN 40.1.7.13. StorageCVSSScore Field Name Required Nullable Type Description Format source StorageSource SOURCE_UNKNOWN, SOURCE_RED_HAT, SOURCE_OSV, SOURCE_NVD, url String cvssv2 StorageCVSSV2 cvssv3 StorageCVSSV3 40.1.7.14. StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 40.1.7.15. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 40.1.7.16. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 40.1.7.17. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 40.1.7.18. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 40.1.7.19. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 40.1.7.20. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 40.1.7.21. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 40.1.7.22. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 40.1.7.23. StorageContainerRuntime Enum Values UNKNOWN_CONTAINER_RUNTIME DOCKER_CONTAINER_RUNTIME CRIO_CONTAINER_RUNTIME 40.1.7.24. 
StorageContainerRuntimeInfo Field Name Required Nullable Type Description Format type StorageContainerRuntime UNKNOWN_CONTAINER_RUNTIME, DOCKER_CONTAINER_RUNTIME, CRIO_CONTAINER_RUNTIME, version String 40.1.7.25. StorageEmbeddedNodeScanComponent Field Name Required Nullable Type Description Format name String version String vulns List of StorageEmbeddedVulnerability vulnerabilities List of StorageNodeVulnerability priority String int64 topCvss Float float riskScore Float float 40.1.7.26. StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, cvssMetrics List of StorageCVSSScore nvdCvss Float float 40.1.7.27. StorageEmbeddedVulnerabilityScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 40.1.7.28. StorageNode Field Name Required Nullable Type Description Format id String A unique ID identifying this node. name String The (host)name of the node. Might or might not be the same as ID. taints List of StorageTaint clusterId String clusterName String labels Map of string annotations Map of string joinedAt Date date-time internalIpAddresses List of string externalIpAddresses List of string containerRuntimeVersion String Use container_runtime.version containerRuntime StorageContainerRuntimeInfo kernelVersion String operatingSystem String From NodeInfo. Operating system reported by the node (ex: linux). osImage String From NodeInfo. OS image reported by the node from /etc/os-release. kubeletVersion String kubeProxyVersion String lastUpdated Date date-time k8sUpdated Date Time we received an update from Kubernetes. date-time scan StorageNodeScan components Integer int32 cves Integer int32 fixableCves Integer int32 priority String int64 riskScore Float float topCvss Float float notes List of StorageNodeNote 40.1.7.29. StorageNodeNote Enum Values MISSING_SCAN_DATA 40.1.7.30. StorageNodeScan Field Name Required Nullable Type Description Format scanTime Date date-time operatingSystem String components List of StorageEmbeddedNodeScanComponent notes List of StorageNodeScanNote scannerVersion NodeScanScanner SCANNER, SCANNER_V4, 40.1.7.31. StorageNodeScanNote Enum Values UNSET UNSUPPORTED KERNEL_UNSUPPORTED CERTIFIED_RHEL_CVES_UNAVAILABLE 40.1.7.32. 
StorageNodeVulnerability Field Name Required Nullable Type Description Format cveBaseInfo StorageCVEInfo cvss Float float severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, fixedBy String snoozed Boolean snoozeStart Date date-time snoozeExpiry Date date-time 40.1.7.33. StorageSource Enum Values SOURCE_UNKNOWN SOURCE_RED_HAT SOURCE_OSV SOURCE_NVD 40.1.7.34. StorageTaint Field Name Required Nullable Type Description Format key String value String taintEffect StorageTaintEffect UNKNOWN_TAINT_EFFECT, NO_SCHEDULE_TAINT_EFFECT, PREFER_NO_SCHEDULE_TAINT_EFFECT, NO_EXECUTE_TAINT_EFFECT, 40.1.7.35. StorageTaintEffect Enum Values UNKNOWN_TAINT_EFFECT NO_SCHEDULE_TAINT_EFFECT PREFER_NO_SCHEDULE_TAINT_EFFECT NO_EXECUTE_TAINT_EFFECT 40.1.7.36. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 40.1.7.37. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 40.1.7.38. StreamResultOfV1ExportNodeResponse Field Name Required Nullable Type Description Format result V1ExportNodeResponse error GooglerpcStatus 40.1.7.39. V1ExportNodeResponse Field Name Required Nullable Type Description Format node StorageNode 40.2. ListNodes GET /v1/nodes/{clusterId} 40.2.1. Description 40.2.2. Parameters 40.2.2.1. Path Parameters Name Description Required Default Pattern clusterId X null 40.2.3. Return Type V1ListNodesResponse 40.2.4. Content Type application/json 40.2.5. Responses Table 40.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1ListNodesResponse 0 An unexpected error response. GooglerpcStatus 40.2.6. Samples 40.2.7. Common object reference 40.2.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 40.2.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 40.2.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 40.2.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 40.2.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 40.2.7.6. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 40.2.7.7. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 40.2.7.8. NodeScanScanner Enum Values SCANNER SCANNER_V4 40.2.7.9. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 40.2.7.9.1. 
JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 40.2.7.10. StorageCVEInfo Field Name Required Nullable Type Description Format cve String summary String link String publishedOn Date This indicates the timestamp when the cve was first published in the cve feeds. date-time createdAt Date Time when the CVE was first seen in the system. date-time lastModified Date date-time scoreVersion StorageCVEInfoScoreVersion V2, V3, UNKNOWN, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 references List of StorageCVEInfoReference cvssMetrics List of StorageCVSSScore 40.2.7.11. StorageCVEInfoReference Field Name Required Nullable Type Description Format URI String tags List of string 40.2.7.12. StorageCVEInfoScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 UNKNOWN 40.2.7.13. StorageCVSSScore Field Name Required Nullable Type Description Format source StorageSource SOURCE_UNKNOWN, SOURCE_RED_HAT, SOURCE_OSV, SOURCE_NVD, url String cvssv2 StorageCVSSV2 cvssv3 StorageCVSSV3 40.2.7.14. 
StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 40.2.7.15. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 40.2.7.16. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 40.2.7.17. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 40.2.7.18. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 40.2.7.19. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 40.2.7.20. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 40.2.7.21. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 40.2.7.22. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 40.2.7.23. StorageContainerRuntime Enum Values UNKNOWN_CONTAINER_RUNTIME DOCKER_CONTAINER_RUNTIME CRIO_CONTAINER_RUNTIME 40.2.7.24. StorageContainerRuntimeInfo Field Name Required Nullable Type Description Format type StorageContainerRuntime UNKNOWN_CONTAINER_RUNTIME, DOCKER_CONTAINER_RUNTIME, CRIO_CONTAINER_RUNTIME, version String 40.2.7.25. StorageEmbeddedNodeScanComponent Field Name Required Nullable Type Description Format name String version String vulns List of StorageEmbeddedVulnerability vulnerabilities List of StorageNodeVulnerability priority String int64 topCvss Float float riskScore Float float 40.2.7.26. StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. 
date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, cvssMetrics List of StorageCVSSScore nvdCvss Float float 40.2.7.27. StorageEmbeddedVulnerabilityScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 40.2.7.28. StorageNode Field Name Required Nullable Type Description Format id String A unique ID identifying this node. name String The (host)name of the node. Might or might not be the same as ID. taints List of StorageTaint clusterId String clusterName String labels Map of string annotations Map of string joinedAt Date date-time internalIpAddresses List of string externalIpAddresses List of string containerRuntimeVersion String Use container_runtime.version containerRuntime StorageContainerRuntimeInfo kernelVersion String operatingSystem String From NodeInfo. Operating system reported by the node (ex: linux). osImage String From NodeInfo. OS image reported by the node from /etc/os-release. kubeletVersion String kubeProxyVersion String lastUpdated Date date-time k8sUpdated Date Time we received an update from Kubernetes. date-time scan StorageNodeScan components Integer int32 cves Integer int32 fixableCves Integer int32 priority String int64 riskScore Float float topCvss Float float notes List of StorageNodeNote 40.2.7.29. StorageNodeNote Enum Values MISSING_SCAN_DATA 40.2.7.30. StorageNodeScan Field Name Required Nullable Type Description Format scanTime Date date-time operatingSystem String components List of StorageEmbeddedNodeScanComponent notes List of StorageNodeScanNote scannerVersion NodeScanScanner SCANNER, SCANNER_V4, 40.2.7.31. StorageNodeScanNote Enum Values UNSET UNSUPPORTED KERNEL_UNSUPPORTED CERTIFIED_RHEL_CVES_UNAVAILABLE 40.2.7.32. StorageNodeVulnerability Field Name Required Nullable Type Description Format cveBaseInfo StorageCVEInfo cvss Float float severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, fixedBy String snoozed Boolean snoozeStart Date date-time snoozeExpiry Date date-time 40.2.7.33. StorageSource Enum Values SOURCE_UNKNOWN SOURCE_RED_HAT SOURCE_OSV SOURCE_NVD 40.2.7.34. StorageTaint Field Name Required Nullable Type Description Format key String value String taintEffect StorageTaintEffect UNKNOWN_TAINT_EFFECT, NO_SCHEDULE_TAINT_EFFECT, PREFER_NO_SCHEDULE_TAINT_EFFECT, NO_EXECUTE_TAINT_EFFECT, 40.2.7.35. StorageTaintEffect Enum Values UNKNOWN_TAINT_EFFECT NO_SCHEDULE_TAINT_EFFECT PREFER_NO_SCHEDULE_TAINT_EFFECT NO_EXECUTE_TAINT_EFFECT 40.2.7.36. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 40.2.7.37. StorageVulnerabilityState VulnerabilityState indicates if vulnerability is being observed or deferred(/suppressed). By default, it vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE 40.2.7.38. V1ListNodesResponse Field Name Required Nullable Type Description Format nodes List of StorageNode 40.3. GetNode GET /v1/nodes/{clusterId}/{nodeId} 40.3.1. Description 40.3.2. Parameters 40.3.2.1. 
Path Parameters Name Description Required Default Pattern clusterId X null nodeId X null 40.3.3. Return Type StorageNode 40.3.4. Content Type application/json 40.3.5. Responses Table 40.3. HTTP Response Codes Code Message Datatype 200 A successful response. StorageNode 0 An unexpected error response. GooglerpcStatus 40.3.6. Samples 40.3.7. Common object reference 40.3.7.1. CVSSV2AccessComplexity Enum Values ACCESS_HIGH ACCESS_MEDIUM ACCESS_LOW 40.3.7.2. CVSSV2Authentication Enum Values AUTH_MULTIPLE AUTH_SINGLE AUTH_NONE 40.3.7.3. CVSSV3Complexity Enum Values COMPLEXITY_LOW COMPLEXITY_HIGH 40.3.7.4. CVSSV3Privileges Enum Values PRIVILEGE_NONE PRIVILEGE_LOW PRIVILEGE_HIGH 40.3.7.5. CVSSV3UserInteraction Enum Values UI_NONE UI_REQUIRED 40.3.7.6. EmbeddedVulnerabilityVulnerabilityType Enum Values UNKNOWN_VULNERABILITY IMAGE_VULNERABILITY K8S_VULNERABILITY ISTIO_VULNERABILITY NODE_VULNERABILITY OPENSHIFT_VULNERABILITY 40.3.7.7. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 40.3.7.8. NodeScanScanner Enum Values SCANNER SCANNER_V4 40.3.7.9. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 40.3.7.9.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. 
As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 40.3.7.10. StorageCVEInfo Field Name Required Nullable Type Description Format cve String summary String link String publishedOn Date This indicates the timestamp when the cve was first published in the cve feeds. date-time createdAt Date Time when the CVE was first seen in the system. date-time lastModified Date date-time scoreVersion StorageCVEInfoScoreVersion V2, V3, UNKNOWN, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 references List of StorageCVEInfoReference cvssMetrics List of StorageCVSSScore 40.3.7.11. StorageCVEInfoReference Field Name Required Nullable Type Description Format URI String tags List of string 40.3.7.12. StorageCVEInfoScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 UNKNOWN 40.3.7.13. StorageCVSSScore Field Name Required Nullable Type Description Format source StorageSource SOURCE_UNKNOWN, SOURCE_RED_HAT, SOURCE_OSV, SOURCE_NVD, url String cvssv2 StorageCVSSV2 cvssv3 StorageCVSSV3 40.3.7.14. StorageCVSSV2 Field Name Required Nullable Type Description Format vector String attackVector StorageCVSSV2AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, accessComplexity CVSSV2AccessComplexity ACCESS_HIGH, ACCESS_MEDIUM, ACCESS_LOW, authentication CVSSV2Authentication AUTH_MULTIPLE, AUTH_SINGLE, AUTH_NONE, confidentiality StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, integrity StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, availability StorageCVSSV2Impact IMPACT_NONE, IMPACT_PARTIAL, IMPACT_COMPLETE, exploitabilityScore Float float impactScore Float float score Float float severity StorageCVSSV2Severity UNKNOWN, LOW, MEDIUM, HIGH, 40.3.7.15. StorageCVSSV2AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK 40.3.7.16. StorageCVSSV2Impact Enum Values IMPACT_NONE IMPACT_PARTIAL IMPACT_COMPLETE 40.3.7.17. StorageCVSSV2Severity Enum Values UNKNOWN LOW MEDIUM HIGH 40.3.7.18. StorageCVSSV3 Field Name Required Nullable Type Description Format vector String exploitabilityScore Float float impactScore Float float attackVector StorageCVSSV3AttackVector ATTACK_LOCAL, ATTACK_ADJACENT, ATTACK_NETWORK, ATTACK_PHYSICAL, attackComplexity CVSSV3Complexity COMPLEXITY_LOW, COMPLEXITY_HIGH, privilegesRequired CVSSV3Privileges PRIVILEGE_NONE, PRIVILEGE_LOW, PRIVILEGE_HIGH, userInteraction CVSSV3UserInteraction UI_NONE, UI_REQUIRED, scope StorageCVSSV3Scope UNCHANGED, CHANGED, confidentiality StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, integrity StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, availability StorageCVSSV3Impact IMPACT_NONE, IMPACT_LOW, IMPACT_HIGH, score Float float severity StorageCVSSV3Severity UNKNOWN, NONE, LOW, MEDIUM, HIGH, CRITICAL, 40.3.7.19. StorageCVSSV3AttackVector Enum Values ATTACK_LOCAL ATTACK_ADJACENT ATTACK_NETWORK ATTACK_PHYSICAL 40.3.7.20. StorageCVSSV3Impact Enum Values IMPACT_NONE IMPACT_LOW IMPACT_HIGH 40.3.7.21. StorageCVSSV3Scope Enum Values UNCHANGED CHANGED 40.3.7.22. StorageCVSSV3Severity Enum Values UNKNOWN NONE LOW MEDIUM HIGH CRITICAL 40.3.7.23. StorageContainerRuntime Enum Values UNKNOWN_CONTAINER_RUNTIME DOCKER_CONTAINER_RUNTIME CRIO_CONTAINER_RUNTIME 40.3.7.24. 
StorageContainerRuntimeInfo Field Name Required Nullable Type Description Format type StorageContainerRuntime UNKNOWN_CONTAINER_RUNTIME, DOCKER_CONTAINER_RUNTIME, CRIO_CONTAINER_RUNTIME, version String 40.3.7.25. StorageEmbeddedNodeScanComponent Field Name Required Nullable Type Description Format name String version String vulns List of StorageEmbeddedVulnerability vulnerabilities List of StorageNodeVulnerability priority String int64 topCvss Float float riskScore Float float 40.3.7.26. StorageEmbeddedVulnerability Field Name Required Nullable Type Description Format cve String cvss Float float summary String link String fixedBy String scoreVersion StorageEmbeddedVulnerabilityScoreVersion V2, V3, cvssV2 StorageCVSSV2 cvssV3 StorageCVSSV3 publishedOn Date date-time lastModified Date date-time vulnerabilityType EmbeddedVulnerabilityVulnerabilityType UNKNOWN_VULNERABILITY, IMAGE_VULNERABILITY, K8S_VULNERABILITY, ISTIO_VULNERABILITY, NODE_VULNERABILITY, OPENSHIFT_VULNERABILITY, vulnerabilityTypes List of EmbeddedVulnerabilityVulnerabilityType suppressed Boolean suppressActivation Date date-time suppressExpiry Date date-time firstSystemOccurrence Date Time when the CVE was first seen, for this specific distro, in the system. date-time firstImageOccurrence Date Time when the CVE was first seen in this image. date-time severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, state StorageVulnerabilityState OBSERVED, DEFERRED, FALSE_POSITIVE, cvssMetrics List of StorageCVSSScore nvdCvss Float float 40.3.7.27. StorageEmbeddedVulnerabilityScoreVersion V2: No unset for automatic backwards compatibility Enum Values V2 V3 40.3.7.28. StorageNode Field Name Required Nullable Type Description Format id String A unique ID identifying this node. name String The (host)name of the node. Might or might not be the same as ID. taints List of StorageTaint clusterId String clusterName String labels Map of string annotations Map of string joinedAt Date date-time internalIpAddresses List of string externalIpAddresses List of string containerRuntimeVersion String Use container_runtime.version containerRuntime StorageContainerRuntimeInfo kernelVersion String operatingSystem String From NodeInfo. Operating system reported by the node (ex: linux). osImage String From NodeInfo. OS image reported by the node from /etc/os-release. kubeletVersion String kubeProxyVersion String lastUpdated Date date-time k8sUpdated Date Time we received an update from Kubernetes. date-time scan StorageNodeScan components Integer int32 cves Integer int32 fixableCves Integer int32 priority String int64 riskScore Float float topCvss Float float notes List of StorageNodeNote 40.3.7.29. StorageNodeNote Enum Values MISSING_SCAN_DATA 40.3.7.30. StorageNodeScan Field Name Required Nullable Type Description Format scanTime Date date-time operatingSystem String components List of StorageEmbeddedNodeScanComponent notes List of StorageNodeScanNote scannerVersion NodeScanScanner SCANNER, SCANNER_V4, 40.3.7.31. StorageNodeScanNote Enum Values UNSET UNSUPPORTED KERNEL_UNSUPPORTED CERTIFIED_RHEL_CVES_UNAVAILABLE 40.3.7.32. 
StorageNodeVulnerability Field Name Required Nullable Type Description Format cveBaseInfo StorageCVEInfo cvss Float float severity StorageVulnerabilitySeverity UNKNOWN_VULNERABILITY_SEVERITY, LOW_VULNERABILITY_SEVERITY, MODERATE_VULNERABILITY_SEVERITY, IMPORTANT_VULNERABILITY_SEVERITY, CRITICAL_VULNERABILITY_SEVERITY, fixedBy String snoozed Boolean snoozeStart Date date-time snoozeExpiry Date date-time 40.3.7.33. StorageSource Enum Values SOURCE_UNKNOWN SOURCE_RED_HAT SOURCE_OSV SOURCE_NVD 40.3.7.34. StorageTaint Field Name Required Nullable Type Description Format key String value String taintEffect StorageTaintEffect UNKNOWN_TAINT_EFFECT, NO_SCHEDULE_TAINT_EFFECT, PREFER_NO_SCHEDULE_TAINT_EFFECT, NO_EXECUTE_TAINT_EFFECT, 40.3.7.35. StorageTaintEffect Enum Values UNKNOWN_TAINT_EFFECT NO_SCHEDULE_TAINT_EFFECT PREFER_NO_SCHEDULE_TAINT_EFFECT NO_EXECUTE_TAINT_EFFECT 40.3.7.36. StorageVulnerabilitySeverity Enum Values UNKNOWN_VULNERABILITY_SEVERITY LOW_VULNERABILITY_SEVERITY MODERATE_VULNERABILITY_SEVERITY IMPORTANT_VULNERABILITY_SEVERITY CRITICAL_VULNERABILITY_SEVERITY 40.3.7.37. StorageVulnerabilityState VulnerabilityState indicates whether a vulnerability is being observed or deferred (suppressed). By default, vulnerabilities are observed. OBSERVED: [Default state] Enum Values OBSERVED DEFERRED FALSE_POSITIVE
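The Samples sections for the node endpoints above are left empty in this reference. As a minimal sketch of how the two GET endpoints documented here can be called, the following curl commands list the nodes of a cluster and then fetch a single node. The Central hostname, the API token environment variable, and the cluster and node IDs are placeholder assumptions, and token authentication through an Authorization bearer header is assumed.

# List the nodes of a cluster (returns V1ListNodesResponse).
curl -k -H "Authorization: Bearer $ROX_API_TOKEN" "https://central.example.com/v1/nodes/<cluster-id>"

# Retrieve a single node by ID (returns StorageNode).
curl -k -H "Authorization: Bearer $ROX_API_TOKEN" "https://central.example.com/v1/nodes/<cluster-id>/<node-id>"

The -k flag skips TLS verification and is only appropriate for test environments that use self-signed certificates.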
|
[
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"ScoreVersion can be deprecated ROX-26066",
"Next Tag: 22",
"ScoreVersion can be deprecated ROX-26066",
"Node represents information about a node in the cluster. next available tag: 28",
"Next tag: 5",
"Stream result of v1ExportNodeResponse",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"ScoreVersion can be deprecated ROX-26066",
"Next Tag: 22",
"ScoreVersion can be deprecated ROX-26066",
"Node represents information about a node in the cluster. next available tag: 28",
"Next tag: 5",
"Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }",
"Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }",
"Example 3: Pack and unpack a message in Python.",
"foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)",
"Example 4: Pack and unpack a message in Go",
"foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }",
"package google.profile; message Person { string first_name = 1; string last_name = 2; }",
"{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }",
"{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }",
"ScoreVersion can be deprecated ROX-26066",
"Next Tag: 22",
"ScoreVersion can be deprecated ROX-26066",
"Node represents information about a node in the cluster. next available tag: 28",
"Next tag: 5"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/nodeservice
|
Chapter 59. Replace Field Action
|
Chapter 59. Replace Field Action Replace field with a different key in the message in transit. The required parameter 'renames' is a comma-separated list of colon-delimited renaming pairs like for example 'foo:bar,abc:xyz' and it represents the field rename mappings. The optional parameter 'enabled' represents the fields to include. If specified, only the named fields will be included in the resulting message. The optional parameter 'disabled' represents the fields to exclude. If specified, the listed fields will be excluded from the resulting message. This takes precedence over the 'enabled' parameter. The default value of 'enabled' parameter is 'all', so all the fields of the payload will be included. The default value of 'disabled' parameter is 'none', so no fields of the payload will be excluded. 59.1. Configuration Options The following table summarizes the configuration options available for the replace-field-action Kamelet: Property Name Description Type Default Example renames * Renames Comma separated list of field with new value to be renamed string "foo:bar,c1:c2" disabled Disabled Comma separated list of fields to be disabled string "none" enabled Enabled Comma separated list of fields to be enabled string "all" Note Fields marked with an asterisk (*) are mandatory. 59.2. Dependencies At runtime, the replace-field-action Kamelet relies upon the presence of the following dependencies: github:openshift-integration.kamelet-catalog:camel-kamelets-utils:kamelet-catalog-1.6-SNAPSHOT camel:core camel:jackson camel:kamelet 59.3. Usage This section describes how you can use the replace-field-action . 59.3.1. Knative Action You can use the replace-field-action Kamelet as an intermediate step in a Knative binding. replace-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: "foo:bar,c1:c2" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 59.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 59.3.1.2. Procedure for using the cluster CLI Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f replace-field-action-binding.yaml 59.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 59.3.2. Kafka Action You can use the replace-field-action Kamelet as an intermediate step in a Kafka binding. replace-field-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: "foo:bar,c1:c2" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 59.3.2.1. 
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 59.3.2.2. Procedure for using the cluster CLI Save the replace-field-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f replace-field-action-binding.yaml 59.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step replace-field-action -p "step-0.renames=foo:bar,c1:c2" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 59.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/replace-field-action.kamelet.yaml
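To make the behavior of the renames mapping concrete, the following is a purely illustrative before-and-after view of a message payload processed with renames set to "foo:bar,c1:c2"; the payload contents are invented for this example.

Payload before the action: { "foo": "hello", "c1": 42, "other": true }
Payload after the action: { "bar": "hello", "c2": 42, "other": true }

If the optional disabled parameter were additionally set to "other", that field would be excluded from the resulting message, taking precedence over any enabled list.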
|
[
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: \"foo:bar,c1:c2\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f replace-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step replace-field-action -p \"step-0.renames=foo:bar,c1:c2\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: replace-field-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: replace-field-action properties: renames: \"foo:bar,c1:c2\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f replace-field-action-binding.yaml",
"kamel bind timer-source?message=Hello --step replace-field-action -p \"step-0.renames=foo:bar,c1:c2\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/replace-field-action
|
Chapter 4. collectd plugins
|
Chapter 4. collectd plugins You can configure multiple collectd plugins depending on your Red Hat OpenStack Platform (RHOSP) environment. The following list of plugins shows the available heat template ExtraConfig parameters that you can set to override the default values. Each section provides the general configuration name for the ExtraConfig option. For example, if there is a collectd plugin called example_plugin , the format of the plugin title is collectd::plugin::example_plugin . Reference the tables of available parameters for specific plugins, such as in the following example: ExtraConfig: collectd::plugin::example_plugin::<parameter>: <value> Reference the metrics tables of specific plugins for Prometheus or Grafana queries. 4.1. collectd::plugin::aggregation You can aggregate several values into one with the aggregation plugin. Use the aggregation functions such as sum , average , min , and max to calculate metrics, for example average and total CPU statistics. Table 4.1. aggregation parameters Parameter Type host String plugin String plugininstance Integer agg_type String typeinstance String sethost String setplugin String setplugininstance Integer settypeinstance String groupby Array of Strings calculatesum Boolean calculatenum Boolean calculateaverage Boolean calculateminimum Boolean calculatemaximum Boolean calculatestddev Boolean Example configuration: Deploy three aggregate configurations to create the following files: aggregator-calcCpuLoadAvg.conf : average CPU load for all CPU cores grouped by host and state aggregator-calcCpuLoadMinMax.conf : minimum and maximum CPU load groups by host and state aggregator-calcMemoryTotalMaxAvg.conf : maximum, average, and total for memory grouped by type The aggregation configurations use the default cpu and memory plugin configurations. parameter_defaults: CollectdExtraPlugins: - aggregation ExtraConfig: collectd::plugin::aggregation::aggregators: calcCpuLoadAvg: plugin: "cpu" agg_type: "cpu" groupby: - "Host" - "TypeInstance" calculateaverage: True calcCpuLoadMinMax: plugin: "cpu" agg_type: "cpu" groupby: - "Host" - "TypeInstance" calculatemaximum: True calculateminimum: True calcMemoryTotalMaxAvg: plugin: "memory" agg_type: "memory" groupby: - "TypeInstance" calculatemaximum: True calculateaverage: True calculatesum: True 4.2. collectd::plugin::amqp1 Use the amqp1 plugin to write values to an amqp1 message bus, for example, AMQ Interconnect. Table 4.2. amqp1 parameters Parameter Type manage_package Boolean transport String host String port Integer user String password String address String instances Hash retry_delay Integer send_queue_limit Integer interval Integer Use the send_queue_limit parameter to limit the length of the outgoing metrics queue. Note If there is no AMQP1 connection, the plugin continues to queue messages to send, which can result in unbounded memory consumption. The default value is 0, which disables the outgoing metrics queue. Increase the value of the send_queue_limit parameter if metrics are missing. Example configuration: parameter_defaults: CollectdExtraPlugins: - amqp1 ExtraConfig: collectd::plugin::amqp1::send_queue_limit: 5000 4.3. collectd::plugin::apache Use the apache plugin to collect Apache data from the mod_status plugin that is provided by the Apache web server. Each instance provided has a per- interval value specified in seconds. If you provide the timeout interval parameter for an instance, the value is in milliseconds. Table 4.3. 
apache parameters Parameter Type instances Hash interval Integer manage-package Boolean package_install_options List Table 4.4. apache instances parameters Parameter Type url HTTP URL user String password String verifypeer Boolean verifyhost Boolean cacert AbsolutePath sslciphers String timeout Integer Example configuration: In this example, the instance name is localhost , which connects to the Apache web server at http://10.0.0.111/mod_status?auto . You must append ?auto to the end of the URL to prevent the status page returning as a type that is incompatible with the plugin. parameter_defaults: CollectdExtraPlugins: - apache ExtraConfig: collectd::plugin::apache::instances: localhost: url: "http://10.0.0.111/mod_status?auto" Additional resources For more information about configuring the apache plugin, see apache . 4.4. collectd::plugin::battery Use the battery plugin to report the remaining capacity, power, or voltage of laptop batteries. Table 4.5. battery parameters Parameter Type values_percentage Boolean report_degraded Boolean query_state_fs Boolean interval Integer Additional resources For more information about configuring the battery plugin, see battery . 4.5. collectd::plugin::bind Use the bind plugin to retrieve encoded statistics about queries and responses from a DNS server, and submit those values to collectd. Table 4.6. bind parameters Parameter Type url HTTP URL memorystats Boolean opcodes Boolean parsetime Boolean qtypes Boolean resolverstats Boolean serverstats Boolean zonemaintstats Boolean views Array interval Integer Table 4.7. bind views parameters Parameter Type name String qtypes Boolean resolverstats Boolean cacherrsets Boolean zones List of strings Example configuration: parameter_defaults: CollectdExtraPlugins: - bind ExtraConfig: collectd::plugins::bind: url: http://localhost:8053/ memorystats: true opcodes: true parsetime: false qtypes: true resolverstats: true serverstats: true zonemaintstats: true views: - name: internal qtypes: true resolverstats: true cacherrsets: true - name: external qtypes: true resolverstats: true cacherrsets: true zones: - "example.com/IN" 4.6. collectd::plugin::ceph Use the ceph plugin to gather data from ceph daemons. Table 4.8. ceph parameters Parameter Type daemons Array longrunavglatency Boolean convertspecialmetrictypes Boolean package_name String Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::ceph::daemons: - ceph-osd.0 - ceph-osd.1 - ceph-osd.2 - ceph-osd.3 - ceph-osd.4 Note If an Object Storage Daemon (OSD) is not on every node, you must list the OSDs. When you deploy collectd, the ceph plugin is added to the Ceph nodes. Do not add the ceph plugin on Ceph nodes to CollectdExtraPlugins because this results in a deployment failure. Additional resources For more information about configuring the ceph plugin, see ceph . 4.7. collectd::plugins::cgroups Use the cgroups plugin to collect information for processes in a cgroup. Table 4.9. cgroups parameters Parameter Type ignore_selected Boolean interval Integer cgroups List Additional resources For more information about configuring the cgroups plugin, see cgroups . 4.8. collectd::plugin::connectivity Use the connectivity plugin to monitor the state of network interfaces. Note If no interfaces are listed, all interfaces are monitored by default. Table 4.10. 
connectivity parameters Parameter Type interfaces Array Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::connectivity::interfaces: - eth0 - eth1 Additional resources For more information about configuring the connectivity plugin, see connectivity . 4.9. collectd::plugin::conntrack Use the conntrack plugin to track the number of entries in the Linux connection-tracking table. There are no parameters for this plugin. 4.10. collectd::plugin::contextswitch Use the ContextSwitch plugin to collect the number of context switches that the system handles. The only parameter available is interval , which is a polling interval defined in seconds. Additional resources For more information about configuring the contextswitch plugin, see contextswitch . 4.11. collectd::plugin::cpu Use the cpu plugin to monitor the time that the CPU spends in various states, for example, idle, executing user code, executing system code, waiting for IO-operations, and other states. The cpu plugin collects jiffies , not percentage values. The value of a jiffy depends on the clock frequency of your hardware platform, and therefore is not an absolute time interval unit. To report a percentage value, set the Boolean parameters reportbycpu and reportbystate to true , and then set the Boolean parameter valuespercentage to true. This plugin is enabled by default. Table 4.11. cpu metrics Name Description Query idle Amount of idle time collectd_cpu_total{...,type_instance='idle'} interrupt CPU blocked by interrupts collectd_cpu_total{...,type_instance='interrupt'} nice Amount of time running low priority processes collectd_cpu_total{...,type_instance='nice'} softirq Amount of cycles spent in servicing interrupt requests collectd_cpu_total{...,type_instance='waitirq'} steal The percentage of time a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor collectd_cpu_total{...,type_instance='steal'} system Amount of time spent on system level (kernel) collectd_cpu_total{...,type_instance='system'} user Jiffies that user processes use collectd_cpu_total{...,type_instance='user'} wait CPU waiting on outstanding I/O request collectd_cpu_total{...,type_instance='wait'} Table 4.12. cpu parameters Parameter Type Defaults reportbystate Boolean true valuespercentage Boolean true reportbycpu Boolean true reportnumcpu Boolean false reportgueststate Boolean false subtractgueststate Boolean true interval Integer 120 Example configuration: parameter_defaults: CollectdExtraPlugins: - cpu ExtraConfig: collectd::plugin::cpu::reportbystate: true Additional resources For more information about configuring the cpu plugin, see cpu . 4.12. collectd::plugin::cpufreq Use the cpufreq plugin to collect the current CPU frequency. There are no parameters for this plugin. 4.13. collectd::plugin::csv Use the csv plugin to write values to a local file in CSV format. Table 4.13. csv parameters Parameter Type datadir String storerates Boolean interval Integer 4.14. collectd::plugin::df Use the df plugin to collect disk space usage information for file systems. This plugin is enabled by default. Table 4.14. df metrics Name Description Query free Amount of free disk space collectd_df_df_complex{...,type_instance="free"} reserved Amount of reserved disk space collectd_df_df_complex{...,type_instance="reserved"} used Amount of used disk space collectd_df_df_complex{...,type_instance="used"} Table 4.15. 
df parameters Parameter Type Defaults devices Array [] fstypes Array ['xfs'] ignoreselected Boolean true mountpoints Array [] reportbydevice Boolean true reportinodes Boolean true reportreserved Boolean true valuesabsolute Boolean true valuespercentage Boolean false Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::df::fstypes: ['tmpfs','xfs'] Additional resources For more information about configuring the df plugin, see df . 4.15. collectd::plugin::disk Use the disk plugin to collect performance statistics of hard disks and, if supported, partitions. Note The disk plugin monitors all disks by default. You can use the ignoreselected parameter to ignore a list of disks. The example configuration ignores the sda , sdb , and sdc disks, and monitors all disks not included in the list. This plugin is enabled by default. Table 4.16. disk parameters Parameter Type Defaults disks Array [] ignoreselected Boolean false udevnameattr String <undefined> Table 4.17. disk metrics Name Description merged The number of queued operations that can be merged together, for example, one physical disk access served two or more logical operations. time The average time an I/O-operation takes to complete. The values might not be accurate. io_time Time spent doing I/Os (ms). You can use this metric as a device load percentage. A value of 1 second matches 100% of load. weighted_io_time Measure of both I/O completion time and the backlog that might be accumulating. pending_operations Shows queue size of pending I/O operations. Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::disk::disks: ['sda', 'sdb', 'sdc'] collectd::plugin::disk::ignoreselected: true Additional resources For more information about configuring the disk plugin, see disk . 4.16. collectd::plugin::hugepages Use the hugepages plugin to collect hugepages information. Table 4.18. hugepages parameters Parameter Type Defaults report_per_node_hp Boolean true report_root_hp Boolean true values_pages Boolean true values_bytes Boolean false values_percentage Boolean false Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::hugepages::values_percentage: true Additional resources For more information about configuring the hugepages plugin, see hugepages . 4.17. collectd::plugin::interface Use the interface plugin to measure interface traffic in octets, packets per second, and error rate per second. Table 4.19. interface parameters Parameter Type Default interfaces Array [] ignoreselected Boolean false reportinactive Boolean true Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::interface::interfaces: - lo collectd::plugin::interface::ignoreselected: true Additional resources For more information about configuring the interfaces plugin, see interfaces . 4.18. collectd::plugin::load Use the load plugin to collect the system load and an overview of the system use. Table 4.20. plugin parameters Parameter Type Default report_relative Boolean true Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::load::report_relative: false Additional resources For more information about configuring the load plugin, see load . 4.19. collectd::plugin::mcelog Use the mcelog plugin to send notifications and statistics that are relevant to Machine Check Exceptions when they occur. Configure mcelog to run in daemon mode and enable logging capabilities. Table 4.21. 
mcelog parameters Parameter Type Mcelogfile String Memory Hash { mcelogclientsocket[string], persistentnotification[boolean] } Example configuration: parameter_defaults: CollectdExtraPlugins: mcelog CollectdEnableMcelog: true Additional resources For more information about configuring the mcelog plugin, see mcelog . 4.20. collectd::plugin::memcached Use the memcached plugin to retrieve information about memcached cache usage, memory, and other related information. Table 4.22. memcached parameters Parameter Type instances Hash interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - memcached ExtraConfig: collectd::plugin::memcached::instances: local: host: "%{hiera('fqdn_canonical')}" port: 11211 Additional resources For more information about configuring the memcached plugin, see memcached . 4.21. collectd::plugin::memory Use the memory plugin to retrieve information about the memory of the system. Table 4.23. memory parameters Parameter Type Defaults valuesabsolute Boolean true valuespercentage Boolean Example configuration: parameter_defaults: ExtraConfig: collectd::plugin::memory::valuesabsolute: true collectd::plugin::memory::valuespercentage: false Additional resources For more information about configuring the memory plugin, see memory . 4.22. collectd::plugin::ntpd Use the ntpd plugin to query a local NTP server that is configured to allow access to statistics, and retrieve information about the configured parameters and the time sync status. Table 4.24. ntpd parameters Parameter Type host Hostname port Port number (Integer) reverselookups Boolean includeunitid Boolean interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - ntpd ExtraConfig: collectd::plugin::ntpd::host: localhost collectd::plugin::ntpd::port: 123 collectd::plugin::ntpd::reverselookups: false collectd::plugin::ntpd::includeunitid: false Additional resources For more information about configuring the ntpd plugin, see ntpd . 4.23. collectd::plugin::ovs_stats Use the ovs_stats plugin to collect statistics of OVS-connected interfaces. The ovs_stats plugin uses the OVSDB management protocol (RFC7047) monitor mechanism to get statistics from OVSDB. Table 4.25. ovs_stats parameters Parameter Type address String bridges List port Integer socket String Example configuration: The following example shows how to enable the ovs_stats plugin. If you deploy your overcloud with OVS, you do not need to enable the ovs_stats plugin. parameter_defaults: CollectdExtraPlugins: - ovs_stats ExtraConfig: collectd::plugin::ovs_stats::socket: '/run/openvswitch/db.sock' Additional resources For more information about configuring the ovs_stats plugin, see ovs_stats . 4.24. collectd::plugin::processes The processes plugin provides information about system processes. If you do not specify custom process matching, the plugin collects only the number of processes by state and the process fork rate. To collect more details about specific processes, you can use the process parameter to specify a process name or the process_match option to specify process names that match a regular expression. The statistics for a process_match output are grouped by process name. Table 4.26. 
plugin parameters Parameter Type Defaults processes Array <undefined> process_matches Array <undefined> collect_context_switch Boolean <undefined> collect_file_descriptor Boolean <undefined> collect_memory_maps Boolean <undefined> Additional resources For more information about configuring the processes plugin, see [processes]. 4.25. collectd::plugin::smart Use the smart plugin to collect SMART (self-monitoring, analysis and reporting technology) information from physical disks on the node. You must also set the parameter CollectdContainerAdditionalCapAdd to CAP_SYS_RAWIO to allow the smart plugin to read SMART telemetry. If you do not set the CollectdContainerAdditionalCapAdd parameter, the following message is written to the collectd error logs: smart plugin: Running collectd as root, but the CAP_SYS_RAWIO capability is missing. The plugin's read function will probably fail. Is your init system dropping capabilities? . Table 4.27. smart parameters Parameter Type disks Array ignoreselected Boolean interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - smart CollectdContainerAdditionalCapAdd: "CAP_SYS_RAWIO" Additional information For more information about configuring the smart plugin, see smart . 4.26. collectd::plugin::swap Use the swap plugin to collect information about the available and used swap space. Table 4.28. swap parameters Parameter Type reportbydevice Boolean reportbytes Boolean valuesabsolute Boolean valuespercentage Boolean reportio Boolean Example configuration: parameter_defaults: CollectdExtraPlugins: - swap ExtraConfig: collectd::plugin::swap::reportbydevice: false collectd::plugin::swap::reportbytes: true collectd::plugin::swap::valuesabsolute: true collectd::plugin::swap::valuespercentage: false collectd::plugin::swap::reportio: true 4.27. collectd::plugin::tcpconns Use the tcpconns plugin to collect information about the number of TCP connections inbound or outbound from the configured port. The local port configuration represents ingress connections. The remote port configuration represents egress connections. Table 4.29. tcpconns parameters Parameter Type localports Port (Array) remoteports Port (Array) listening Boolean allportssummary Boolean Example configuration: parameter_defaults: CollectdExtraPlugins: - tcpconns ExtraConfig: collectd::plugin::tcpconns::listening: false collectd::plugin::tcpconns::localports: - 22 collectd::plugin::tcpconns::remoteports: - 22 4.28. collectd::plugin::thermal Use the thermal plugin to retrieve ACPI thermal zone information. Table 4.30. thermal parameters Parameter Type devices Array ignoreselected Boolean interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - thermal 4.29. collectd::plugin::uptime Use the uptime plugin to collect information about system uptime. Table 4.31. uptime parameters Parameter Type interval Integer 4.30. collectd::plugin::virt Use the virt plugin to collect CPU, disk, network load, and other metrics through the libvirt API for virtual machines on the host. This plugin is enabled by default on compute hosts. Table 4.32. 
virt parameters Parameter Type connection String refresh_interval Hash domain String block_device String interface_device String ignore_selected Boolean plugin_instance_format String hostname_format String interface_format String extra_stats String Example configuration: ExtraConfig: collectd::plugin::virt::hostname_format: "name uuid hostname" collectd::plugin::virt::plugin_instance_format: metadata Additional resources For more information about configuring the virt plugin, see virt . 4.31. collectd::plugin::vmem Use the vmem plugin to collect information about virtual memory from the kernel subsystem. Table 4.33. vmem parameters Parameter Type verbose Boolean interval Integer Example configuration: parameter_defaults: CollectdExtraPlugins: - vmem ExtraConfig: collectd::plugin::vmem::verbose: true 4.32. collectd::plugin::write_http Use the write_http output plugin to submit values to an HTTP server by using POST requests and encoding metrics with JSON, or by using the PUTVAL command. Table 4.34. write_http parameters Parameter Type ensure Enum[ present , absent ] nodes Hash[String, Hash[String, Scalar]] urls Hash[String, Hash[String, Scalar]] manage_package Boolean Example configuration: parameter_defaults: CollectdExtraPlugins: - write_http ExtraConfig: collectd::plugin::write_http::nodes: collectd: url: "http://collectd.tld.org/collectd" metrics: true header: "X-Custom-Header: custom_value" Additional resources For more information about configuring the write_http plugin, see write_http . 4.33. collectd::plugin::write_kafka Use the write_kafka plugin to send values to a Kafka topic. Configure the write_kafka plugin with one or more topic blocks. For each topic block, you must specify a unique name and one Kafka producer. You can use the following per-topic parameters inside the topic block: Table 4.35. write_kafka parameters Parameter Type kafka_hosts Array[String] topics Hash properties Hash meta Hash Example configuration: parameter_defaults: CollectdExtraPlugins: - write_kafka ExtraConfig: collectd::plugin::write_kafka::kafka_hosts: - remote.tld:9092 collectd::plugin::write_kafka::topics: mytopic: format: JSON Additional resources: For more information about how to configure the write_kafka plugin, see write_kafka . 4.34. Unsupported collectd plugins Warning The following plugins have undefined use cases for many Red Hat OpenStack Platform (RHOSP) environments, and are therefore unsupported. Table 4.36. Unsupported collectd plugins Parameter Notes cURL You can use this plugin to read files with libcurl and then parse the files according to the configuration. cURL-JSON You can use this plugin to query JSON data using the cURL library and parse the data according to the configuration. DNS You can use this plugin to interpret the packets and collect statistics of your DNS traffic on UDP port 53. Entropy You can use this plugin to report the available entropy on a system. Ethstat You can use this plugin to read performance statistics from the Ethernet cards. Exec You can use this plugin to execute scripts or applications and print to STDOUT. fhcount You can use this plugin to provide statistics about used, unused, and total number of file handles on Linux. FileCount You can use this plugin to count the number of files in a directory and its subdirectories. FSCache You can use this plugin to collect information about the file-system-based caching infrastructure for network file systems and other slow media. HDDTemp You can use this plugin to collect the temperature of hard disks. 
IntelRDT You can use this plugin to collect information provided by monitoring features of Intel Resource Director Technology. IPMI You can use this plugin to read hardware sensors from servers using the Intelligent Platform Management Interface (IPMI). IRQ You can use this plugin to collect the number of times each interrupt is handled by the operating system. LogFile You can use this plugin to write to a log file. MySQL You can use this plugin to connect to a MySQL database and issue a SHOW STATUS command periodically. The command returns the server status variables, many of which are collected. Netlink You can use this plugin to get statistics for interfaces, qdiscs, classes, or filters. Network You can use this plugin to interact with other collectd instances. NFS You can use this plugin to get information about the usage of the Network File System (NFS), version 2, 3, and 4. numa You can use this plugin to report statistics of the Non-Uniform Memory Access (NUMA) subsystem of Linux. OpenLDAP You can use this plugin to report the status of OpenLDAP. OpenVPN You can use this plugin to read the status file printed by OpenVPN. OVS Events You can use this plugin to monitor the link status of OpenvSwitch-connected interfaces and send notifications when the link state changes. PCIe Errors You can use this plugin to monitor and report PCI Express errors. Ping You can use this plugin to measure network latency using Internet Control Message Protocol (ICMP) echo requests. procevent You can use this plugin to monitor process starts and exits. Python You can use this plugin to bind to Python. Sensors You can use this plugin to read hardware sensors using lm-sensors . Serial You can use this plugin to collect traffic on the serial interface. SNMP You can use this plugin to read values from network devices using the Simple Network Management Protocol (SNMP). SNMP Agent You can use this plugin to handle queries from the principal SNMP agent and return the data collected by read plugins. StatsD You can use this plugin to implement the StatsD network protocol to allow clients to report events. sysevent You can use this plugin to listen for incoming rsyslog messages on a network socket. SysLog You can use this plugin to receive log messages from the collectd daemon and dispatch the messages to syslog. Table You can use this plugin to parse plain text files in table format. Tail You can use this plugin to tail log files, in a similar way to the tail -F command. Each line is given to one or more matches that test if the line is relevant for any statistics using a POSIX extended regular expression. Tail CSV You can use this plugin to tail CSV files. threshold You can use this plugin to generate notifications on given thresholds. turbostat You can use this plugin to read CPU frequency and C-state residency on modern Intel turbo-capable processors. UnixSock You can use this plugin to communicate with the collectd daemon. Users You can use this plugin to count the number of users currently logged into the system. UUID You can use this plugin to determine the unique identifier (UUID) of the system that it is running on. Write Graphite You can use this plugin to store values in Carbon , which is the storage layer of Graphite . Write HTTP You can use this plugin to send the values collected by collectd to a web server using HTTP POST requests. Write Log You can use this plugin to write metrics as INFO log messages.
Write Prometheus You can use this plugin to implement a web server that can be scraped by Prometheus.
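Returning to the processes plugin parameters listed earlier in this chapter, the guide gives no example configuration for that plugin, so the following is only a minimal sketch that follows the same parameter_defaults and ExtraConfig pattern used by the other plugins. The process name collectd and the specific hiera keys shown here are illustrative assumptions based on the generic collectd::plugin::<plugin>::<parameter> convention, not values taken from this guide.
parameter_defaults:
  CollectdExtraPlugins:
    - processes
  ExtraConfig:
    # Illustrative: monitor the collectd process itself and enable context switch counters.
    collectd::plugin::processes::processes:
      - collectd
    collectd::plugin::processes::collect_context_switch: true
Any parameter that is left unset in such a configuration keeps its collectd default, as indicated by the <undefined> defaults in the parameter table.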
|
[
"ExtraConfig: collectd::plugin::example_plugin::<parameter>: <value>",
"parameter_defaults: CollectdExtraPlugins: - aggregation ExtraConfig: collectd::plugin::aggregation::aggregators: calcCpuLoadAvg: plugin: \"cpu\" agg_type: \"cpu\" groupby: - \"Host\" - \"TypeInstance\" calculateaverage: True calcCpuLoadMinMax: plugin: \"cpu\" agg_type: \"cpu\" groupby: - \"Host\" - \"TypeInstance\" calculatemaximum: True calculateminimum: True calcMemoryTotalMaxAvg: plugin: \"memory\" agg_type: \"memory\" groupby: - \"TypeInstance\" calculatemaximum: True calculateaverage: True calculatesum: True",
"parameter_defaults: CollectdExtraPlugins: - amqp1 ExtraConfig: collectd::plugin::amqp1::send_queue_limit: 5000",
"parameter_defaults: CollectdExtraPlugins: - apache ExtraConfig: collectd::plugin::apache::instances: localhost: url: \"http://10.0.0.111/mod_status?auto\"",
"parameter_defaults: CollectdExtraPlugins: - bind ExtraConfig: collectd::plugins::bind: url: http://localhost:8053/ memorystats: true opcodes: true parsetime: false qtypes: true resolverstats: true serverstats: true zonemaintstats: true views: - name: internal qtypes: true resolverstats: true cacherrsets: true - name: external qtypes: true resolverstats: true cacherrsets: true zones: - \"example.com/IN\"",
"parameter_defaults: ExtraConfig: collectd::plugin::ceph::daemons: - ceph-osd.0 - ceph-osd.1 - ceph-osd.2 - ceph-osd.3 - ceph-osd.4",
"parameter_defaults: ExtraConfig: collectd::plugin::connectivity::interfaces: - eth0 - eth1",
"parameter_defaults: CollectdExtraPlugins: - cpu ExtraConfig: collectd::plugin::cpu::reportbystate: true",
"parameter_defaults: ExtraConfig: collectd::plugin::df::fstypes: ['tmpfs','xfs']",
"parameter_defaults: ExtraConfig: collectd::plugin::disk::disks: ['sda', 'sdb', 'sdc'] collectd::plugin::disk::ignoreselected: true",
"This plugin is enabled by default.",
"parameter_defaults: ExtraConfig: collectd::plugin::hugepages::values_percentage: true",
"This plugin is enabled by default.",
"parameter_defaults: ExtraConfig: collectd::plugin::interface::interfaces: - lo collectd::plugin::interface::ignoreselected: true",
"This plugin is enabled by default.",
"parameter_defaults: ExtraConfig: collectd::plugin::load::report_relative: false",
"parameter_defaults: CollectdExtraPlugins: mcelog CollectdEnableMcelog: true",
"parameter_defaults: CollectdExtraPlugins: - memcached ExtraConfig: collectd::plugin::memcached::instances: local: host: \"%{hiera('fqdn_canonical')}\" port: 11211",
"This plugin is enabled by default.",
"parameter_defaults: ExtraConfig: collectd::plugin::memory::valuesabsolute: true collectd::plugin::memory::valuespercentage: false",
"parameter_defaults: CollectdExtraPlugins: - ntpd ExtraConfig: collectd::plugin::ntpd::host: localhost collectd::plugin::ntpd::port: 123 collectd::plugin::ntpd::reverselookups: false collectd::plugin::ntpd::includeunitid: false",
"parameter_defaults: CollectdExtraPlugins: - ovs_stats ExtraConfig: collectd::plugin::ovs_stats::socket: '/run/openvswitch/db.sock'",
"parameter_defaults: CollectdExtraPlugins: - smart CollectdContainerAdditionalCapAdd: \"CAP_SYS_RAWIO\"",
"parameter_defaults: CollectdExtraPlugins: - swap ExtraConfig: collectd::plugin::swap::reportbydevice: false collectd::plugin::swap::reportbytes: true collectd::plugin::swap::valuesabsolute: true collectd::plugin::swap::valuespercentage: false collectd::plugin::swap::reportio: true",
"parameter_defaults: CollectdExtraPlugins: - tcpconns ExtraConfig: collectd::plugin::tcpconns::listening: false collectd::plugin::tcpconns::localports: - 22 collectd::plugin::tcpconns::remoteports: - 22",
"parameter_defaults: CollectdExtraPlugins: - thermal",
"This plugin is enabled by default.",
"ExtraConfig: collectd::plugin::virt::hostname_format: \"name uuid hostname\" collectd::plugin::virt::plugin_instance_format: metadata",
"parameter_defaults: CollectdExtraPlugins: - vmem ExtraConfig: collectd::plugin::vmem::verbose: true",
"parameter_defaults: CollectdExtraPlugins: - write_http ExtraConfig: collectd::plugin::write_http::nodes: collectd: url: \"http://collectd.tld.org/collectd\" metrics: true header: \"X-Custom-Header: custom_value\"",
"parameter_defaults: CollectdExtraPlugins: - write_kafka ExtraConfig: collectd::plugin::write_kafka::kafka_hosts: - remote.tld:9092 collectd::plugin::write_kafka::topics: mytopic: format: JSON"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_overcloud_observability/collectd-plugins_assembly
|
Chapter 9. Storage
|
Chapter 9. Storage 9.1. Storage configuration overview You can configure a default storage class, storage profiles, Containerized Data Importer (CDI), data volumes, and automatic boot source updates. 9.1.1. Storage The following storage configuration tasks are mandatory: Configure a default storage class You must configure a default storage class for your cluster. Otherwise, the cluster cannot receive automated boot source updates. Configure storage profiles You must configure storage profiles if your storage provider is not recognized by CDI. A storage profile provides recommended storage settings based on the associated storage class. The following storage configuration tasks are optional: Reserve additional PVC space for file system overhead By default, 5.5% of a file system PVC is reserved for overhead, reducing the space available for VM disks by that amount. You can configure a different overhead value. Configure local storage by using the hostpath provisioner You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the HPP Operator is automatically installed. Configure user permissions to clone data volumes between namespaces You can configure RBAC roles to enable users to clone data volumes between namespaces. 9.1.2. Containerized Data Importer You can perform the following Containerized Data Importer (CDI) configuration tasks: Override the resource request limits of a namespace You can configure CDI to import, upload, and clone VM disks into namespaces that are subject to CPU and memory resource restrictions. Configure CDI scratch space CDI requires scratch space (temporary storage) to complete some operations, such as importing and uploading VM images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). 9.1.3. Data volumes You can perform the following data volume configuration tasks: Enable preallocation for data volumes CDI can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes. Manage data volume annotations Data volume annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods. 9.1.4. Boot source updates You can perform the following boot source update configuration task: Manage automatic boot source updates Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, CDI imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources. You can enable automatic updates for custom boot sources. 9.2. Configuring storage profiles A storage profile provides recommended storage settings based on the associated storage class. A storage profile is allocated for each storage class. The Containerized Data Importer (CDI) recognizes a storage provider if it has been configured to identify and interact with the storage provider's capabilities. For recognized storage types, the CDI provides values that optimize the creation of PVCs. You can also configure automatic settings for the storage class by customizing the storage profile. If the CDI does not recognize your storage provider, you must configure storage profiles. 
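Before the provisioner-specific guidance that follows, it can help to see roughly what a storage profile looks like when CDI does recognize the provisioner: the spec is left empty and CDI populates the status with recommended claim properties. The following is only an illustrative sketch; the provisioner and storage class names are placeholders and the exact claimPropertySets values depend on your storage backend.
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: <recognized_provisioner_class>
# ...
spec: {}
status:
  claimPropertySets:
  - accessModes:
    - ReadWriteMany
    volumeMode: Block
  provisioner: <recognized_provisioner>
  storageClass: <recognized_provisioner_class>
A profile with an empty status, as shown in the customization procedure that follows, is the signal that you must customize the storage profile yourself.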
Important When using OpenShift Virtualization with Red Hat OpenShift Data Foundation, specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs. To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block . 9.2.1. Customizing the storage profile You can specify default parameters by editing the StorageProfile object for the provisioner's storage class. These default parameters only apply to the persistent volume claim (PVC) if they are not configured in the DataVolume object. You cannot modify storage class parameters. To make changes, delete and re-create the storage class. You must then reapply any customizations that were previously made to the storage profile. An empty status section in a storage profile indicates that a storage provisioner is not recognized by the Containerized Data Importer (CDI). Customizing a storage profile is necessary if you have a storage provisioner that is not recognized by CDI. In this case, the administrator sets appropriate values in the storage profile to ensure successful allocations. Warning If you create a data volume and omit YAML attributes and these attributes are not defined in the storage profile, then the requested storage will not be allocated and the underlying persistent volume claim (PVC) will not be created. Prerequisites Ensure that your planned configuration is supported by the storage class and its provider. Specifying an incompatible configuration in a storage profile causes volume provisioning to fail. Procedure Edit the storage profile. In this example, the provisioner is not recognized by CDI. USD oc edit storageprofile <storage_class> Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> Provide the needed attribute values in the storage profile: Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class> 1 The accessModes that you select. 2 The volumeMode that you select. After you save your changes, the selected values appear in the storage profile status element. 9.2.1.1. Setting a default cloning strategy using a storage profile You can use storage profiles to set a default cloning method for a storage class, creating a cloning strategy . Setting cloning strategies can be helpful, for example, if your storage vendor only supports certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance. Cloning strategies can be specified by setting the cloneStrategy attribute in a storage profile to one of these values: snapshot is used by default when snapshots are configured. The CDI will use the snapshot method if it recognizes the storage provider and the provider supports Container Storage Interface (CSI) snapshots. This cloning strategy uses a temporary volume snapshot to clone the volume. copy uses a source pod and a target pod to copy data from the source volume to the target volume. Host-assisted cloning is the least efficient method of cloning.
csi-clone uses the CSI clone API to efficiently clone an existing volume without using an interim volume snapshot. Unlike snapshot or copy , which are used by default if no storage profile is defined, CSI volume cloning is only used when you specify it in the StorageProfile object for the provisioner's storage class. Note You can also set clone strategies using the CLI without modifying the default claimPropertySets in your YAML spec section. Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> # ... spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class> 1 Specify the access mode. 2 Specify the volume mode. 3 Specify the default cloning strategy. Table 9.1. Storage providers and default behaviors Storage provider Default behavior rook-ceph.rbd.csi.ceph.com Snapshot openshift-storage.rbd.csi.ceph.com Snapshot csi-vxflexos.dellemc.com CSI Clone csi-isilon.dellemc.com CSI Clone csi-powermax.dellemc.com CSI Clone csi-powerstore.dellemc.com CSI Clone hspc.csi.hitachi.com CSI Clone csi.hpe.com CSI Clone spectrumscale.csi.ibm.com CSI Clone rook-ceph.rbd.csi.ceph.com CSI Clone openshift-storage.rbd.csi.ceph.com CSI Clone cephfs.csi.ceph.com CSI Clone openshift-storage.cephfs.csi.ceph.com CSI Clone 9.3. Managing automatic boot source updates You can manage automatic updates for the following boot sources: All Red Hat boot sources All custom boot sources Individual Red Hat or custom boot sources Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources. 9.3.1. Managing Red Hat boot source updates You can opt out of automatic updates for all system-defined boot sources by disabling the enableCommonBootImageImport feature gate. If you disable this feature gate, all DataImportCron objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually. When the enableCommonBootImageImport feature gate is disabled, DataSource objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by creating a new persistent volume claim (PVC) or volume snapshot for the DataSource object, then populating it with an operating system image. 9.3.1.1. Managing automatic updates for all system-defined boot sources Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents CDIDataImportCronOutdated alerts from filling up logs. To disable automatic updates for all system-defined boot sources, turn off the enableCommonBootImageImport feature gate by setting the value to false . Setting this value to true re-enables the feature gate and turns automatic updates back on. Note Custom boot sources are not affected by this setting. Procedure Toggle the feature gate for automatic boot source updates by editing the HyperConverged custom resource (CR). To disable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to false . 
For example: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "replace", "path": \ "/spec/featureGates/enableCommonBootImageImport", \ "value": false}]' To re-enable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to true . For example: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type json -p '[{"op": "replace", "path": \ "/spec/featureGates/enableCommonBootImageImport", \ "value": true}]' 9.3.2. Managing custom boot source updates Custom boot sources that are not provided by OpenShift Virtualization are not controlled by the feature gate. You must manage them individually by editing the HyperConverged custom resource (CR). Important You must configure a storage class. Otherwise, the cluster cannot receive automated updates for custom boot sources. See Defining a storage class for details. 9.3.2.1. Configuring a storage class for custom boot source updates You can override the default storage class by editing the HyperConverged custom resource (CR). Important Boot sources are created from storage using the default storage class. If your cluster does not have a default storage class, you must define one before configuring automatic updates for custom boot sources. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Define a new storage class by entering a value in the storageClassName field: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel8-image-cron spec: template: spec: storageClassName: <new_storage_class> 1 schedule: "0 */12 * * *" 2 managedDataSource: <data_source> 3 # ... 1 Define the storage class. 2 Required: Schedule for the job specified in cron format. 3 Required: The data source to use. Remove the storageclass.kubernetes.io/is-default-class annotation from the current default storage class. Retrieve the name of the current default storage class by running the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE csi-manila-ceph manila.csi.openstack.org Delete Immediate false 11d hostpath-csi-basic (default) kubevirt.io.hostpath-provisioner Delete WaitForFirstConsumer false 11d 1 1 In this example, the current default storage class is named hostpath-csi-basic . Remove the annotation from the current default storage class by running the following command: USD oc patch storageclass <current_default_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 1 1 Replace <current_default_storage_class> with the storageClassName value of the default storage class. Set the new storage class as the default by running the following command: USD oc patch storageclass <new_storage_class> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 1 1 Replace <new_storage_class> with the storageClassName value that you added to the HyperConverged CR. 9.3.2.2. Enabling automatic updates for custom boot sources OpenShift Virtualization automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the HyperConverged custom resource (CR). 
Prerequisites The cluster has a default storage class. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example: Example custom resource apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" 1 labels: instancetype.kubevirt.io/default-preference: centos.7 instancetype.kubevirt.io/default-instancetype: u1.medium spec: schedule: "0 */12 * * *" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi garbageCollect: Outdated managedDataSource: centos7 4 1 This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer . 2 Schedule for the job specified in cron format. 3 Use to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod , which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image , but the CDI importer is not authorized to access it. 4 For the custom image to be detected as an available boot source, the name of the image's managedDataSource must match the name of the template's DataSource , which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file. Save the file. 9.3.2.3. Enabling volume snapshot boot sources Enable volume snapshot boot sources by setting the parameter in the StorageProfile associated with the storage class that stores operating system base images. Although DataImportCron was originally designed to maintain only PVC sources, VolumeSnapshot sources scale better than PVC sources for certain storage types. Note Use volume snapshots on a storage profile that is proven to scale better when cloning from a single snapshot. Prerequisites You must have access to a volume snapshot with the operating system image. The storage must support snapshotting. Procedure Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command: USD oc edit storageprofile <storage_class> Review the dataImportCronSourceFormat specification of the StorageProfile to confirm whether or not the VM is using PVC or volume snapshot by default. Edit the storage profile, if needed, by updating the dataImportCronSourceFormat specification to snapshot . Example storage profile apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: # ... spec: dataImportCronSourceFormat: snapshot Verification Open the storage profile object that corresponds to the storage class used to provision boot sources. USD oc get storageprofile <storage_class> -oyaml Confirm that the dataImportCronSourceFormat specification of the StorageProfile is set to 'snapshot', and that any DataSource objects that the DataImportCron points to now reference volume snapshots. You can now use these boot sources to create virtual machines. 9.3.3. Disabling automatic updates for a single boot source You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the HyperConverged custom resource (CR). 
Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Disable automatic updates for an individual boot source by editing the spec.dataImportCronTemplates field. Custom boot source Remove the boot source from the spec.dataImportCronTemplates field. Automatic updates are disabled for custom boot sources by default. System-defined boot source Add the boot source to spec.dataImportCronTemplates . Note Automatic updates are enabled by default for system-defined boot sources, but these boot sources are not listed in the CR unless you add them. Set the value of the dataimportcrontemplate.kubevirt.io/enable annotation to 'false' . For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron # ... Save the file. 9.3.4. Verifying the status of a boot source You can determine if a boot source is system-defined or custom by viewing the HyperConverged custom resource (CR). Procedure View the contents of the HyperConverged CR by running the following command: USD oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml Example output apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: # ... status: # ... dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: centos-7-image-cron spec: garbageCollect: Outdated managedDataSource: centos7 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 1 # ... - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: "true" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream8 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:8 storage: resources: requests: storage: 30Gi status: {} status: {} 2 # ... 1 Indicates a system-defined boot source. 2 Indicates a custom boot source. Verify the status of the boot source by reviewing the status.dataImportCronTemplates.status field. If the field contains commonTemplate: true , it is a system-defined boot source. If the status.dataImportCronTemplates.status field has the value {} , it is a custom boot source. 9.4. Reserving PVC space for file system overhead When you add a virtual machine disk to a persistent volume claim (PVC) that uses the Filesystem volume mode, you must ensure that there is enough space on the PVC for the VM disk and for file system overhead, such as metadata. By default, OpenShift Virtualization reserves 5.5% of the PVC space for overhead, reducing the space available for virtual machine disks by that amount. You can configure a different overhead value by editing the HCO object. You can change the value globally and you can specify values for specific storage classes. 9.4.1. Overriding the default file system overhead value Change the amount of persistent volume claim (PVC) space that the OpenShift Virtualization reserves for file system overhead by editing the spec.filesystemOverhead attribute of the HCO object. Prerequisites Install the OpenShift CLI ( oc ). 
Procedure Open the HCO object for editing by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the spec.filesystemOverhead fields, populating them with your chosen values: # ... spec: filesystemOverhead: global: "<new_global_value>" 1 storageClass: <storage_class_name>: "<new_value_for_this_storage_class>" 2 1 The default file system overhead percentage used for any storage classes that do not already have a set value. For example, global: "0.07" reserves 7% of the PVC for file system overhead. 2 The file system overhead percentage for the specified storage class. For example, mystorageclass: "0.04" changes the default overhead value for PVCs in the mystorageclass storage class to 4%. Save and exit the editor to update the HCO object. Verification View the CDIConfig status and verify your changes by running one of the following commands: To generally verify changes to CDIConfig : USD oc get cdiconfig -o yaml To view your specific changes to CDIConfig : USD oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}' 9.5. Configuring local storage by using the hostpath provisioner You can configure local storage for virtual machines by using the hostpath provisioner (HPP). When you install the OpenShift Virtualization Operator, the Hostpath Provisioner Operator is automatically installed. HPP is a local storage provisioner designed for OpenShift Virtualization that is created by the Hostpath Provisioner Operator. To use HPP, you create an HPP custom resource (CR) with a basic storage pool. 9.5.1. Creating a hostpath provisioner with a basic storage pool You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a storagePools stanza. The storage pool specifies the name and path used by the CSI driver. Important Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. Prerequisites The directories specified in spec.storagePools.path must have read/write access. Procedure Create an hpp_cr.yaml file with a storagePools stanza as in the following example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: "/var/myvolumes" 2 workload: nodeSelector: kubernetes.io/os: linux 1 The storagePools stanza is an array to which you can add multiple entries. 2 Specify the storage pool directories under this node path. Save the file and exit. Create the HPP by running the following command: USD oc create -f hpp_cr.yaml 9.5.1.1. About creating storage classes When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. In order to use the hostpath provisioner (HPP) you must create an associated storage class for the CSI driver with the storagePools stanza. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. 
To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass value with volumeBindingMode parameter set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a pod is created using the PVC. 9.5.1.2. Creating a storage class for the CSI driver with the storagePools stanza To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver. When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a StorageClass object's parameters after you create it. Note Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While a disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the StorageClass value with volumeBindingMode parameter set to WaitForFirstConsumer , the binding and provisioning of the PV is delayed until a pod is created using the PVC. Procedure Create a storageclass_csi.yaml file to define the storage class: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3 1 The two possible reclaimPolicy values are Delete and Retain . If you do not specify a value, the default value is Delete . 2 The volumeBindingMode parameter determines when dynamic provisioning and volume binding occur. Specify WaitForFirstConsumer to delay the binding and provisioning of a persistent volume (PV) until after a pod that uses the persistent volume claim (PVC) is created. This ensures that the PV meets the pod's scheduling requirements. 3 Specify the name of the storage pool defined in the HPP CR. Save the file and exit. Create the StorageClass object by running the following command: USD oc create -f storageclass_csi.yaml 9.5.2. About storage pools created with PVC templates If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR). A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation. The PVC template is based on the spec stanza of the PersistentVolumeClaim object: Example PersistentVolumeClaim object apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 1 This value is only required for block volume mode PVs. You define a storage pool using a pvcTemplate specification in the HPP CR. The Operator creates a PVC from the pvcTemplate specification for each node containing the HPP CSI driver. The PVC created from the PVC template consumes the single large PV, allowing the HPP to create smaller dynamic volumes. You can combine basic storage pools with storage pools created from PVC templates. 9.5.2.1. 
Creating a storage pool with a PVC template You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR). Important Do not create storage pools in the same partition as the operating system. Otherwise, the operating system partition might become filled to capacity, which will impact performance or cause the node to become unstable or unusable. Prerequisites The directories specified in spec.storagePools.path must have read/write access. Procedure Create an hpp_pvc_template_pool.yaml file for the HPP CR that specifies a persistent volume (PVC) template in the storagePools stanza according to the following example: apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: "/var/myvolumes" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux 1 The storagePools stanza is an array that can contain both basic and PVC template storage pools. 2 Specify the storage pool directories under this node path. 3 Optional: The volumeMode parameter can be either Block or Filesystem as long as it matches the provisioned volume format. If no value is specified, the default is Filesystem . If the volumeMode is Block , the mounting pod creates an XFS file system on the block volume before mounting it. 4 If the storageClassName parameter is omitted, the default storage class is used to create PVCs. If you omit storageClassName , ensure that the HPP storage class is not the default storage class. 5 You can specify statically or dynamically provisioned storage. In either case, ensure the requested storage size is appropriate for the volume you want to virtually divide or the PVC cannot be bound to the large PV. If the storage class you are using uses dynamically provisioned storage, pick an allocation size that matches the size of a typical request. Save the file and exit. Create the HPP with a storage pool by running the following command: USD oc create -f hpp_pvc_template_pool.yaml 9.6. Enabling user permissions to clone data volumes across namespaces The isolating nature of namespaces means that users cannot by default clone resources between namespaces. To enable a user to clone a virtual machine to another namespace, a user with the cluster-admin role must create a new cluster role. Bind this cluster role to a user to enable them to clone virtual machines to the destination namespace. 9.6.1. Creating RBAC resources for cloning data volumes Create a new cluster role that enables permissions for all actions for the datavolumes resource. Prerequisites You must have cluster admin privileges. Procedure Create a ClusterRole manifest: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: ["cdi.kubevirt.io"] resources: ["datavolumes/source"] verbs: ["*"] 1 Unique name for the cluster role. Create the cluster role in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the ClusterRole manifest created in the step. Create a RoleBinding manifest that applies to both the source and destination namespaces and references the cluster role created in the step. 
apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io 1 Unique name for the role binding. 2 The namespace for the source data volume. 3 The namespace to which the data volume is cloned. 4 The name of the cluster role created in the step. Create the role binding in the cluster: USD oc create -f <datavolume-cloner.yaml> 1 1 The file name of the RoleBinding manifest created in the step. 9.7. Configuring CDI to override CPU and memory quotas You can configure the Containerized Data Importer (CDI) to import, upload, and clone virtual machine disks into namespaces that are subject to CPU and memory resource restrictions. 9.7.1. About CPU and memory quotas in a namespace A resource quota , defined by the ResourceQuota object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace. The HyperConverged custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of 0 . This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota. 9.7.2. Overriding CPU and memory defaults Modify the default settings for CPU and memory requests and limits for your use case by adding the spec.resourceRequirements.storageWorkloads stanza to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the spec.resourceRequirements.storageWorkloads stanza to the CR, setting the values based on your use case. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: "500m" memory: "2Gi" requests: cpu: "250m" memory: "1Gi" Save and exit the editor to update the HyperConverged CR. 9.7.3. Additional resources Resource quotas per project 9.8. Preparing CDI scratch space 9.8.1. About scratch space The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). The scratch space PVC is deleted after the operation completes or aborts. You can define the storage class that is used to bind the scratch space PVC in the spec.scratchSpaceStorageClass field of the HyperConverged custom resource. If the defined storage class does not match a storage class in the cluster, then the default storage class defined for the cluster is used. If there is no default storage class defined in the cluster, the storage class used to provision the original DV or PVC is used. Note CDI requires requesting scratch space with a file volume mode, regardless of the PVC backing the origin data volume. If the origin PVC is backed by block volume mode, you must define a storage class capable of provisioning file volume mode PVCs. 
Manual provisioning If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. If there are no PVCs that match these requirements, the CDI import pod remains in a Pending state until an appropriate PVC is made available or until a timeout function kills the pod. 9.8.2. CDI operations that require scratch space Type Reason Registry imports CDI must download the image to a scratch space and extract the layers to find the image file. The image file is then passed to QEMU-IMG for conversion to a raw disk. Upload image QEMU-IMG does not accept input from STDIN. Instead, the image to upload is saved in scratch space before it can be passed to QEMU-IMG for conversion. HTTP imports of archived images QEMU-IMG does not know how to handle the archive formats CDI supports. Instead, the image is unarchived and saved into scratch space before it is passed to QEMU-IMG. HTTP imports of authenticated images QEMU-IMG inadequately handles authentication. Instead, the image is saved to scratch space and authenticated before it is passed to QEMU-IMG. HTTP imports of custom certificates QEMU-IMG inadequately handles custom certificates of HTTPS endpoints. Instead, CDI downloads the image to scratch space before passing the file to QEMU-IMG. 9.8.3. Defining a storage class You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the spec.scratchSpaceStorageClass field to the HyperConverged custom resource (CR). Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the spec.scratchSpaceStorageClass field to the CR, setting the value to the name of a storage class that exists in the cluster: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: "<storage_class>" 1 1 If you do not specify a storage class, CDI uses the storage class of the persistent volume claim that is being populated. Save and exit your default editor to update the HyperConverged CR. 9.8.4. CDI supported operations matrix This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. Content types HTTP HTTPS HTTP basic auth Registry Upload KubeVirt (QCOW2) [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2** [✓] GZ* [✓] XZ* [✓] QCOW2 [✓] GZ* [✓] XZ* [✓] QCOW2* □ GZ □ XZ [✓] QCOW2* [✓] GZ* [✓] XZ* KubeVirt (RAW) [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW [✓] GZ [✓] XZ [✓] RAW* □ GZ □ XZ [✓] RAW* [✓] GZ* [✓] XZ* [✓] Supported operation □ Unsupported operation * Requires scratch space ** Requires scratch space if a custom certificate authority is required 9.8.5. Additional resources Dynamic provisioning 9.9. Using preallocation for data volumes The Containerized Data Importer can preallocate disk space to improve write performance when creating data volumes. You can enable preallocation for specific data volumes. 9.9.1. About preallocation The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes. 
If preallocation is enabled, CDI uses the better preallocation method depending on the underlying file system and device type: fallocate If the file system supports it, CDI uses the operating system's fallocate call to preallocate space by using the posix_fallocate function, which allocates blocks and marks them as uninitialized. full If fallocate mode cannot be used, full mode allocates space for the image by writing data to the underlying storage. Depending on the storage location, all the empty allocated space might be zeroed. 9.9.2. Enabling preallocation for a data volume You can enable preallocation for specific data volumes by including the spec.preallocation field in the data volume manifest. You can enable preallocation mode in either the web console or by using the OpenShift CLI ( oc ). Preallocation mode is supported for all CDI source types. Procedure Specify the spec.preallocation field in the data volume manifest: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 registry: url: <image_url> 2 storage: resources: requests: storage: 1Gi preallocation: true # ... 1 All CDI source types support preallocation. However, preallocation is ignored for cloning operations. 2 Specify the URL of the data source in your registry. 9.10. Managing data volume annotations Data volume (DV) annotations allow you to manage pod behavior. You can add one or more annotations to a data volume, which then propagates to the created importer pods. 9.10.1. Example: Data volume annotations This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The v1.multus-cni.io/default-network: bridge-network annotation causes the pod to use the multus network named bridge-network as its default network. If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the k8s.v1.cni.cncf.io/networks: <network_name> annotation. Multus network annotation example apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: datavolume-example annotations: v1.multus-cni.io/default-network: bridge-network 1 # ... 1 Multus network annotation
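Building on the annotation example above, the following sketch shows a data volume that keeps the cluster default network and additionally attaches the importer pod to a secondary multus network by using the k8s.v1.cni.cncf.io/networks annotation described earlier; the network name bridge-network is reused here only as a placeholder.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: datavolume-example
  annotations:
    k8s.v1.cni.cncf.io/networks: bridge-network 1
# ...
1 Annotation that attaches the importer pod to the named secondary multus network in addition to the cluster default network.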
|
[
"oc edit storageprofile <storage_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: {} status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <unknown_provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 status: provisioner: <unknown_provisioner> storageClass: <unknown_provisioner_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: name: <provisioner_class> spec: claimPropertySets: - accessModes: - ReadWriteOnce 1 volumeMode: Filesystem 2 cloneStrategy: csi-clone 3 status: provisioner: <provisioner> storageClass: <provisioner_class>",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": false}]'",
"oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/enableCommonBootImageImport\", \"value\": true}]'",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: rhel8-image-cron spec: template: spec: storageClassName: <new_storage_class> 1 schedule: \"0 */12 * * *\" 2 managedDataSource: <data_source> 3",
"For the custom image to be detected as an available boot source, the value of the `spec.dataVolumeTemplates.spec.sourceRef.name` parameter in the VM template must match this value.",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE csi-manila-ceph manila.csi.openstack.org Delete Immediate false 11d hostpath-csi-basic (default) kubevirt.io.hostpath-provisioner Delete WaitForFirstConsumer false 11d 1",
"oc patch storageclass <current_default_storage_class> -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}' 1",
"oc patch storageclass <new_storage_class> -p '{\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}' 1",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: name: centos7-image-cron annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" 1 labels: instancetype.kubevirt.io/default-preference: centos.7 instancetype.kubevirt.io/default-instancetype: u1.medium spec: schedule: \"0 */12 * * *\" 2 template: spec: source: registry: 3 url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi garbageCollect: Outdated managedDataSource: centos7 4",
"oc edit storageprofile <storage_class>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: StorageProfile metadata: spec: dataImportCronSourceFormat: snapshot",
"oc get storageprofile <storage_class> -oyaml",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: dataImportCronTemplates: - metadata: annotations: dataimportcrontemplate.kubevirt.io/enable: 'false' name: rhel8-image-cron",
"oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o yaml",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: status: dataImportCronTemplates: - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: centos-7-image-cron spec: garbageCollect: Outdated managedDataSource: centos7 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: url: docker://quay.io/containerdisks/centos:7-2009 storage: resources: requests: storage: 30Gi status: {} status: commonTemplate: true 1 - metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\" name: user-defined-dic spec: garbageCollect: Outdated managedDataSource: user-defined-centos-stream8 schedule: 55 8/12 * * * template: metadata: {} spec: source: registry: pullMethod: node url: docker://quay.io/containerdisks/centos-stream:8 storage: resources: requests: storage: 30Gi status: {} status: {} 2",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"spec: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2",
"oc get cdiconfig -o yaml",
"oc get cdiconfig -o jsonpath='{.items..status.filesystemOverhead}'",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: any_name path: \"/var/myvolumes\" 2 workload: nodeSelector: kubernetes.io/os: linux",
"oc create -f hpp_cr.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-csi provisioner: kubevirt.io.hostpath-provisioner reclaimPolicy: Delete 1 volumeBindingMode: WaitForFirstConsumer 2 parameters: storagePool: my-storage-pool 3",
"oc create -f storageclass_csi.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: iso-pvc spec: volumeMode: Block 1 storageClassName: my-storage-class accessModes: - ReadWriteOnce resources: requests: storage: 5Gi",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent storagePools: 1 - name: my-storage-pool path: \"/var/myvolumes\" 2 pvcTemplate: volumeMode: Block 3 storageClassName: my-storage-class 4 accessModes: - ReadWriteOnce resources: requests: storage: 5Gi 5 workload: nodeSelector: kubernetes.io/os: linux",
"oc create -f hpp_pvc_template_pool.yaml",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io",
"oc create -f <datavolume-cloner.yaml> 1",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: resourceRequirements: storageWorkloads: limits: cpu: \"500m\" memory: \"2Gi\" requests: cpu: \"250m\" memory: \"1Gi\"",
"oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: scratchSpaceStorageClass: \"<storage_class>\" 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 registry: url: <image_url> 2 storage: resources: requests: storage: 1Gi preallocation: true",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: datavolume-example annotations: v1.multus-cni.io/default-network: bridge-network 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/virtualization/storage
|
Chapter 16. Setting up distributed tracing
|
Chapter 16. Setting up distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In Streams for Apache Kafka, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. It complements the metrics that are available to view in JMX metrics , as well as the component loggers. Support for tracing is built in to the following Kafka components: Kafka Connect MirrorMaker MirrorMaker 2 Streams for Apache Kafka Bridge Tracing is not supported for Kafka brokers. You add tracing configuration to the properties file of the component. To enable tracing, you set environment variables and add the library of the tracing system to the Kafka classpath. For Jaeger tracing, you can add tracing artifacts for OpenTelemetry with the Jaeger Exporter. Note Streams for Apache Kafka no longer supports OpenTracing. If you were previously using OpenTracing with Jaeger, we encourage you to transition to using OpenTelemetry instead. To enable tracing in Kafka producers, consumers, and Kafka Streams API applications, you instrument application code. When instrumented, clients generate trace data; for example, when producing messages or writing offsets to the log. Note Setting up tracing for applications and systems beyond Streams for Apache Kafka is outside the scope of this content. 16.1. Outline of procedures To set up tracing for Streams for Apache Kafka, follow these procedures in order: Set up tracing for Kafka Connect, MirrorMaker 2, and MirrorMaker: Enable tracing for Kafka Connect Enable tracing for MirrorMaker 2 Enable tracing for MirrorMaker Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument clients with tracers: Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Note For information on enabling tracing for the Kafka Bridge, see Using the Streams for Apache Kafka Bridge . 16.2. Tracing options Use OpenTelemetry with the Jaeger tracing system. OpenTelemetry provides an API specification that is independent from the tracing or monitoring system. You use the APIs to instrument application code for tracing. Instrumented applications generate traces for individual requests across the distributed system. Traces are composed of spans that define specific units of work over time. Jaeger is a tracing system for microservices-based distributed systems. The Jaeger user interface allows you to query, filter, and analyze trace data. The Jaeger user interface showing a simple query Additional resources Jaeger documentation OpenTelemetry documentation 16.3. Environment variables for tracing Use environment variables when you are enabling tracing for Kafka components or initializing a tracer for Kafka clients. Tracing environment variables are subject to change. For the latest information, see the OpenTelemetry documentation . The following table describes the key environment variables for setting up a tracer. Table 16.1. OpenTelemetry environment variables Property Required Description OTEL_SERVICE_NAME Yes The name of the Jaeger tracing service for OpenTelemetry. OTEL_EXPORTER_JAEGER_ENDPOINT Yes The endpoint that the Jaeger exporter sends trace data to.
OTEL_TRACES_EXPORTER Yes The exporter used for tracing. Set to otlp by default. If using Jaeger tracing, you need to set this environment variable as jaeger . If you are using another tracing implementation, specify the exporter used . 16.4. Enabling tracing for Kafka Connect Enable distributed tracing for Kafka Connect using configuration properties. Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. You can enable tracing that uses OpenTelemetry. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer and consumer tracing in the relevant Kafka Connect configuration file. If you are running Kafka Connect in standalone mode, edit the /opt/kafka/config/connect-standalone.properties file. If you are running Kafka Connect in distributed mode, edit the /opt/kafka/config/connect-distributed.properties file. Add the following tracing interceptor properties to the configuration file: Properties for OpenTelemetry producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor With tracing enabled, you initialize tracing when you run the Kafka Connect script. Save the configuration file. Set the environment variables for tracing. Start Kafka Connect in standalone or distributed mode with the configuration file as a parameter (plus any connector properties): Running Kafka Connect in standalone mode su - kafka /opt/kafka/bin/connect-standalone.sh \ /opt/kafka/config/connect-standalone.properties \ connector1.properties \ [connector2.properties ...] Running Kafka Connect in distributed mode su - kafka /opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties The internal consumers and producers of Kafka Connect are now enabled for tracing. 16.5. Enabling tracing for MirrorMaker 2 Enable distributed tracing for MirrorMaker 2 by defining the Interceptor properties in the MirrorMaker 2 properties file. Messages are traced between Kafka clusters. The trace data records messages entering and leaving the MirrorMaker 2 component. You can enable tracing that uses OpenTelemetry. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer and consumer tracing in the opt/kafka/config/connect-mirror-maker.properties file. Add the following tracing interceptor properties to the configuration file: Properties for OpenTelemetry header.converter=org.apache.kafka.connect.converters.ByteArrayConverter producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor ByteArrayConverter prevents Kafka Connect from converting message headers (containing trace IDs) to base64 encoding. This ensures that messages are the same in both the source and the target clusters. With tracing enabled, you initialize tracing when you run the Kafka MirrorMaker 2 script. Save the configuration file. Set the environment variables for tracing. Start MirrorMaker 2 with the producer and consumer configuration files as parameters: su - kafka /opt/kafka/bin/connect-mirror-maker.sh \ /opt/kafka/config/connect-mirror-maker.properties The internal consumers and producers of MirrorMaker 2 are now enabled for tracing. 16.6. 
Enabling tracing for MirrorMaker Enable distributed tracing for MirrorMaker by passing the Interceptor properties as consumer and producer configuration parameters. Messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker component. You can enable tracing that uses OpenTelemetry. Procedure Add the tracing artifacts to the opt/kafka/libs directory. Configure producer tracing in the /opt/kafka/config/producer.properties file. Add the following tracing interceptor property: Producer property for OpenTelemetry producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor Save the configuration file. Configure consumer tracing in the /opt/kafka/config/consumer.properties file. Add the following tracing interceptor property: Consumer property for OpenTelemetry consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor With tracing enabled, you initialize tracing when you run the Kafka MirrorMaker script. Save the configuration file. Set the environment variables for tracing. Start MirrorMaker with the producer and consumer configuration files as parameters: su - kafka /opt/kafka/bin/kafka-mirror-maker.sh \ --producer.config /opt/kafka/config/producer.properties \ --consumer.config /opt/kafka/config/consumer.properties \ --num.streams=2 The internal consumers and producers of MirrorMaker are now enabled for tracing. 16.7. Initializing tracing for Kafka clients Initialize a tracer for OpenTelemetry, then instrument your client applications for distributed tracing. You can instrument Kafka producer and consumer clients, and Kafka Streams API applications. Configure and initialize a tracer using a set of tracing environment variables . Procedure In each client application add the dependencies for the tracer: Add the Maven dependencies to the pom.xml file for the client application: Dependencies for OpenTelemetry <dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency> Define the configuration of the tracer using the tracing environment variables . 
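For example, a minimal shell sketch of this step, assuming traces are sent to a local Jaeger collector; the service name and endpoint values are placeholders, not values taken from this document:
export OTEL_SERVICE_NAME=my-kafka-client
export OTEL_TRACES_EXPORTER=jaeger
export OTEL_EXPORTER_JAEGER_ENDPOINT=http://localhost:14250
These variables are read when the tracer is created in the next step, as described in the environment variables table.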
Create a tracer, which is initialized with the environment variables: Creating a tracer for OpenTelemetry OpenTelemetry ot = GlobalOpenTelemetry.get(); Register the tracer as a global tracer: GlobalTracer.register(tracer); Instrument your client: Section 16.8, "Instrumenting producers and consumers for tracing" Section 16.9, "Instrumenting Kafka Streams applications for tracing" 16.8. Instrumenting producers and consumers for tracing Instrument application code to enable tracing in Kafka producers and consumers. Use a decorator pattern or interceptors to instrument your Java producer and consumer application code for tracing. You can then record traces when messages are produced or retrieved from a topic. The OpenTelemetry instrumentation project provides classes that support instrumentation of producers and consumers. Decorator instrumentation For decorator instrumentation, create a modified producer or consumer instance for tracing. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the consumer or producer configuration. Prerequisites You have initialized tracing for the client . You enable instrumentation in producer and consumer applications by adding the tracing JARs as dependencies to your project. Procedure Perform these steps in the application code of each producer and consumer application. Instrument your client application code using either a decorator pattern or interceptors. To use a decorator pattern, create a modified producer or consumer instance to send or receive messages. You pass the original KafkaProducer or KafkaConsumer class. Example decorator instrumentation for OpenTelemetry // Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); Producer < String, String > producer = tracing.wrap(op); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton("mytopic")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use interceptors, set the interceptor class in the producer or consumer configuration. You use the KafkaProducer and KafkaConsumer classes in the usual way. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer configuration using interceptors senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...); Example consumer configuration using interceptors consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList("messages")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); 16.9. Instrumenting Kafka Streams applications for tracing Instrument application code to enable tracing in Kafka Streams API applications. 
Use a decorator pattern or interceptors to instrument your Kafka Streams API applications for tracing. You can then record traces when messages are produced or retrieved from a topic. Decorator instrumentation For decorator instrumentation, create a modified Kafka Streams instance for tracing. For OpenTelemetry, you need to create a custom TracingKafkaClientSupplier class to provide tracing instrumentation for Kafka Streams. Interceptor instrumentation For interceptor instrumentation, add the tracing capability to the Kafka Streams producer and consumer configuration. Prerequisites You have initialized tracing for the client . You enable instrumentation in Kafka Streams applications by adding the tracing JARs as dependencies to your project. To instrument Kafka Streams with OpenTelemetry, you'll need to write a custom TracingKafkaClientSupplier . The custom TracingKafkaClientSupplier can extend Kafka's DefaultKafkaClientSupplier , overriding the producer and consumer creation methods to wrap the instances with the telemetry-related code. Example custom TracingKafkaClientSupplier private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } } Procedure Perform these steps for each Kafka Streams API application. To use a decorator pattern, create an instance of the TracingKafkaClientSupplier supplier interface, then provide the supplier interface to KafkaStreams . Example decorator instrumentation KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); To use interceptors, set the interceptor class in the Kafka Streams producer and consumer configuration. The TracingProducerInterceptor and TracingConsumerInterceptor interceptor classes take care of the tracing capability. Example producer and consumer configuration using interceptors props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); 16.10. Specifying tracing systems with OpenTelemetry Instead of the default Jaeger system, you can specify other tracing systems that are supported by OpenTelemetry. If you want to use another tracing system with OpenTelemetry, do the following: Add the library of the tracing system to the Kafka classpath. Add the name of the tracing system as an additional exporter environment variable. Additional environment variable when not using Jaeger OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2 1 The name of the tracing system. In this example, Zipkin is specified. 
2 The endpoint of the specific selected exporter that listens for spans. In this example, a Zipkin endpoint is specified. Additional resources OpenTelemetry exporter values 16.11. Specifying custom span names for OpenTelemetry A tracing span is a logical unit of work in Jaeger, with an operation name, start time, and duration. Spans have built-in names, but you can specify custom span names in your Kafka client instrumentation where used. Specifying custom span names is optional and only applies when using a decorator pattern in producer and consumer client instrumentation or Kafka Streams instrumentation . Custom span names cannot be specified directly with OpenTelemetry. Instead, you retrieve span names by adding code to your client application to extract additional tags and attributes. Example code to extract attributes //Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("prod_start"), "prod1"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("prod_end"), "prod2"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey("con_start"), "con1"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey("con_end"), "con2"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")); System.setProperty("otel.traces.exporter", "jaeger"); System.setProperty("otel.service.name", "myapp1"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();
|
[
"producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"su - kafka /opt/kafka/bin/connect-standalone.sh /opt/kafka/config/connect-standalone.properties connector1.properties [connector2.properties ...]",
"su - kafka /opt/kafka/bin/connect-distributed.sh /opt/kafka/config/connect-distributed.properties",
"header.converter=org.apache.kafka.connect.converters.ByteArrayConverter producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"su - kafka /opt/kafka/bin/connect-mirror-maker.sh /opt/kafka/config/connect-mirror-maker.properties",
"producer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingProducerInterceptor",
"consumer.interceptor.classes=io.opentelemetry.instrumentation.kafkaclients.TracingConsumerInterceptor",
"su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --producer.config /opt/kafka/config/producer.properties --consumer.config /opt/kafka/config/consumer.properties --num.streams=2",
"<dependency> <groupId>io.opentelemetry.semconv</groupId> <artifactId>opentelemetry-semconv</artifactId> <version>1.21.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-otlp</artifactId> <version>1.34.1</version> <exclusions> <exclusion> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-okhttp</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-grpc-managed-channel</artifactId> <version>1.34.1</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk-extension-autoconfigure</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry.instrumentation</groupId> <artifactId>opentelemetry-kafka-clients-2.6</artifactId> <version>1.32.0-alpha</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-sdk</artifactId> <version>1.34.1</version> </dependency> <dependency> <groupId>io.opentelemetry</groupId> <artifactId>opentelemetry-exporter-sender-jdk</artifactId> <version>1.34.1-alpha</version> <scope>runtime</scope> </dependency> <dependency> <groupId>io.grpc</groupId> <artifactId>grpc-netty-shaded</artifactId> <version>1.61.0</version> </dependency>",
"OpenTelemetry ot = GlobalOpenTelemetry.get();",
"GlobalTracer.register(tracer);",
"// Producer instance Producer < String, String > op = new KafkaProducer < > ( configs, new StringSerializer(), new StringSerializer() ); Producer < String, String > producer = tracing.wrap(op); KafkaTracing tracing = KafkaTracing.create(GlobalOpenTelemetry.get()); producer.send(...); //consumer instance Consumer<String, String> oc = new KafkaConsumer<>( configs, new StringDeserializer(), new StringDeserializer() ); Consumer<String, String> consumer = tracing.wrap(oc); consumer.subscribe(Collections.singleton(\"mytopic\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); producer.send(...);",
"consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); consumer.subscribe(Collections.singletonList(\"messages\")); ConsumerRecords<Integer, String> records = consumer.poll(1000); ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"private class TracingKafkaClientSupplier extends DefaultKafkaClientSupplier { @Override public Producer<byte[], byte[]> getProducer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getProducer(config)); } @Override public Consumer<byte[], byte[]> getConsumer(Map<String, Object> config) { KafkaTelemetry telemetry = KafkaTelemetry.create(GlobalOpenTelemetry.get()); return telemetry.wrap(super.getConsumer(config)); } @Override public Consumer<byte[], byte[]> getRestoreConsumer(Map<String, Object> config) { return this.getConsumer(config); } @Override public Consumer<byte[], byte[]> getGlobalConsumer(Map<String, Object> config) { return this.getConsumer(config); } }",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"props.put(StreamsConfig.PRODUCER_PREFIX + ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); props.put(StreamsConfig.CONSUMER_PREFIX + ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName());",
"OTEL_SERVICE_NAME=my-tracing-service OTEL_TRACES_EXPORTER=zipkin 1 OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://localhost:9411/api/v2/spans 2",
"//Defines attribute extraction for a producer private static class ProducerAttribExtractor implements AttributesExtractor < ProducerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"prod_start\"), \"prod1\"); } @Override public void onEnd(AttributesBuilder attributes, ProducerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"prod_end\"), \"prod2\"); } } //Defines attribute extraction for a consumer private static class ConsumerAttribExtractor implements AttributesExtractor < ConsumerRecord < ? , ? > , Void > { @Override public void onStart(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord) { set(attributes, AttributeKey.stringKey(\"con_start\"), \"con1\"); } @Override public void onEnd(AttributesBuilder attributes, ConsumerRecord < ? , ? > producerRecord, @Nullable Void unused, @Nullable Throwable error) { set(attributes, AttributeKey.stringKey(\"con_end\"), \"con2\"); } } //Extracts the attributes public static void main(String[] args) throws Exception { Map < String, Object > configs = new HashMap < > (Collections.singletonMap(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, \"localhost:9092\")); System.setProperty(\"otel.traces.exporter\", \"jaeger\"); System.setProperty(\"otel.service.name\", \"myapp1\"); KafkaTracing tracing = KafkaTracing.newBuilder(GlobalOpenTelemetry.get()) .addProducerAttributesExtractors(new ProducerAttribExtractor()) .addConsumerAttributesExtractors(new ConsumerAttribExtractor()) .build();"
] |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-distributed-tracing-str
|
function::tcpmib_local_addr
|
function::tcpmib_local_addr Name function::tcpmib_local_addr - Get the source address Synopsis tcpmib_local_addr:long(sk:long) Arguments sk pointer to a struct inet_sock Description Returns the saddr from a struct inet_sock in host order.
|
[
"tcpmib_local_addr:long(sk:long)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcpmib-local-addr
|
Chapter 4. Installing with the Assisted Installer API
|
Chapter 4. Installing with the Assisted Installer API After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster by using the Assisted Installer API. To use the API, you must perform the following procedures: Set up the API authentication. Configure the pull secret. Register a new cluster definition. Create an infrastructure environment for the cluster. Once you perform these steps, you can modify the cluster definition, create discovery ISOs, add hosts to the cluster, and install the cluster. This document does not cover every endpoint of the Assisted Installer API , but you can review all of the endpoints in the API viewer or the swagger.yaml file. 4.1. Generating the offline token Download the offline token from the Assisted Installer web console. You will use the offline token to set the API token. Prerequisites Install jq . Log in to the OpenShift Cluster Manager as a user with cluster creation privileges. Procedure In the menu, click Downloads . In the Tokens section under OpenShift Cluster Manager API Token , click View API Token . Click Load Token . Important Disable pop-up blockers. In the Your API token section, copy the offline token. In your terminal, set the offline token to the OFFLINE_TOKEN variable: USD export OFFLINE_TOKEN=<copied_token> Tip To make the offline token permanent, add it to your profile. (Optional) Confirm the OFFLINE_TOKEN variable definition. USD echo USD{OFFLINE_TOKEN} 4.2. Authenticating with the REST API API calls require authentication with the API token. Assuming you use API_TOKEN as a variable name, add -H "Authorization: Bearer USD{API_TOKEN}" to API calls to authenticate with the REST API. Note The API token expires after 15 minutes. Prerequisites You have generated the OFFLINE_TOKEN variable. Procedure On the command line terminal, set the API_TOKEN variable using the OFFLINE_TOKEN to validate the user. USD export API_TOKEN=USD( \ curl \ --silent \ --header "Accept: application/json" \ --header "Content-Type: application/x-www-form-urlencoded" \ --data-urlencode "grant_type=refresh_token" \ --data-urlencode "client_id=cloud-services" \ --data-urlencode "refresh_token=USD{OFFLINE_TOKEN}" \ "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \ | jq --raw-output ".access_token" \ ) Confirm the API_TOKEN variable definition: USD echo USD{API_TOKEN} Create a script in your path for one of the token generating methods. For example: USD vim ~/.local/bin/refresh-token export API_TOKEN=USD( \ curl \ --silent \ --header "Accept: application/json" \ --header "Content-Type: application/x-www-form-urlencoded" \ --data-urlencode "grant_type=refresh_token" \ --data-urlencode "client_id=cloud-services" \ --data-urlencode "refresh_token=USD{OFFLINE_TOKEN}" \ "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \ | jq --raw-output ".access_token" \ ) Then, save the file. 
Change the file mode to make it executable: USD chmod +x ~/.local/bin/refresh-token Refresh the API token: USD source refresh-token Verify that you can access the API by running the following command: USD curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer USD{API_TOKEN}" | jq Example output { "release_tag": "v2.11.3", "versions": { "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-211", "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-266", "assisted-installer-service": "quay.io/app-sre/assisted-service:78d113a", "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-195" } } 4.3. Configuring the pull secret Many of the Assisted Installer API calls require the pull secret. Download the pull secret to a file so that you can reference it in API calls. The pull secret is a JSON object that will be included as a value within the request's JSON object. The pull secret JSON must be formatted to escape the quotes. For example: Before {"auths":{"cloud.openshift.com": ... After {\"auths\":{\"cloud.openshift.com\": ... Procedure In the menu, click OpenShift . In the submenu, click Downloads . In the Tokens section under Pull secret , click Download . To use the pull secret from a shell variable, execute the following command: USD export PULL_SECRET=USD(cat ~/Downloads/pull-secret.txt | jq -R .) To slurp the pull secret file using jq , reference it in the pull_secret variable, piping the value to tojson to ensure that it is properly formatted as escaped JSON. For example: USD curl https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' 1 { "name": "testcluster", "control_plane_count": "3", "openshift_version": "4.11", "pull_secret": USDpull_secret[0] | tojson, 2 "base_dns_domain": "example.com" } ')" 1 Slurp the pull secret file. 2 Format the pull secret to escaped JSON format. Confirm the PULL_SECRET variable definition: USD echo USD{PULL_SECRET} 4.4. Generating the SSH public key During the installation of OpenShift Container Platform, you can optionally provide an SSH public key to the installation program. This is useful for initiating an SSH connection to a remote node when troubleshooting an installation error. If you do not have an existing SSH key pair on your local machine to use for the authentication, create one now. Prerequisites Generate the OFFLINE_TOKEN and API_TOKEN variables. Procedure From the root user in your terminal, get the SSH public key: USD cat /root/.ssh/id_rsa.pub Set the SSH public key to the CLUSTER_SSHKEY variable: USD CLUSTER_SSHKEY=<downloaded_ssh_key> Confirm the CLUSTER_SSHKEY variable definition: USD echo USD{CLUSTER_SSHKEY} 4.5. Registering a new cluster To register a new cluster definition with the API, use the /v2/clusters endpoint. The following parameters are mandatory: name openshift-version pull_secret cpu_architecture See the cluster-create-params model in the API viewer for details on the fields you can set when registering a new cluster. When setting the olm_operators field, see Additional Resources for details on installing Operators. Prerequisites You have generated a valid API_TOKEN . Tokens expire every 15 minutes. You have downloaded the pull secret. 
Optional: You have assigned the pull secret to the USDPULL_SECRET variable. Procedure Refresh the API token: USD source refresh-token Register a new cluster by using one of the following methods: Register the cluster by referencing the pull secret file in the request: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' \ { \ "name": "testcluster", \ "openshift_version": "4.16", \ 1 "control_plane_count": "<number>", \ 2 "cpu_architecture" : "<architecture_name>", \ 3 "base_dns_domain": "example.com", \ "pull_secret": USDpull_secret[0] | tojson \ } \ ')" | jq '.id' Register the cluster by doing the following: Writing the configuration to a JSON file: USD cat << EOF > cluster.json { "name": "testcluster", "openshift_version": "4.16", 1 "control_plane_count": "<number>", 2 "base_dns_domain": "example.com", "network_type": "examplenetwork", "cluster_network_cidr":"11.111.1.0/14" "cluster_network_host_prefix": 11, "service_network_cidr": "111.11.1.0/16", "api_vips":[{"ip": ""}], "ingress_vips": [{"ip": ""}], "vip_dhcp_allocation": false, "additional_ntp_source": "clock.redhat.com,clock2.redhat.com", "ssh_public_key": "USDCLUSTER_SSHKEY", "pull_secret": USDPULL_SECRET } EOF Referencing it in the request: USD curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \ -d @./cluster.json \ -H "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.id' 1 1 Pay attention to the following: To install the latest OpenShift version, use the x.y format, such as 4.16 for version 4.16.10. To install a specific OpenShift version, use the x.y.z format, such as 4.16.3 for version 4.16.3. To install a mixed-architecture cluster, add the -multi extension, such as 4.16-multi for the latest version or 4.16.3-multi for a specific version. If you are booting from an iSCSI drive, enter OpenShift Container Platform version 4.15 or later. 2 2 Set the number of control plane nodes to 1 for a single-node OpenShift cluster, or to 3 , 4 , or 5 for a multi-node OpenShift Container Platform cluster. The system supports 4 , or 5 control plane nodes from OpenShift Container Platform 4.18 and later, on a bare metal or user-managed networking platform with an x86_64 CPU architecture. For details, see About specifying the number of control plane nodes . 3 Valid values are x86_64 , arm64 , ppc64le , s390x , or multi . Specify multi for a mixed-architecture cluster. Assign the returned cluster_id to the CLUSTER_ID variable and export it: USD export CLUSTER_ID=<cluster_id> Note If you close your terminal session, you need to export the CLUSTER_ID variable again in a new terminal session. Check the status of the new cluster: USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ -H "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq Once you register a new cluster definition, create the infrastructure environment for the cluster. Note You cannot see the cluster configuration settings in the Assisted Installer user interface until you create the infrastructure environment. Additional resources Modifying a cluster Installing a mixed-architecture cluster Optional: Installing on Nutanix Optional: Installing on vSphere Optional: Installing on Oracle Cloud Infrastructure 4.5.1. 
Installing Operators You can install the following Operators when you register a new cluster: OpenShift Virtualization Operator Note Currently, OpenShift Virtualization is not supported on IBM Z(R) and IBM Power(R). The OpenShift Virtualization Operator requires backend storage, and automatically activates the Local Storage Operator (LSO) by default in the background. Selecting an alternative storage manager, such as LVM Storage, overrides the default Local Storage Operator. Migration Toolkit for Virtualization Operator Note Specifying the Migration Toolkit for Virtualization (MTV) Operator automatically activates the OpenShift Virtualization Operator. For a Single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator. Multicluster engine Operator Note Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations: Multi-node cluster: No storage is configured. You must configure storage after the installation. Single-node OpenShift: LVM Storage is installed. OpenShift Data Foundation Operator LVM Storage Operator OpenShift AI Operator Important The integration of the OpenShift AI Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. If you require advanced options, install the Operators after you have installed the cluster. This step is optional. Prerequisites You have reviewed Customizing your installation using Operators for an overview of each operator, together with its prerequisites and dependencies. Procedure Run the following command: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.15", "cpu_architecture" : "x86_64", "base_dns_domain": "example.com", "olm_operators": [ { "name": "mce" } 1 , { "name": "odf" } ], "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' 1 Specify cnv for OpenShift Virtualization, mtv for Migration Toolkit for Virtualization, mce for multicluster engine, odf for OpenShift Data Foundation, lvm for LVM Storage, or openshift-ai for OpenShift AI. Selecting an Operator automatically activates any dependent Operators. 4.5.2. Scheduling workloads to run on control plane nodes Use the schedulable_masters attribute to enable workloads to run on control plane nodes. Prerequisites You have generated a valid API_TOKEN . Tokens expire every 15 minutes. You have created a USDPULL_SECRET variable. You are installing OpenShift Container Platform 4.14 or later. Procedure Follow the instructions for installing Assisted Installer using the Assisted Installer API. 
When you reach the step for registering a new cluster, set the schedulable_masters attribute as follows: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "schedulable_masters": true 1 } ' | jq 1 Enables the scheduling of workloads on the control plane nodes. 4.6. Modifying a cluster To modify a cluster definition with the API, use the /v2/clusters/{cluster_id} endpoint. Modifying a cluster resource is a common operation for adding settings such as changing the network type or enabling user-managed networking. See the v2-cluster-update-params model in the API viewer for details on the fields you can set when modifying a cluster definition. You can add or remove Operators from a cluster resource that has already been registered. Note To create partitions on nodes, see Configuring storage on nodes in the OpenShift Container Platform documentation. Prerequisites You have created a new cluster resource. Procedure Refresh the API token: USD source refresh-token Modify the cluster. For example, change the SSH key: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZrD4LMkAEeoU2vShhF8VM+cCZtVRgB7tqtsMxms2q3TOJZAgfuqReKYWm+OLOZTD+DO3Hn1pah/mU3u7uJfTUg4wEX0Le8zBu9xJVym0BVmSFkzHfIJVTn6SfZ81NqcalisGWkpmkKXVCdnVAX6RsbHfpGKk9YPQarmRCn5KzkelJK4hrSWpBPjdzkFXaIpf64JBZtew9XVYA3QeXkIcFuq7NBuUH9BonroPEmIXNOa41PUP1IWq3mERNgzHZiuU8Ks/pFuU5HCMvv4qbTOIhiig7vidImHPpqYT/TCkuVi5w0ZZgkkBeLnxWxH0ldrfzgFBYAxnpTU8Ih/4VhG538Ix1hxPaM6cXds2ic71mBbtbSrk+zjtNPaeYk1O7UpcCw4jjHspU/rVV/DY51D5gSiiuaFPBMucnYPgUxy4FMBFfGrmGLIzTKiLzcz0DiSz1jBeTQOX++1nz+KDLBD8CPdi5k4dq7lLkapRk85qdEvgaG5RlHMSPSS3wDrQ51fD8= user@hostname" } ' | jq 4.6.1. Modifying Operators by using the API You can add or remove Operators from a cluster resource that has already been registered as part of a installation. This is only possible before you start the OpenShift Container Platform installation. You set the required Operator definition by using the PATCH method for the /v2/clusters/{cluster_id} endpoint. Prerequisites You have refreshed the API token. You have exported the CLUSTER_ID as an environment variable. Procedure Run the following command to modify the Operators: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "olm_operators": [{"name": "mce"}, {"name": "cnv"}], 1 } ' | jq '.id' 1 Specify cnv for OpenShift Virtualization, mtv for Migration Toolkit for Virtualization, mce for multicluster engine, odf for Red Hat OpenShift Data Foundation, lvm for Logical Volume Manager Storage, or openshift-ai for OpenShift AI. To remove a previously installed Operator, exclude it from the list of values. To remove all previously installed Operators, specify an empty array: "olm_operators": [] . 
Example output { <various cluster properties>, "monitored_operators": [ { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "console", "operator_type": "builtin", "status_updated_at": "0001-01-01T00:00:00.000Z", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "cvo", "operator_type": "builtin", "status_updated_at": "0001-01-01T00:00:00.000Z", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "mce", "namespace": "multicluster-engine", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "multicluster-engine", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "cnv", "namespace": "openshift-cnv", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "hco-operatorhub", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "lvm", "namespace": "openshift-local-storage", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "local-storage-operator", "timeout_seconds": 4200 } ], <more cluster properties> Note The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types: "operator_type": "builtin" : Operators of this type are an integral part of OpenShift Container Platform. "operator_type": "olm" : Operators of this type are added manually by a user or automatically, as a dependency. In this example, the LVM Storage Operator is added automatically as a dependency of OpenShift Virtualization. Additional resources See Customizing your installation using Operators for an overview of each operator, together with its prerequisites and dependencies. 4.7. Registering a new infrastructure environment Once you register a new cluster definition with the Assisted Installer API, create an infrastructure environment using the v2/infra-envs endpoint. Registering a new infrastructure environment requires the following settings: name pull_secret cpu_architecture See the infra-env-create-params model in the API viewer for details on the fields you can set when registering a new infrastructure environment. You can modify an infrastructure environment after you create it. As a best practice, consider including the cluster_id when creating a new infrastructure environment. The cluster_id will associate the infrastructure environment with a cluster definition. When creating the new infrastructure environment, the Assisted Installer will also generate a discovery ISO. Prerequisites You have generated a valid API_TOKEN . Tokens expire every 15 minutes. You have downloaded the pull secret. Optional: You have registered a new cluster definition and exported the cluster_id . Procedure Refresh the API token: USD source refresh-token Register a new infrastructure environment. Provide a name, preferably something including the cluster name. This example provides the cluster ID to associate the infrastructure environment with the cluster resource. The following example specifies the image_type . You can specify either full-iso or minimal-iso . The default value is minimal-iso . 
Optional: You can register a new infrastructure environment by slurping the pull secret file in the request: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env", "image_type":"full-iso", "cluster_id": USDcluster_id, "cpu_architecture" : "<architecture_name>", 1 "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Note 1 Valid values are x86_64 , arm64 , ppc64le , s390x , and multi . Optional: You can register a new infrastructure environment by writing the configuration to a JSON file and then referencing it in the request: USD cat << EOF > infra-envs.json { "name": "testcluster", "pull_secret": USDPULL_SECRET, "proxy": { "http_proxy": "", "https_proxy": "", "no_proxy": "" }, "ssh_authorized_key": "USDCLUSTER_SSHKEY", "image_type": "full-iso", "cluster_id": "USD{CLUSTER_ID}", "openshift_version": "4.11" } EOF USD curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs" -d @./infra-envs.json -H "Content-Type: application/json" -H "Authorization: Bearer USDAPI_TOKEN" | jq '.id' Assign the returned id to the INFRA_ENV_ID variable and export it: USD export INFRA_ENV_ID=<id> Note Once you create an infrastructure environment and associate it to a cluster definition via the cluster_id , you can see the cluster settings in the Assisted Installer web user interface. If you close your terminal session, you need to re-export the id in a new terminal session. 4.8. Modifying an infrastructure environment You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. Modifying an infrastructure environment is a common operation for adding settings such as networking, SSH keys, or ignition configuration overrides. See the infra-env-update-params model in the API viewer for details on the fields you can set when modifying an infrastructure environment. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO. Prerequisites You have created a new infrastructure environment. Procedure Refresh the API token: USD source refresh-token Modify the infrastructure environment: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "image_type":"minimal-iso", "pull_secret": USDpull_secret[0] | tojson } ')" | jq 4.8.1. Adding kernel arguments Providing kernel arguments to the Red Hat Enterprise Linux CoreOS (RHCOS) kernel via the Assisted Installer means passing specific parameters or options to the kernel at boot time, particularly when you cannot customize the kernel parameters of the discovery ISO. Kernel parameters can control various aspects of the kernel's behavior and the operating system's configuration, affecting hardware interaction, system performance, and functionality. Kernel arguments are used to customize or inform the node's RHCOS kernel about the hardware configuration, debugging preferences, system services, and other low-level settings. The RHCOS installer kargs modify command supports the append , delete , and replace options. You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. 
When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO. Procedure Refresh the API token: USD source refresh-token Modify the kernel arguments: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "kernel_arguments": [{ "operation": "append", "value": "<karg>=<value>" }], 1 "image_type":"minimal-iso", "pull_secret": USDpull_secret[0] | tojson } ')" | jq 1 Replace <karg> with the kernel argument and <value> with the kernel argument value. For example: rd.net.timeout.carrier=60 . You can specify multiple kernel arguments by adding a JSON object for each kernel argument. 4.9. Adding hosts After configuring the cluster resource and infrastructure environment, download the discovery ISO image. You can choose from two images: Full ISO image: Use the full ISO image when booting must be self-contained. The image includes everything needed to boot and start the Assisted Installer agent. The ISO image is about 1GB in size. This is the recommended method for the s390x architecture when installing with RHEL KVM. Minimal ISO image: Use the minimal ISO image when the virtual media connection has limited bandwidth. This is the default setting. The image includes only what the agent requires to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. This option is mandatory in the following scenarios: If you are installing OpenShift Container Platform on Oracle Cloud Infrastructure. If you are installing OpenShift Container Platform on iSCSI boot volumes. Note Currently, ISO images are supported on IBM Z(R) ( s390x ) with KVM, iPXE with z/VM, and LPAR (both static and DPM). For details, see Booting hosts using iPXE . You can boot hosts with the discovery image using three methods. For details, see Booting hosts with the discovery image . Prerequisites You have created a cluster. You have created an infrastructure environment. You have completed the configuration. If the cluster hosts are behind a firewall that requires the use of a proxy, you have configured the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Note The proxy username and password must be URL-encoded. You have selected an image type or will use the default minimal-iso . Procedure Configure the discovery image if needed. For details, see Configuring the discovery image . Refresh the API token: USD source refresh-token Get the download URL: USD curl -H "Authorization: Bearer USD{API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/downloads/image-url Example output { "expires_at": "2024-02-07T20:20:23.000Z", "url": "https://api.openshift.com/api/assisted-images/bytoken/<TOKEN>/<OCP_VERSION>/<CPU_ARCHITECTURE>/<FULL_OR_MINIMAL_IMAGE>.iso" } Download the discovery image: USD wget -O discovery.iso <url> Replace <url> with the download URL from the previous step. Boot the host(s) with the discovery image. Assign a role to host(s). Additional resources Configuring the discovery image Booting hosts with the discovery image Adding hosts on Nutanix with the API Adding hosts on vSphere Assigning roles to hosts Booting hosts using iPXE 4.10. Modifying hosts After adding hosts, modify the hosts as needed. 
The most common modifications are to the host_name and the host_role parameters. You can modify a host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. See the host-update-params model in the API viewer for details on the fields you can set when modifying a host. A host might be one of two roles: master : A host with the master role will operate as a control plane host. worker : A host with the worker role will operate as a worker host. By default, the Assisted Installer sets a host to auto-assign , which means the installation program determines whether the host is a master or worker role automatically. Use the following procedure to set the host's role: Prerequisites You have added hosts to the cluster. Procedure Refresh the API token: USD source refresh-token Get the host IDs: USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Modify the host: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" "host_name" : "worker-1" } ' | jq 1 Replace <host_id> with the ID of the host. 4.10.1. Modifying storage disk configuration Each host retrieved during host discovery can have multiple storage disks. You can optionally modify the default configurations for each disk. Important Starting from OpenShift Container Platform 4.16, you can install a cluster on a single iSCSI boot device using the Assisted Installer. Although OpenShift Container Platform also supports multipathing for iSCSI, this feature is currently not available for Assisted Installer deployments. Prerequisites Configure the cluster and discover the hosts. For details, see Additional resources . Viewing the storage disks You can view the hosts in your cluster, and the disks on each host. This enables you to perform actions on a specific disk. Procedure Refresh the API token: USD source refresh-token Get the host IDs for the cluster: USD curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Example output USD "1022623e-7689-8b2d-7fbd-e6f4d5bb28e5" Note This is the ID of a single host. Multiple host IDs are separated by commas. Get the disks for a specific host: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -H "Authorization: Bearer USD{API_TOKEN}" \ | jq '.inventory | fromjson | .disks' 1 Replace <host_id> with the ID of the relevant host. Example output USD [ { "by_id": "/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506", "by_path": "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0", "drive_type": "HDD", "has_uuid": true, "hctl": "1:2:0:0", "id": "/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506", "installation_eligibility": { "eligible": true, "not_eligible_reasons": null }, "model": "PERC_H710P", "name": "sda", "path": "/dev/sda", "serial": "0006a560141adc3a2d00fb8af960f681", "size_bytes": 6595056500736, "vendor": "DELL", "wwn": "0x6c81f660f98afb002d3adc1a1460a506" } ] Note This is the output for one disk. It contains the disk_id and installation_eligibility properties for the disk. Changing the installation disk The Assisted Installer randomly assigns an installation disk by default. 
If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the disk. You can select any disk whose installation_eligibility property is eligible: true to be the installation disk. Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing over Fibre Channel on the installation disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with an /etc/multipath.conf configuration. For details, see Modifying the DM Multipath configuration file . Procedure Get the host and storage disk IDs. For details, see Viewing the storage disks . Optional: Identify the current installation disk: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -H "Authorization: Bearer USD{API_TOKEN}" \ | jq '.installation_disk_id' 1 Replace <host_id> with the ID of the relevant host. Assign a new installation disk: Note Multipath devices are automatically discovered and listed in the host's inventory. To assign a multipath Fibre Channel disk as the installation disk, choose a disk with "drive_type" set to "Multipath" , rather than to "FC" which indicates a single path. USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -X PATCH \ -H "Content-Type: application/json" \ -H "Authorization: Bearer USD{API_TOKEN}" \ { "disks_selected_config": [ { "id": "<disk_id>", 2 "role": "install" } ] } 1 Replace <host_id> with the ID of the host. 2 Replace <disk_id> with the ID of the new installation disk. Disabling disk formatting The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss. You can choose to disable the formatting of a specific disk. This should be performed with caution, as bootable disks may interfere with the installation process, mainly in terms of boot order. You cannot disable formatting for the installation disk. Procedure Get the host and storage disk IDs. For details, see Viewing the storage disks . Run the following command: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -X PATCH \ -H "Content-Type: application/json" \ -H "Authorization: Bearer USD{API_TOKEN}" \ { "disks_skip_formatting": [ { "disk_id": "<disk_id>", 2 "skip_formatting": true 3 } ] } Note 1 Replace <host_id> with the ID of the host. 2 Replace <disk_id> with the ID of the disk. If there is more than one disk, separate the IDs with a comma. 3 To re-enable formatting, change the value to false . 4.11. Adding custom manifests A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party. To create a custom manifest with the API, use the /v2/clusters/USDCLUSTER_ID/manifests endpoint. You can upload a base64-encoded custom manifest to either the openshift folder or the manifests folder with the Assisted Installer API. There is no limit to the number of custom manifests permitted. You can only upload one base64-encoded JSON manifest at a time. However, each uploaded base64-encoded YAML file can contain multiple custom manifests. 
Uploading a multi-document YAML manifest is faster than adding the YAML files individually. For a file containing a single custom manifest, accepted file extensions include .yaml , .yml , or .json . Single custom manifest example { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "MachineConfig", "metadata": { "labels": { "machineconfiguration.openshift.io/role": "primary" }, "name": "10_primary_storage_config" }, "spec": { "config": { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "</dev/xxyN>", "partitions": [ { "label": "recovery", "startMiB": 32768, "sizeMiB": 16384 } ] } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/recovery", "label": "recovery", "format": "xfs" } ] } } } } For a file containing multiple custom manifests, accepted file types include .yaml or .yml . Multiple custom manifest example apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 --- apiVersion: machineconfiguration.openshift.io/v2 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-openshift-machineconfig-worker-kargs spec: kernelArguments: - loglevel=5 Note When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional. For more information about custom manifests, see Additional Resources . Prerequisites You have generated a valid API_TOKEN . Tokens expire every 15 minutes. You have registered a new cluster definition and exported the cluster_id to the USDCLUSTER_ID BASH variable. Procedure Create a custom manifest file. Save the custom manifest file using the appropriate extension for the file format. Refresh the API token: USD source refresh-token Add the custom manifest to the cluster by executing the following command: USD curl -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/manifests" \ -H "Authorization: Bearer USDAPI_TOKEN" \ -H "Content-Type: application/json" \ -d '{ "file_name":"manifest.json", "folder":"manifests", "content":"'"USD(base64 -w 0 ~/manifest.json)"'" }' | jq Replace manifest.json with the name of your manifest file. The second instance of manifest.json is the path to the file. Ensure the path is correct. Example output { "file_name": "manifest.json", "folder": "manifests" } Note The base64 -w 0 command base64-encodes the manifest as a string and omits carriage returns. Encoding with carriage returns will generate an exception. Verify that the Assisted Installer added the manifest: USD curl -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/manifests/files?folder=manifests&file_name=manifest.json" -H "Authorization: Bearer USDAPI_TOKEN" Replace manifest.json with the name of your manifest file. Additional resources Manifest configuration files Multi-document YAML files 4.12. Preinstallation validations The Assisted Installer ensures the cluster meets the prerequisites before installation, because it eliminates complex postinstallation troubleshooting, thereby saving significant amounts of time and effort. Before installing the cluster, ensure the cluster and each host pass preinstallation validation. Additional resources Preinstallation validations 4.13. 
Installing the cluster After the cluster hosts pass validation, you can install the cluster. Prerequisites You have created a cluster and infrastructure environment. You have added hosts to the infrastructure environment. The hosts have passed validation. Procedure Refresh the API token: USD source refresh-token Install the cluster: USD curl -H "Authorization: Bearer USDAPI_TOKEN" \ -X POST \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/actions/install | jq Complete any postinstallation platform integration steps. Additional resources Nutanix postinstallation configuration vSphere postinstallation configuration
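To watch the installation from the command line, you can poll the cluster resource until it reaches a terminal state. The following is a minimal sketch rather than part of the official procedure: it reuses the refresh-token script, API_TOKEN, and CLUSTER_ID from the earlier steps, and it assumes the cluster resource reports its state in a status field with values such as installing, installed, or error.

#!/bin/bash
# Poll the Assisted Installer API until the cluster installation completes or fails.
# Assumes refresh-token, $API_TOKEN, and $CLUSTER_ID are set up as in the preceding steps.
while true; do
  source refresh-token
  STATUS=$(curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \
    -H "Authorization: Bearer $API_TOKEN" | jq -r '.status')
  echo "$(date) cluster status: $STATUS"
  case "$STATUS" in
    installed|error) break ;;
  esac
  sleep 60
done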
|
[
"export OFFLINE_TOKEN=<copied_token>",
"echo USD{OFFLINE_TOKEN}",
"export API_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )",
"echo USD{API_TOKEN}",
"vim ~/.local/bin/refresh-token",
"export API_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )",
"chmod +x ~/.local/bin/refresh-token",
"source refresh-token",
"curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{API_TOKEN}\" | jq",
"{ \"release_tag\": \"v2.11.3\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-211\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-266\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:78d113a\", \"discovery-agent\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-195\" } }",
"{\"auths\":{\"cloud.openshift.com\":",
"{\\\"auths\\\":{\\\"cloud.openshift.com\\\":",
"export PULL_SECRET=USD(cat ~/Downloads/pull-secret.txt | jq -R .)",
"curl https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' 1 { \"name\": \"testcluster\", \"control_plane_count\": \"3\", \"openshift_version\": \"4.11\", \"pull_secret\": USDpull_secret[0] | tojson, 2 \"base_dns_domain\": \"example.com\" } ')\"",
"echo USD{PULL_SECRET}",
"cat /root/.ssh/id_rsa.pub",
"CLUSTER_SSHKEY=<downloaded_ssh_key>",
"echo USD{CLUSTER_SSHKEY}",
"source refresh-token",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.16\", \\ 1 \"control_plane_count\": \"<number>\", \\ 2 \"cpu_architecture\" : \"<architecture_name>\", \\ 3 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"cat << EOF > cluster.json { \"name\": \"testcluster\", \"openshift_version\": \"4.16\", 1 \"control_plane_count\": \"<number>\", 2 \"base_dns_domain\": \"example.com\", \"network_type\": \"examplenetwork\", \"cluster_network_cidr\":\"11.111.1.0/14\" \"cluster_network_host_prefix\": 11, \"service_network_cidr\": \"111.11.1.0/16\", \"api_vips\":[{\"ip\": \"\"}], \"ingress_vips\": [{\"ip\": \"\"}], \"vip_dhcp_allocation\": false, \"additional_ntp_source\": \"clock.redhat.com,clock2.redhat.com\", \"ssh_public_key\": \"USDCLUSTER_SSHKEY\", \"pull_secret\": USDPULL_SECRET } EOF",
"curl -s -X POST \"https://api.openshift.com/api/assisted-install/v2/clusters\" -d @./cluster.json -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.id'",
"export CLUSTER_ID=<cluster_id>",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq",
"curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.15\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators\": [ { \"name\": \"mce\" } 1 , { \"name\": \"odf\" } ] \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"schedulable_masters\": true 1 } ' | jq",
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"ssh_public_key\": \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZrD4LMkAEeoU2vShhF8VM+cCZtVRgB7tqtsMxms2q3TOJZAgfuqReKYWm+OLOZTD+DO3Hn1pah/mU3u7uJfTUg4wEX0Le8zBu9xJVym0BVmSFkzHfIJVTn6SfZ81NqcalisGWkpmkKXVCdnVAX6RsbHfpGKk9YPQarmRCn5KzkelJK4hrSWpBPjdzkFXaIpf64JBZtew9XVYA3QeXkIcFuq7NBuUH9BonroPEmIXNOa41PUP1IWq3mERNgzHZiuU8Ks/pFuU5HCMvv4qbTOIhiig7vidImHPpqYT/TCkuVi5w0ZZgkkBeLnxWxH0ldrfzgFBYAxnpTU8Ih/4VhG538Ix1hxPaM6cXds2ic71mBbtbSrk+zjtNPaeYk1O7UpcCw4jjHspU/rVV/DY51D5gSiiuaFPBMucnYPgUxy4FMBFfGrmGLIzTKiLzcz0DiSz1jBeTQOX++1nz+KDLBD8CPdi5k4dq7lLkapRk85qdEvgaG5RlHMSPSS3wDrQ51fD8= user@hostname\" } ' | jq",
"curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"olm_operators\": [{\"name\": \"mce\"}, {\"name\": \"cnv\"}], 1 } ' | jq '.id'",
"{ <various cluster properties>, \"monitored_operators\": [ { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"console\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cvo\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"mce\", \"namespace\": \"multicluster-engine\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"multicluster-engine\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cnv\", \"namespace\": \"openshift-cnv\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"hco-operatorhub\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"lvm\", \"namespace\": \"openshift-local-storage\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"local-storage-operator\", \"timeout_seconds\": 4200 } ], <more cluster properties>",
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"<architecture_name>\", 1 \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'",
"cat << EOF > infra-envs.json { \"name\": \"testcluster\", \"pull_secret\": USDPULL_SECRET, \"proxy\": { \"http_proxy\": \"\", \"https_proxy\": \"\", \"no_proxy\": \"\" }, \"ssh_authorized_key\": \"USDCLUSTER_SSHKEY\", \"image_type\": \"full-iso\", \"cluster_id\": \"USD{CLUSTER_ID}\", \"openshift_version\": \"4.11\" } EOF",
"curl -s -X POST \"https://api.openshift.com/api/assisted-install/v2/infra-envs\" -d @./infra-envs.json -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.id'",
"export INFRA_ENV_ID=<id>",
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"image_type\":\"minimal-iso\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq",
"source refresh-token",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"kernel_arguments\": [{ \"operation\": \"append\", \"value\": \"<karg>=<value>\" }], 1 \"image_type\":\"minimal-iso\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq",
"source refresh-token",
"curl -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/downloads/image-url",
"{ \"expires_at\": \"2024-02-07T20:20:23.000Z\", \"url\": \"https://api.openshift.com/api/assisted-images/bytoken/<TOKEN>/<OCP_VERSION>/<CPU_ARCHITECTURE>/<FULL_OR_MINIMAL_IMAGE>.iso\" }",
"wget -O discovery.iso <url>",
"source refresh-token",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" \"host_name\" : \"worker-1\" } ' | jq",
"source refresh-token",
"curl -s \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'",
"\"1022623e-7689-8b2d-7fbd-e6f4d5bb28e5\"",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '.inventory | fromjson | .disks'",
"[ { \"by_id\": \"/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506\", \"by_path\": \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0\", \"drive_type\": \"HDD\", \"has_uuid\": true, \"hctl\": \"1:2:0:0\", \"id\": \"/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506\", \"installation_eligibility\": { \"eligible\": true, \"not_eligible_reasons\": null }, \"model\": \"PERC_H710P\", \"name\": \"sda\", \"path\": \"/dev/sda\", \"serial\": \"0006a560141adc3a2d00fb8af960f681\", \"size_bytes\": 6595056500736, \"vendor\": \"DELL\", \"wwn\": \"0x6c81f660f98afb002d3adc1a1460a506\" } ]",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '.installation_disk_id'",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -X PATCH -H \"Content-Type: application/json\" -H \"Authorization: Bearer USD{API_TOKEN}\" { \"disks_selected_config\": [ { \"id\": \"<disk_id>\", 2 \"role\": \"install\" } ] }",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -X PATCH -H \"Content-Type: application/json\" -H \"Authorization: Bearer USD{API_TOKEN}\" { \"disks_skip_formatting\": [ { \"disk_id\": \"<disk_id>\", 2 \"skip_formatting\": true 3 } ] }",
"{ \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"MachineConfig\", \"metadata\": { \"labels\": { \"machineconfiguration.openshift.io/role\": \"primary\" }, \"name\": \"10_primary_storage_config\" }, \"spec\": { \"config\": { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"</dev/xxyN>\", \"partitions\": [ { \"label\": \"recovery\", \"startMiB\": 32768, \"sizeMiB\": 16384 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/recovery\", \"label\": \"recovery\", \"format\": \"xfs\" } ] } } } }",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 --- apiVersion: machineconfiguration.openshift.io/v2 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-openshift-machineconfig-worker-kargs spec: kernelArguments: - loglevel=5",
"source refresh-token",
"curl -X POST \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/manifests\" -H \"Authorization: Bearer USDAPI_TOKEN\" -H \"Content-Type: application/json\" -d '{ \"file_name\":\"manifest.json\", \"folder\":\"manifests\", \"content\":\"'\"USD(base64 -w 0 ~/manifest.json)\"'\" }' | jq",
"{ \"file_name\": \"manifest.json\", \"folder\": \"manifests\" }",
"curl -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/manifests/files?folder=manifests&file_name=manifest.json\" -H \"Authorization: Bearer USDAPI_TOKEN\"",
"source refresh-token",
"curl -H \"Authorization: Bearer USDAPI_TOKEN\" -X POST https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/actions/install | jq"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/installing-with-api
|
Chapter 5. Control plane architecture
|
Chapter 5. Control plane architecture The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. The cluster itself manages all upgrades to the machines by the actions of the Cluster Version Operator (CVO), the Machine Config Operator, and a set of individual Operators. 5.1. Node configuration management with machine config pools Machines that run control plane components or user workloads are divided into groups based on the types of resources they handle. These groups of machines are called machine config pools (MCP). Each MCP manages a set of nodes and its corresponding machine configs. The role of the node determines which MCP it belongs to; the MCP governs nodes based on its assigned node role label. Nodes in an MCP have the same configuration; this means nodes can be scaled up and torn down in response to increased or decreased workloads. By default, there are two MCPs created by the cluster when it is installed: master and worker . Each default MCP has a defined configuration applied by the Machine Config Operator (MCO), which is responsible for managing MCPs and facilitating MCP upgrades. You can create additional MCPs, or custom pools, to manage nodes that have custom use cases that extend outside of the default node types. Custom pools are pools that inherit their configurations from the worker pool. They use any machine config targeted for the worker pool, but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits its configuration from the worker pool, any change to the worker pool is applied to the custom pool as well. Custom pools that do not inherit their configurations from the worker pool are not supported by the MCO. Note A node can only be included in one MCP. If a node has multiple labels that correspond to several MCPs, like worker,infra , it is managed by the infra custom pool, not the worker pool. Custom pools take priority on selecting nodes to manage based on node labels; nodes that do not belong to a custom pool are managed by the worker pool. It is recommended to have a custom pool for every node role you want to manage in your cluster. For example, if you create infra nodes to handle infra workloads, it is recommended to create a custom infra MCP to group those nodes together. If you apply an infra role label to a worker node so it has the worker,infra dual label, but do not have a custom infra MCP, the MCO considers it a worker node. If you remove the worker label from a node and apply the infra label without grouping it in a custom pool, the node is not recognized by the MCO and is unmanaged by the cluster. Important Any node labeled with the infra role that is only running infra workloads is not counted toward the total number of subscriptions. The MCP managing an infra node is mutually exclusive from how the cluster determines subscription charges; tagging a node with the appropriate infra role and using taints to prevent user workloads from being scheduled on that node are the only requirements for avoiding subscription charges for infra workloads. The MCO applies updates for pools independently; for example, if there is an update that affects all pools, nodes from each pool update in parallel with each other. If you add a custom pool, nodes from that pool also attempt to update concurrently with the master and worker nodes. 
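To make the custom pool idea concrete, the following is a minimal sketch of what a custom infra machine config pool could look like. Treat it as an illustration under assumptions rather than a prescriptive configuration: it follows the worker,infra labeling pattern described above, and the selectors must match the labels actually applied to your nodes and machine configs.

cat << EOF | oc apply -f -
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: infra
spec:
  # Pick up machine configs targeted at the worker role in addition to any
  # configs targeted specifically at the custom infra role.
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
          - worker
          - infra
  # Manage only the nodes that carry the infra node-role label.
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/infra: ""
EOF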
There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift . The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but, it cannot be updated. Additional resources Understanding configuration drift detection . 5.2. Machine roles in OpenShift Container Platform OpenShift Container Platform assigns hosts different roles. These roles define the function of the machine within the cluster. The cluster contains definitions for the standard master and worker role types. Note The cluster also contains the definition for the bootstrap role. Because the bootstrap machine is used only during cluster installation, its function is explained in the cluster installation documentation. 5.2.1. Control plane and node host compatibility The OpenShift Container Platform version must match between control plane host and node host. For example, in a 4.11 cluster, all control plane hosts must be 4.11 and all nodes must be 4.11. Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from OpenShift Container Platform 4.10 to 4.11, some nodes will upgrade to 4.11 before others. Prolonged skewing of control plane hosts and node hosts might expose older compute machines to bugs and missing features. Users should resolve skewed control plane hosts and node hosts as soon as possible. The kubelet service must not be newer than kube-apiserver , and can be up to two minor versions older depending on whether your OpenShift Container Platform version is odd or even. The table below shows the appropriate version compatibility: OpenShift Container Platform version Supported kubelet skew Odd OpenShift Container Platform minor versions [1] Up to one version older Even OpenShift Container Platform minor versions [2] Up to two versions older For example, OpenShift Container Platform 4.5, 4.7, 4.9, 4.11. For example, OpenShift Container Platform 4.6, 4.8, 4.10. 5.2.2. Cluster workers In a Kubernetes cluster, the worker nodes are where the actual workloads requested by Kubernetes users run and are managed. The worker nodes advertise their capacity and the scheduler, which is part of the master services, determines on which nodes to start containers and pods. Important services run on each worker node, including CRI-O, which is the container engine, Kubelet, which is the service that accepts and fulfills requests for running and stopping container workloads, and a service proxy, which manages communication for pods across workers. In OpenShift Container Platform, machine sets control the worker machines. Machines with the worker role drive compute workloads that are governed by a specific machine pool that autoscales them. Because OpenShift Container Platform has the capacity to support multiple machine types, the worker machines are classed as compute machines. In this release, the terms worker machine and compute machine are used interchangeably because the only default type of compute machine is the worker machine. In future versions of OpenShift Container Platform, different types of compute machines, such as infrastructure machines, might be used by default. Note Machine sets are groupings of machine resources under the machine-api namespace. 
Machine sets are configurations that are designed to start new machines on a specific cloud provider. Conversely, machine config pools (MCPs) are part of the Machine Config Operator (MCO) namespace. An MCP is used to group machines together so the MCO can manage their configurations and facilitate their upgrades. 5.2.3. Cluster masters In a Kubernetes cluster, the control plane nodes run services that are required to control the Kubernetes cluster. In OpenShift Container Platform, the control plane machines are the control plane. They contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster. Because all of the machines with the control plane role are control plane machines, the terms master and control plane are used interchangeably to describe them. Instead of being grouped into a machine set, control plane machines are defined by a series of standalone machine API resources. Extra controls apply to control plane machines to prevent you from deleting all control plane machines and breaking your cluster. Note Exactly three control plane nodes must be used for all production deployments. Services that fall under the Kubernetes category on the master include the Kubernetes API server, etcd, the Kubernetes controller manager, and the Kubernetes scheduler. Table 5.1. Kubernetes services that run on the control plane Component Description Kubernetes API server The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also provides a focal point for the shared state of the cluster. etcd etcd stores the persistent master state while other components watch etcd for changes to bring themselves into the specified state. Kubernetes controller manager The Kubernetes controller manager watches etcd for changes to objects such as replication, namespace, and service account controller objects, and then uses the API to enforce the specified state. Several such processes create a cluster with one active leader at a time. Kubernetes scheduler The Kubernetes scheduler watches for newly created pods without an assigned node and selects the best node to host the pod. There are also OpenShift services that run on the control plane, which include the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth server. Table 5.2. OpenShift services that run on the control plane Component Description OpenShift API server The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates. The OpenShift API server is managed by the OpenShift API Server Operator. OpenShift controller manager The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state. The OpenShift controller manager is managed by the OpenShift Controller Manager Operator. OpenShift OAuth API server The OpenShift OAuth API server validates and configures the data to authenticate to OpenShift Container Platform, such as users, groups, and OAuth tokens. The OpenShift OAuth API server is managed by the Cluster Authentication Operator. OpenShift OAuth server Users request tokens from the OpenShift OAuth server to authenticate themselves to the API. The OpenShift OAuth server is managed by the Cluster Authentication Operator. Some of these services on the control plane machines run as systemd services, while others run as static pods. 
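As a quick illustration of this split, you can list the pods that back the Kubernetes control plane services on a running cluster. The commands below are a sketch that assumes cluster-admin access with the oc CLI and uses the openshift-* control plane namespaces named later in this section.

# The core Kubernetes control plane components run as static pods in
# dedicated namespaces on the control plane nodes.
oc get pods -n openshift-etcd -o wide
oc get pods -n openshift-kube-apiserver -o wide
oc get pods -n openshift-kube-controller-manager -o wide
oc get pods -n openshift-kube-scheduler -o wide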
Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For control plane machines, those include sshd, which allows remote login. It also includes services such as: The CRI-O container engine (crio), which runs and manages the containers. OpenShift Container Platform 4.11 uses CRI-O instead of the Docker Container Engine. Kubelet (kubelet), which accepts requests for managing containers on the machine from master services. CRI-O and Kubelet must run directly on the host as systemd services because they need to be running before you can run other containers. The installer-* and revision-pruner-* control plane pods must run with root permissions because they write to the /etc/kubernetes directory, which is owned by the root user. These pods are in the following namespaces: openshift-etcd openshift-kube-apiserver openshift-kube-controller-manager openshift-kube-scheduler 5.3. Operators in OpenShift Container Platform Operators are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. Operators also offer a more granular configuration experience. You configure each component by modifying the API that the Operator exposes instead of modifying a global configuration file. Because CRI-O and the Kubelet run on every node, almost every other cluster function can be managed on the control plane by using Operators. Components that are added to the control plane by using Operators include critical networking and credential services. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions. Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications. 5.3.1. Cluster Operators In OpenShift Container Platform, all cluster functions are divided into a series of default cluster Operators . Cluster Operators manage a particular area of cluster functionality, such as cluster-wide application logging, management of the Kubernetes control plane, or the machine provisioning system. Cluster Operators are represented by a ClusterOperator object, which cluster administrators can view in the OpenShift Container Platform web console from the Administration Cluster Settings page. Each cluster Operator provides a simple API for determining cluster functionality. The Operator hides the details of managing the lifecycle of that component. Operators can manage a single component or tens of components, but the end goal is always to reduce operational burden by automating common actions. Additional resources Cluster Operators reference 5.3.2. Add-on Operators Operator Lifecycle Manager (OLM) and OperatorHub are default components in OpenShift Container Platform that help manage Kubernetes-native applications as Operators. 
Together they provide the system for discovering, installing, and managing the optional add-on Operators available on the cluster. Using OperatorHub in the OpenShift Container Platform web console, cluster administrators and authorized users can select Operators to install from catalogs of Operators. After installing an Operator from OperatorHub, it can be made available globally or in specific namespaces to run in user applications. Default catalog sources are available that include Red Hat Operators, certified Operators, and community Operators. Cluster administrators can also add their own custom catalog sources, which can contain a custom set of Operators. Developers can use the Operator SDK to help author custom Operators that take advantage of OLM features, as well. Their Operator can then be bundled and added to a custom catalog source, which can be added to a cluster and made available to users. Note OLM does not manage the cluster Operators that comprise the OpenShift Container Platform architecture. Additional resources For more details on running add-on Operators in OpenShift Container Platform, see the Operators guide sections on Operator Lifecycle Manager (OLM) and OperatorHub . For more details on the Operator SDK, see Developing Operators . 5.4. About the Machine Config Operator OpenShift Container Platform 4.11 integrates both operating system and cluster management. Because the cluster manages its own updates, including updates to Red Hat Enterprise Linux CoreOS (RHCOS) on cluster nodes, OpenShift Container Platform provides an opinionated lifecycle management experience that simplifies the orchestration of node upgrades. OpenShift Container Platform employs three daemon sets and controllers to simplify node management. These daemon sets orchestrate operating system updates and configuration changes to the hosts by using standard Kubernetes-style constructs. They include: The machine-config-controller , which coordinates machine upgrades from the control plane. It monitors all of the cluster nodes and orchestrates their configuration updates. The machine-config-daemon daemon set, which runs on each node in the cluster and updates a machine to configuration as defined by machine config and as instructed by the MachineConfigController. When the node detects a change, it drains off its pods, applies the update, and reboots. These changes come in the form of Ignition configuration files that apply the specified machine configuration and control kubelet configuration. The update itself is delivered in a container. This process is key to the success of managing OpenShift Container Platform and RHCOS updates together. The machine-config-server daemon set, which provides the Ignition config files to control plane nodes as they join the cluster. The machine configuration is a subset of the Ignition configuration. The machine-config-daemon reads the machine configuration to see if it needs to do an OSTree update or if it must apply a series of systemd kubelet file changes, configuration changes, or other changes to the operating system or OpenShift Container Platform configuration. When you perform node management operations, you create or modify a KubeletConfig custom resource (CR). Important When changes are made to a machine configuration, the Machine Config Operator (MCO) automatically reboots all corresponding nodes in order for the changes to take effect. 
To prevent the nodes from automatically rebooting after machine configuration changes, before making the changes, you must pause the autoreboot process by setting the spec.paused field to true in the corresponding machine config pool. When paused, machine configuration changes are not applied until you set the spec.paused field to false and the nodes have rebooted into the new configuration. Make sure the pools are unpaused when the CA certificate rotation happens. If the MCPs are paused, the MCO cannot push the newly rotated certificates to those nodes. This causes the cluster to become degraded and causes failure in multiple oc commands, including oc debug , oc logs , oc exec , and oc attach . You receive alerts in the Alerting UI of the OpenShift Container Platform web console if an MCP is paused when the certificates are rotated. The following modifications do not trigger a node reboot: When the MCO detects any of the following changes, it applies the update without draining or rebooting the node: Changes to the SSH key in the spec.config.passwd.users.sshAuthorizedKeys parameter of a machine config. Changes to the global pull secret or pull secret in the openshift-config namespace. Automatic rotation of the /etc/kubernetes/kubelet-ca.crt certificate authority (CA) by the Kubernetes API Server Operator. When the MCO detects changes to the /etc/containers/registries.conf file, such as adding or editing an ImageContentSourcePolicy (ICSP) object, it drains the corresponding nodes, applies the changes, and uncordons the nodes.The node drain does not happen for the following changes: The addition of a registry with the pull-from-mirror = "digest-only" parameter set for each mirror. The addition of a mirror with the pull-from-mirror = "digest-only" parameter set in a registry. The addition of items to the unqualified-search-registries list. There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift . The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but, it cannot be updated. Additional resources For more information about detecting configuration drift, see Understanding configuration drift detection . For information about preventing the control plane machines from rebooting after the Machine Config Operator makes changes to the machine configuration, see Disabling Machine Config Operator from automatically rebooting . 5.5. Overview of hosted control planes (Technology Preview) You can use hosted control planes for Red Hat OpenShift Container Platform to reduce management costs, optimize cluster deployment time, and separate management and workload concerns so that you can focus on your applications. You can enable hosted control planes as a Technology Preview feature by using the multicluster engine for Kubernetes operator version 2.0 or later on Amazon Web Services (AWS). Important Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.5.1. Architecture of hosted control planes OpenShift Container Platform is often deployed in a coupled, or standalone, model, where a cluster consists of a control plane and a data plane. The control plane includes an API endpoint, a storage endpoint, a workload scheduler, and an actuator that ensures state. The data plane includes compute, storage, and networking where workloads and applications run. The standalone control plane is hosted by a dedicated group of nodes, which can be physical or virtual, with a minimum number to ensure quorum. The network stack is shared. Administrator access to a cluster offers visibility into the cluster's control plane, machine management APIs, and other components that contribute to the state of a cluster. Although the standalone model works well, some situations require an architecture where the control plane and data plane are decoupled. In those cases, the data plane is on a separate network domain with a dedicated physical hosting environment. The control plane is hosted by using high-level primitives such as deployments and stateful sets that are native to Kubernetes. The control plane is treated as any other workload. 5.5.2. Benefits of hosted control planes With hosted control planes for OpenShift Container Platform, you can pave the way for a true hybrid-cloud approach and enjoy several other benefits. The security boundaries between management and workloads are stronger because the control plane is decoupled and hosted on a dedicated hosting service cluster. As a result, you are less likely to leak credentials for clusters to other users. Because infrastructure secret account management is also decoupled, cluster infrastructure administrators cannot accidentally delete control plane infrastructure. With hosted control planes, you can run many control planes on fewer nodes. As a result, clusters are more affordable. Because the control planes consist of pods that are launched on OpenShift Container Platform, control planes start quickly. The same principles apply to control planes and workloads, such as monitoring, logging, and auto-scaling. From an infrastructure perspective, you can push registries, HAProxy, cluster monitoring, storage nodes, and other infrastructure components to the tenant's cloud provider account, isolating usage to the tenant. From an operational perspective, multicluster management is more centralized, which results in fewer external factors that affect the cluster status and consistency. Site reliability engineers have a central place to debug issues and navigate to the cluster data plane, which can lead to shorter Time to Resolution (TTR) and greater productivity. Additional resources Hypershift add-on (Technology Preview) Leveraging hosted control plane clusters (Technology Preview)
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/architecture/control-plane
|
4.14. RHEA-2012:0868 - new packages: libwacom
|
4.14. RHEA-2012:0868 - new packages: libwacom New libwacom packages are now available for Red Hat Enterprise Linux 6. The libwacom packages contain a library that provides access to a tablet model database and expose its contents to applications, allowing for tablet-specific user interfaces. With libwacom, the GNOME tools can automatically configure screen mappings and calibrations, and provide device-specific configurations. This enhancement update adds the libwacom packages to Red Hat Enterprise Linux 6. (BZ# 786100 ) All users who require libwacom should install these new packages.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/rhea-2012-0868
|
Chapter 4. Querying embedded caches
|
Chapter 4. Querying embedded caches Use embedded queries when you add Data Grid as a library to custom applications. Protobuf mapping is not required with embedded queries. Indexing and querying are both done on top of Java objects. 4.1. Querying embedded caches This section explains how to query an embedded cache using an example cache named "books" that stores indexed Book instances. In this example, each Book instance defines which properties are indexed and specifies some advanced indexing options with Hibernate Search annotations as follows: Book.java package org.infinispan.sample; import java.time.LocalDate; import java.util.HashSet; import java.util.Set; import org.infinispan.api.annotations.indexing.*; // Annotate values with @Indexed to add them to indexes // Annotate each field according to how you want to index it @Indexed public class Book { @Keyword String title; @Text String description; @Keyword String isbn; @Basic LocalDate publicationDate; @Embedded Set<Author> authors = new HashSet<Author>(); } Author.java package org.infinispan.sample; import org.infinispan.api.annotations.indexing.Text; public class Author { @Text String name; @Text String surname; } Procedure Configure Data Grid to index the "books" cache and specify org.infinispan.sample.Book as the entity to index. <distributed-cache name="books"> <indexing path="USD{user.home}/index"> <indexed-entities> <indexed-entity>org.infinispan.sample.Book</indexed-entity> </indexed-entities> </indexing> </distributed-cache> Obtain the cache. import org.infinispan.Cache; import org.infinispan.manager.DefaultCacheManager; import org.infinispan.manager.EmbeddedCacheManager; EmbeddedCacheManager manager = new DefaultCacheManager("infinispan.xml"); Cache<String, Book> cache = manager.getCache("books"); Perform queries for fields in the Book instances that are stored in the Data Grid cache, as in the following example: // Get the query factory from the cache QueryFactory queryFactory = org.infinispan.query.Search.getQueryFactory(cache); // Create an Ickle query that performs a full-text search using the ':' operator on the 'title' and 'authors.name' fields // You can perform full-text search only on indexed caches Query<Book> fullTextQuery = queryFactory.create("FROM org.infinispan.sample.Book b WHERE b.title:'infinispan' AND b.authors.name:'sanne'"); // Use the '=' operator to query fields in caches that are indexed or not // Non full-text operators apply only to fields that are not analyzed Query<Book> exactMatchQuery=queryFactory.create("FROM org.infinispan.sample.Book b WHERE b.isbn = '12345678' AND b.authors.name : 'sanne'"); // You can use full-text and non-full text operators in the same query Query<Book> query=queryFactory.create("FROM org.infinispan.sample.Book b where b.authors.name : 'Stephen' and b.description : (+'dark' -'tower')"); // Get the results List<Book> found=query.execute().list(); 4.2. Entity mapping annotations Add annotations to your Java classes to map your entities to indexes. Hibernate Search API Data Grid uses the Hibernate Search API to define fine grained configuration for indexing at entity level. This configuration includes which fields are annotated, which analyzers should be used, how to map nested objects, and so on. The following sections provide information that applies to entity mapping annotations for use with Data Grid. For complete detail about these annotations, you should refer to the Hibernate Search manual . 
@DocumentId Unlike Hibernate Search, using @DocumentId to mark a field as identifier does not apply to Data Grid values; in Data Grid the identifier for all @Indexed objects is the key used to store the value. You can still customize how the key is indexed using a combination of @Transformable , custom types and custom FieldBridge implementations. @Transformable keys The key for each value needs to be indexed as well, and the key instance must be transformed in a String . Data Grid includes some default transformation routines to encode common primitives, but to use a custom key you must provide an implementation of org.infinispan.query.Transformer . Registering a key Transformer via annotations You can annotate your key class with org.infinispan.query.Transformable and your custom transformer implementation will be picked up automatically: @Transformable(transformer = CustomTransformer.class) public class CustomKey { ... } public class CustomTransformer implements Transformer { @Override public Object fromString(String s) { ... return new CustomKey(...); } @Override public String toString(Object customType) { CustomKey ck = (CustomKey) customType; return ... } } Registering a key Transformer via the cache indexing configuration Use the key-transformers xml element in both embedded and server config: <replicated-cache name="test"> <indexing auto-config="true"> <key-transformers> <key-transformer key="com.mycompany.CustomKey" transformer="com.mycompany.CustomTransformer"/> </key-transformers> </indexing> </replicated-cache> Alternatively, use the Java configuration API (embedded mode): ConfigurationBuilder builder = ... builder.indexing().enable() .addKeyTransformer(CustomKey.class, CustomTransformer.class);
|
[
"package org.infinispan.sample; import java.time.LocalDate; import java.util.HashSet; import java.util.Set; import org.infinispan.api.annotations.indexing.*; // Annotate values with @Indexed to add them to indexes // Annotate each field according to how you want to index it @Indexed public class Book { @Keyword String title; @Text String description; @Keyword String isbn; @Basic LocalDate publicationDate; @Embedded Set<Author> authors = new HashSet<Author>(); }",
"package org.infinispan.sample; import org.infinispan.api.annotations.indexing.Text; public class Author { @Text String name; @Text String surname; }",
"<distributed-cache name=\"books\"> <indexing path=\"USD{user.home}/index\"> <indexed-entities> <indexed-entity>org.infinispan.sample.Book</indexed-entity> </indexed-entities> </indexing> </distributed-cache>",
"import org.infinispan.Cache; import org.infinispan.manager.DefaultCacheManager; import org.infinispan.manager.EmbeddedCacheManager; EmbeddedCacheManager manager = new DefaultCacheManager(\"infinispan.xml\"); Cache<String, Book> cache = manager.getCache(\"books\");",
"// Get the query factory from the cache QueryFactory queryFactory = org.infinispan.query.Search.getQueryFactory(cache); // Create an Ickle query that performs a full-text search using the ':' operator on the 'title' and 'authors.name' fields // You can perform full-text search only on indexed caches Query<Book> fullTextQuery = queryFactory.create(\"FROM org.infinispan.sample.Book b WHERE b.title:'infinispan' AND b.authors.name:'sanne'\"); // Use the '=' operator to query fields in caches that are indexed or not // Non full-text operators apply only to fields that are not analyzed Query<Book> exactMatchQuery=queryFactory.create(\"FROM org.infinispan.sample.Book b WHERE b.isbn = '12345678' AND b.authors.name : 'sanne'\"); // You can use full-text and non-full text operators in the same query Query<Book> query=queryFactory.create(\"FROM org.infinispan.sample.Book b where b.authors.name : 'Stephen' and b.description : (+'dark' -'tower')\"); // Get the results List<Book> found=query.execute().list();",
"@Transformable(transformer = CustomTransformer.class) public class CustomKey { } public class CustomTransformer implements Transformer { @Override public Object fromString(String s) { return new CustomKey(...); } @Override public String toString(Object customType) { CustomKey ck = (CustomKey) customType; return } }",
"<replicated-cache name=\"test\"> <indexing auto-config=\"true\"> <key-transformers> <key-transformer key=\"com.mycompany.CustomKey\" transformer=\"com.mycompany.CustomTransformer\"/> </key-transformers> </indexing> </replicated-cache>",
"ConfigurationBuilder builder = builder.indexing().enable() .addKeyTransformer(CustomKey.class, CustomTransformer.class);"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/querying_data_grid_caches/query-embedded
|
19.4. Configuration Examples
|
19.4. Configuration Examples 19.4.1. Squid Connecting to Non-Standard Ports The following example provides a real-world demonstration of how SELinux complements Squid by enforcing the above Boolean and by default only allowing access to certain ports. This example will then demonstrate how to change the Boolean and show that access is then allowed. Note that this is an example only and demonstrates how SELinux can affect a simple configuration of Squid. Comprehensive documentation of Squid is beyond the scope of this document. See the official Squid documentation for further details. This example assumes that the Squid host has two network interfaces, Internet access, and that any firewall has been configured to allow access on the internal interface using the default TCP port on which Squid listens (TCP 3128). Confirm that the squid is installed: If the package is not installed, use the yum utility as root to install it: Edit the main configuration file, /etc/squid/squid.conf , and confirm that the cache_dir directive is uncommented and looks similar to the following: This line specifies the default settings for the cache_dir directive to be used in this example; it consists of the Squid storage format ( ufs ), the directory on the system where the cache resides ( /var/spool/squid ), the amount of disk space in megabytes to be used for the cache ( 100 ), and finally the number of first-level and second-level cache directories to be created ( 16 and 256 respectively). In the same configuration file, make sure the http_access allow localnet directive is uncommented. This allows traffic from the localnet ACL which is automatically configured in a default installation of Squid on Red Hat Enterprise Linux. It will allow client machines on any existing RFC1918 network to have access through the proxy, which is sufficient for this simple example. In the same configuration file, make sure the visible_hostname directive is uncommented and is configured to the host name of the machine. The value should be the fully qualified domain name (FQDN) of the host: As root, enter the following command to start the squid daemon. As this is the first time squid has started, this command will initialise the cache directories as specified above in the cache_dir directive and will then start the daemon: Ensure that squid starts successfully. The output will include the information below, only the time stamp will differ: Confirm that the squid process ID (PID) has started as a confined service, as seen here by the squid_var_run_t value: At this point, a client machine connected to the localnet ACL configured earlier is successfully able to use the internal interface of this host as its proxy. This can be configured in the settings for all common web browsers, or system-wide. Squid is now listening on the default port of the target machine (TCP 3128), but the target machine will only allow outgoing connections to other services on the Internet through common ports. This is a policy defined by SELinux itself. SELinux will deny access to non-standard ports, as shown in the step: When a client makes a request using a non-standard port through the Squid proxy such as a website listening on TCP port 10000, a denial similar to the following is logged: To allow this access, the squid_connect_any Boolean must be modified, as it is disabled by default: Note Do not use the -P option if you do not want setsebool changes to persist across reboots. 
The client will now be able to access non-standard ports on the Internet, because Squid is now permitted to initiate connections to any port on behalf of its clients.
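As a final check, you can confirm the new Boolean value and repeat a request to a non-standard port through the proxy from a client machine. The commands below are an illustrative sketch; the proxy host name matches the visible_hostname used earlier in this example, and the target URL is a placeholder for a site listening on TCP port 10000.

# Confirm that the Boolean is now enabled.
getsebool squid_connect_any

# From a client on the localnet ACL, request a site on a non-standard port
# through the proxy; SELinux previously denied this connection.
curl --proxy http://squid.example.com:3128 http://www.example.com:10000/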
|
[
"~]USD rpm -q squid package squid is not installed",
"~]# yum install squid",
"cache_dir ufs /var/spool/squid 100 16 256",
"visible_hostname squid.example.com",
"~]# systemctl start squid.service",
"~]# systemctl status squid.service squid.service - Squid caching proxy Loaded: loaded (/usr/lib/systemd/system/squid.service; disabled) Active: active (running) since Thu 2014-02-06 15:00:24 CET; 6s ago",
"~]# ls -lZ /var/run/squid.pid -rw-r--r--. root squid unconfined_u:object_r: squid_var_run_t :s0 /var/run/squid.pid",
"SELinux is preventing the squid daemon from connecting to network port 10000",
"~]# setsebool -P squid_connect_any on"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-squid_caching_proxy-configuration_examples
|
Chapter 14. Backup and restore
|
Chapter 14. Backup and restore 14.1. Backup and restore by using VM snapshots You can back up and restore virtual machines (VMs) by using snapshots. Snapshots are supported by the following storage providers: Red Hat OpenShift Data Foundation Any other cloud storage provider with the Container Storage Interface (CSI) driver that supports the Kubernetes Volume Snapshot API Online snapshots have a default time deadline of five minutes ( 5m ) that can be changed, if needed. Important Online snapshots are supported for virtual machines that have hot plugged virtual disks. However, hot plugged disks that are not in the virtual machine specification are not included in the snapshot. To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent if it is not included with your operating system. The QEMU guest agent is included with the default Red Hat templates. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. 14.1.1. About snapshots A snapshot represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a development version. A VM snapshot is created from a VM that is powered off (Stopped state) or powered on (Running state). When taking a snapshot of a running VM, the controller checks that the QEMU guest agent is installed and running. If so, it freezes the VM file system before taking the snapshot, and thaws the file system after the snapshot is taken. The snapshot stores a copy of each Container Storage Interface (CSI) volume attached to the VM and a copy of the VM specification and metadata. Snapshots cannot be changed after creation. You can perform the following snapshot actions: Create a new snapshot Create a copy of a virtual machine from a snapshot List all snapshots attached to a specific VM Restore a VM from a snapshot Delete an existing VM snapshot VM snapshot controller and custom resources The VM snapshot feature introduces three new API objects defined as custom resource definitions (CRDs) for managing snapshots: VirtualMachineSnapshot : Represents a user request to create a snapshot. It contains information about the current state of the VM. VirtualMachineSnapshotContent : Represents a provisioned resource on the cluster (a snapshot). It is created by the VM snapshot controller and contains references to all resources required to restore the VM. VirtualMachineRestore : Represents a user request to restore a VM from a snapshot. The VM snapshot controller binds a VirtualMachineSnapshotContent object with the VirtualMachineSnapshot object for which it was created, with a one-to-one mapping. 14.1.2. About application-consistent snapshots and backups You can configure application-consistent snapshots and backups for Linux or Windows virtual machines (VMs) through a cycle of freezing and thawing. For any application, you can either configure a script on a Linux VM or register on a Windows VM to be notified when a snapshot or backup is due to begin. 
On a Linux VM, freeze and thaw processes trigger automatically when a snapshot is taken or a backup is started by using, for example, a plugin from Velero or another backup vendor. The freeze process, performed by QEMU Guest Agent (QEMU GA) freeze hooks, ensures that before the snapshot or backup of a VM occurs, all of the VM's filesystems are frozen and each appropriately configured application is informed that a snapshot or backup is about to start. This notification affords each application the opportunity to quiesce its state. Depending on the application, quiescing might involve temporarily refusing new requests, finishing in-progress operations, and flushing data to disk. The operating system is then directed to quiesce the filesystems by flushing outstanding writes to disk and freezing new write activity. All new connection requests are refused. When all applications have become inactive, the QEMU GA freezes the filesystems, and a snapshot is taken or a backup initiated. After the taking of the snapshot or start of the backup, the thawing process begins. Filesystems writing is reactivated and applications receive notification to resume normal operations. The same cycle of freezing and thawing is available on a Windows VM. Applications register with the Volume Shadow Copy Service (VSS) to receive notifications that they should flush out their data because a backup or snapshot is imminent. Thawing of the applications after the backup or snapshot is complete returns them to an active state. For more details, see the Windows Server documentation about the Volume Shadow Copy Service. 14.1.3. Creating snapshots You can create snapshots of virtual machines (VMs) by using the OpenShift Container Platform web console or the command line. 14.1.3.1. Creating a snapshot by using the web console You can create a snapshot of a virtual machine (VM) by using the OpenShift Container Platform web console. The VM snapshot includes disks that meet the following requirements: Either a data volume or a persistent volume claim Belong to a storage class that supports Container Storage Interface (CSI) volume snapshots Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click the Snapshots tab and then click Take Snapshot . Enter the snapshot name. Expand Disks included in this Snapshot to see the storage volumes to be included in the snapshot. If your VM has disks that cannot be included in the snapshot and you wish to proceed, select I am aware of this warning and wish to proceed . Click Save . 14.1.3.2. Creating a snapshot by using the command line You can create a virtual machine (VM) snapshot for an offline or online VM by creating a VirtualMachineSnapshot object. Prerequisites Ensure that the persistent volume claims (PVCs) are in a storage class that supports Container Storage Interface (CSI) volume snapshots. Install the OpenShift CLI ( oc ). Optional: Power down the VM for which you want to create a snapshot. 
Procedure Create a YAML file to define a VirtualMachineSnapshot object that specifies the name of the new VirtualMachineSnapshot and the name of the source VM as in the following example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> Create the VirtualMachineSnapshot object: USD oc create -f <snapshot_name>.yaml The snapshot controller creates a VirtualMachineSnapshotContent object, binds it to the VirtualMachineSnapshot , and updates the status and readyToUse fields of the VirtualMachineSnapshot object. Optional: If you are taking an online snapshot, you can use the wait command and monitor the status of the snapshot: Enter the following command: USD oc wait <vm_name> <snapshot_name> --for condition=Ready Verify the status of the snapshot: InProgress - The online snapshot operation is still in progress. Succeeded - The online snapshot operation completed successfully. Failed - The online snapshot operation failed. Note Online snapshots have a default time deadline of five minutes ( 5m ). If the snapshot does not complete successfully in five minutes, the status is set to failed . Afterwards, the file system will be thawed and the VM unfrozen but the status remains failed until you delete the failed snapshot image. To change the default time deadline, add the FailureDeadline attribute to the VM snapshot spec with the time designated in minutes ( m ) or in seconds ( s ) that you want to specify before the snapshot operation times out. To set no deadline, you can specify 0 , though this is generally not recommended, as it can result in an unresponsive VM. If you do not specify a unit of time such as m or s , the default is seconds ( s ). Verification Verify that the VirtualMachineSnapshot object is created and bound with VirtualMachineSnapshotContent and that the readyToUse flag is set to true : USD oc describe vmsnapshot <snapshot_name> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: "2020-09-30T14:41:51Z" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: "3897" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "False" 1 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:42:03Z" reason: Operation complete status: "True" 2 type: Ready creationTime: "2020-09-30T14:42:03Z" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4 1 The status field of the Progressing condition specifies if the snapshot is still being created. 2 The status field of the Ready condition specifies if the snapshot creation process is complete. 3 Specifies if the snapshot is ready to be used. 4 Specifies that the snapshot is bound to a VirtualMachineSnapshotContent object created by the snapshot controller. Check the spec:volumeBackups property of the VirtualMachineSnapshotContent resource to verify that the expected PVCs are included in the snapshot. 14.1.4. 
Verifying online snapshots by using snapshot indications Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline virtual machine (VM) snapshot operations. Indications are helpful in describing details about the online snapshot creation. Prerequisites You must have attempted to create an online VM snapshot. Procedure Display the output from the snapshot indications by performing one of the following actions: Use the command line to view indicator output in the status stanza of the VirtualMachineSnapshot object YAML. In the web console, click VirtualMachineSnapshot Status in the Snapshot details screen. Verify the status of your online VM snapshot by viewing the values of the status.indications parameter: Online indicates that the VM was running during online snapshot creation. GuestAgent indicates that the QEMU guest agent was running during online snapshot creation. NoGuestAgent indicates that the QEMU guest agent was not running during online snapshot creation. The QEMU guest agent could not be used to freeze and thaw the file system, either because the QEMU guest agent was not installed or running or due to another error. 14.1.5. Restoring virtual machines from snapshots You can restore virtual machines (VMs) from snapshots by using the OpenShift Container Platform web console or the command line. 14.1.5.1. Restoring a VM from a snapshot by using the web console You can restore a virtual machine (VM) to a configuration represented by a snapshot in the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. If the VM is running, click the options menu and select Stop to power it down. Click the Snapshots tab to view a list of snapshots associated with the VM. Select a snapshot to open the Snapshot Details screen. Click the options menu and select Restore VirtualMachine from snapshot . Click Restore . 14.1.5.2. Restoring a VM from a snapshot by using the command line You can restore an existing virtual machine (VM) to a configuration by using the command line. You can only restore from an offline VM snapshot. Prerequisites Power down the VM you want to restore. Procedure Create a YAML file to define a VirtualMachineRestore object that specifies the name of the VM you want to restore and the name of the snapshot to be used as the source as in the following example: apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name> Create the VirtualMachineRestore object: USD oc create -f <vm_restore>.yaml The snapshot controller updates the status fields of the VirtualMachineRestore object and replaces the existing VM configuration with the snapshot content. 
Verification Verify that the VM is restored to the state represented by the snapshot and that the complete flag is set to true : USD oc get vmrestore <vm_restore> Example output apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: "2020-09-30T14:46:27Z" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: "5512" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "False" 2 type: Progressing - lastProbeTime: null lastTransitionTime: "2020-09-30T14:46:28Z" reason: Operation complete status: "True" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: "2020-09-30T14:46:28Z" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1 1 Specifies if the process of restoring the VM to the state represented by the snapshot is complete. 2 The status field of the Progressing condition specifies if the VM is still being restored. 3 The status field of the Ready condition specifies if the VM restoration process is complete. 14.1.6. Deleting snapshots You can delete snapshots of virtual machines (VMs) by using the OpenShift Container Platform web console or the command line. 14.1.6.1. Deleting a snapshot by using the web console You can delete an existing virtual machine (VM) snapshot by using the web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click the Snapshots tab to view a list of snapshots associated with the VM. Click the options menu beside a snapshot and select Delete snapshot . Click Delete . 14.1.6.2. Deleting a virtual machine snapshot in the CLI You can delete an existing virtual machine (VM) snapshot by deleting the appropriate VirtualMachineSnapshot object. Prerequisites Install the OpenShift CLI ( oc ). Procedure Delete the VirtualMachineSnapshot object: USD oc delete vmsnapshot <snapshot_name> The snapshot controller deletes the VirtualMachineSnapshot along with the associated VirtualMachineSnapshotContent object. Verification Verify that the snapshot is deleted and no longer attached to this VM: USD oc get vmsnapshot 14.1.7. Additional resources CSI Volume Snapshots 14.2. Backing up and restoring virtual machines Important Red Hat supports using OpenShift Virtualization 4.14 or later with OADP 1.3.x or later. OADP versions earlier than 1.3.0 are not supported for back up and restore of OpenShift Virtualization. Back up and restore virtual machines by using the OpenShift API for Data Protection . You can install the OpenShift API for Data Protection (OADP) with OpenShift Virtualization by installing the OADP Operator and configuring a backup location. You can then install the Data Protection Application. 
Note OpenShift API for Data Protection with OpenShift Virtualization supports the following backup and restore storage options: Container Storage Interface (CSI) backups Container Storage Interface (CSI) backups with DataMover The following storage options are excluded: File system backup and restore Volume snapshot backup and restore For more information, see Backing up applications with File System Backup: Kopia or Restic . To install the OADP Operator in a restricted network environment, you must first disable the default OperatorHub sources and mirror the Operator catalog. See Using Operator Lifecycle Manager on restricted networks for details. 14.2.1. Installing and configuring OADP with OpenShift Virtualization As a cluster administrator, you install OADP by installing the OADP Operator. The latest version of the OADP Operator installs Velero 1.14 . Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Install the OADP Operator according to the instructions for your storage provider. Install the Data Protection Application (DPA) with the kubevirt and openshift OADP plugins. Back up virtual machines by creating a Backup custom resource (CR). Warning Red Hat support is limited to only the following options: CSI backups CSI backups with DataMover. You restore the Backup CR by creating a Restore CR. Additional resources OADP plugins Backup custom resource (CR) Restore CR Using Operator Lifecycle Manager on restricted networks 14.2.2. Installing the Data Protection Application You install the Data Protection Application (DPA) by creating an instance of the DataProtectionApplication API. Prerequisites You must install the OADP Operator. You must configure object storage as a backup location. If you use snapshots to back up PVs, your cloud provider must support either a native snapshot API or Container Storage Interface (CSI) snapshots. If the backup and snapshot locations use the same credentials, you must create a Secret with the default name, cloud-credentials . If the backup and snapshot locations use different credentials, you must create two Secrets : Secret with a custom name for the backup location. You add this Secret to the DataProtectionApplication CR. Secret with another custom name for the snapshot location. You add this Secret to the DataProtectionApplication CR. Note If you do not want to specify backup or snapshot locations during the installation, you can create a default Secret with an empty credentials-velero file. If there is no default Secret , the installation will fail. Procedure Click Operators Installed Operators and select the OADP Operator. Under Provided APIs , click Create instance in the DataProtectionApplication box. Click YAML View and update the parameters of the DataProtectionApplication manifest: apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14 1 The default namespace for OADP is openshift-adp . The namespace is a variable and is configurable. 2 The kubevirt plugin is mandatory for OpenShift Virtualization. 3 Specify the plugin for the backup provider, for example, gcp , if it exists. 
4 The csi plugin is mandatory for backing up PVs with CSI snapshots. The csi plugin uses the Velero CSI beta snapshot APIs . You do not need to configure a snapshot location. 5 The openshift plugin is mandatory. 6 Specify how many minutes to wait for several Velero resources before timeout occurs, such as Velero CRD availability, volumeSnapshot deletion, and backup repository availability. The default is 10m. 7 The administrative agent that routes the administrative requests to servers. 8 Set this value to true if you want to enable nodeAgent and perform File System Backup. 9 Enter kopia as your uploader to use the Built-in DataMover. The nodeAgent deploys a daemon set, which means that the nodeAgent pods run on each worker node. You can configure File System Backup by adding spec.defaultVolumesToFsBackup: true to the Backup CR. 10 Specify the nodes on which Kopia is available. By default, Kopia runs on all nodes. 11 Specify the backup provider. 12 Specify the correct default name for the Secret , for example, cloud-credentials-gcp , if you use a default plugin for the backup provider. If specifying a custom name, then the custom name is used for the backup location. If you do not specify a Secret name, the default name is used. 13 Specify a bucket as the backup storage location. If the bucket is not a dedicated bucket for Velero backups, you must specify a prefix. 14 Specify a prefix for Velero backups, for example, velero , if the bucket is used for multiple purposes. Click Create . Verification Verify the installation by viewing the OpenShift API for Data Protection (OADP) resources by running the following command: USD oc get all -n openshift-adp Example output Verify that the DataProtectionApplication (DPA) is reconciled by running the following command: USD oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}' Example output {"conditions":[{"lastTransitionTime":"2023-10-27T01:23:57Z","message":"Reconcile complete","reason":"Complete","status":"True","type":"Reconciled"}]} Verify the type is set to Reconciled . Verify the backup storage location and confirm that the PHASE is Available by running the following command: USD oc get backupstoragelocations.velero.io -n openshift-adp Example output NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true 14.3. Disaster recovery OpenShift Virtualization supports using disaster recovery (DR) solutions to ensure that your environment can recover after a site outage. To use these methods, you must plan your OpenShift Virtualization deployment in advance. 14.3.1. About disaster recovery methods For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the Red Hat OpenShift Virtualization disaster recovery guide in the Red Hat Knowledgebase. The two primary DR methods for OpenShift Virtualization are Metropolitan Disaster Recovery (Metro-DR) and Regional-DR. 14.3.1.1. Metro-DR Metro-DR uses synchronous replication. It writes to storage at both the primary and secondary sites so that the data is always synchronized between sites. Because the storage provider is responsible for ensuring that the synchronization succeeds, the environment must meet the throughput and latency requirements of the storage provider. 14.3.1.2. Regional-DR Regional-DR uses asynchronous replication. The data in the primary site is synchronized with the secondary site at regular intervals. For this type of replication, you can have a higher latency connection between the primary and secondary sites. 
Important Regional-DR is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 14.3.2. Defining applications for disaster recovery Define applications for disaster recovery by using VMs that Red Hat Advanced Cluster Management (RHACM) manages or discovers. 14.3.2.1. Best practices when defining an RHACM-managed VM An RHACM-managed application that includes a VM must be created by using a GitOps workflow and by creating an RHACM application or ApplicationSet . There are several actions you can take to improve your experience and chance of success when defining an RHACM-managed VM. Use a PVC and populator to define storage for the VM Because data volumes create persistent volume claims (PVCs) implicitly, data volumes and VMs with data volume templates do not fit as neatly into the GitOps model. Use the import method when choosing a population source for your VM disk Use the import method to work around limitations in Regional-DR that prevent you from protecting VMs that use cloned PVCs. Select a RHEL image from the software catalog to use the import method. Red Hat recommends using a specific version of the image rather than a floating tag for consistent results. The KubeVirt community maintains container disks for other operating systems in a Quay repository. Use pullMethod: node Use the pod pullMethod: node when creating a data volume from a registry source to take advantage of the OpenShift Container Platform pull secret, which is required to pull container images from the Red Hat registry. 14.3.2.2. Best practices when defining an RHACM-discovered virtual machine You can configure any VM in the cluster that is not an RHACM-managed application as an RHACM-discovered application. This includes VMs imported by using the Migration Toolkit for Virtualization (MTV), VMs created by using the OpenShift Virtualization web console, or VMs created by any other means, such as the CLI. There are several actions you can take to improve your experience and chance of success when defining an RHACM-discovered VM. Protect the VM when using MTV, the OpenShift Virtualization web console, or a custom VM Because automatic labeling is not currently available, the application owner must manually label the components of the VM application when using MTV, the OpenShift Virtualization web console, or a custom VM. After creating the VM, apply a common label to the following resources associated with the VM: VirtualMachine , DataVolume , PersistentVolumeClaim , Service , Route , Secret , and ConfigMap . Do not label virtual machine instances (VMIs) or pods since OpenShift Virtualization creates and manages these automatically. Include more than the VirtualMachine object in the VM Working VMs typically also contain data volumes, persistent volume claims (PVCs), services, routes, secrets, ConfigMap objects, and VirtualMachineSnapshot objects. Include the VM as part of a larger logical application This includes other pod-based workloads and VMs. 14.3.3. 
VM behavior during disaster recovery scenarios VMs typically act similarly to pod-based workloads during both relocate and failover disaster recovery flows. Relocate Use relocate to move an application from the primary environment to the secondary environment when the primary environment is still accessible. During relocate, the VM is gracefully terminated, any unreplicated data is synchronized to the secondary environment, and the VM starts in the secondary environment. Because the VM terminates gracefully, there is no data loss in this scenario. Therefore, the VM operating system does not need to perform crash recovery. Failover Use failover when there is a critical failure in the primary environment that makes it impractical or impossible to use relocation to move the workload to a secondary environment. When failover is executed, the storage is fenced from the primary environment, the I/O to the VM disks is abruptly halted, and the VM restarts in the secondary environment using the replicated data. You should expect data loss due to failover. The extent of loss depends on whether you use Metro-DR, which uses synchronous replication, or Regional-DR, which uses asynchronous replication. Because Regional-DR uses snapshot-based replication intervals, the window of data loss is proportional to the replication interval length. When the VM restarts, the operating system might perform crash recovery. 14.3.4. Metro-DR for Red Hat OpenShift Data Foundation OpenShift Virtualization supports the Metro-DR solution for OpenShift Data Foundation , which provides two-way synchronous data replication between managed OpenShift Virtualization clusters installed on primary and secondary sites. This solution combines Red Hat Advanced Cluster Management (RHACM), Red Hat Ceph Storage, and OpenShift Data Foundation components. Use this solution during a site disaster to fail over applications from the primary to the secondary site, and relocate the applications back to the primary site after restoring the disaster site. This synchronous solution is only available to metropolitan distance data centers with a 10-millisecond latency or less. For more information about using the Metro-DR solution for OpenShift Data Foundation with OpenShift Virtualization, see the Red Hat Knowledgebase or IBM's OpenShift Data Foundation Metro-DR documentation. Additional resources Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads Additional resources Red Hat Advanced Cluster Management for Kubernetes 2.10
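To complement the workflow in "Installing and configuring OADP with OpenShift Virtualization" earlier in this chapter, the following is a minimal sketch of a Backup custom resource and a matching Restore custom resource for a namespace that contains virtual machines. The namespace and object names are placeholders, and the exact fields available depend on your OADP release, so treat this as an illustration rather than a definitive configuration.

apiVersion: velero.io/v1
kind: Backup
metadata:
  name: vm-backup                # placeholder name
  namespace: openshift-adp       # namespace where OADP is installed
spec:
  includedNamespaces:
    - <vm_namespace>             # namespace that contains the VirtualMachine objects and their PVCs
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: vm-restore               # placeholder name
  namespace: openshift-adp
spec:
  backupName: vm-backup          # name of the Backup CR to restore from
  restorePVs: true               # restore the persistent volumes along with the VM definition

You would create each object with USD oc create -f <file>.yaml and check progress with USD oc get backups.velero.io -n openshift-adp and USD oc get restores.velero.io -n openshift-adp .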
|
[
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: <snapshot_name> spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name>",
"oc create -f <snapshot_name>.yaml",
"oc wait <vm_name> <snapshot_name> --for condition=Ready",
"oc describe vmsnapshot <snapshot_name>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: <vm_restore> spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: <vm_name> virtualMachineSnapshotName: <snapshot_name>",
"oc create -f <vm_restore>.yaml",
"oc get vmrestore <vm_restore>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1",
"oc delete vmsnapshot <snapshot_name>",
"oc get vmsnapshot",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/virtualization/backup-and-restore
|
Chapter 14. Updating hardware on nodes running on vSphere
|
Chapter 14. Updating hardware on nodes running on vSphere You must ensure that your nodes running in vSphere are running on the hardware version supported by OpenShift Container Platform. Currently, hardware version 15 or later is supported for vSphere virtual machines in a cluster. You can update your virtual hardware immediately or schedule an update in vCenter. Important Version 4.12 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. 14.1. Updating virtual hardware on vSphere To update the hardware of your virtual machines (VMs) on VMware vSphere, update your virtual machines separately to reduce the risk of downtime for your cluster. 14.1.1. Updating the virtual hardware for control plane nodes on vSphere To reduce the risk of downtime, it is recommended that control plane nodes be updated serially. This ensures that the Kubernetes API remains available and etcd retains quorum. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure List the control plane nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/master Example output NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.25.0 control-plane-node-1 Ready master 75m v1.25.0 control-plane-node-2 Ready master 75m v1.25.0 Note the names of your control plane nodes. Mark the control plane node as unschedulable. USD oc adm cordon <control_plane_node> Shut down the virtual machine (VM) associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Power on the VM associated with the control plane node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<control_plane_node> Mark the control plane node as schedulable again: USD oc adm uncordon <control_plane_node> Repeat this procedure for each control plane node in your cluster. 14.1.2. Updating the virtual hardware for compute nodes on vSphere To reduce the risk of downtime, it is recommended that compute nodes be updated serially. Note Multiple compute nodes can be updated in parallel given workloads are tolerant of having multiple nodes in a NotReady state. It is the responsibility of the administrator to ensure that the required compute nodes are available. Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure List the compute nodes in your cluster. USD oc get nodes -l node-role.kubernetes.io/worker Example output NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.25.0 compute-node-1 Ready worker 30m v1.25.0 compute-node-2 Ready worker 30m v1.25.0 Note the names of your compute nodes. Mark the compute node as unschedulable: USD oc adm cordon <compute_node> Evacuate the pods from the compute node. There are several ways to do this. 
For example, you can evacuate all or selected pods on a node: USD oc adm drain <compute_node> [--pod-selector=<pod_selector>] See the "Understanding how to evacuate pods on nodes" section for other options to evacuate pods from a node. Shut down the virtual machine (VM) associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power Shut Down Guest OS . Do not shut down the VM using Power Off because it might not shut down safely. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Power on the VM associated with the compute node. Do this in the vSphere client by right-clicking the VM and selecting Power On . Wait for the node to report as Ready : USD oc wait --for=condition=Ready node/<compute_node> Mark the compute node as schedulable again: USD oc adm uncordon <compute_node> Repeat this procedure for each compute node in your cluster. 14.1.3. Updating the virtual hardware for template on vSphere Prerequisites You have cluster administrator permissions to execute the required permissions in the vCenter instance hosting your OpenShift Container Platform cluster. Your vSphere ESXi hosts are version 7.0U2 or later. Procedure If the RHCOS template is configured as a vSphere template, follow Convert a Template to a Virtual Machine in the VMware documentation prior to the next step. Note Once converted from a template, do not power on the virtual machine. Update the VM in the vSphere client. Follow Upgrade the Compatibility of a Virtual Machine Manually in the VMware documentation for more information. Convert the VM in the vSphere client from a VM to a template. Follow Convert a Virtual Machine to a Template in the vSphere Client in the VMware documentation for more information. Additional resources Understanding how to evacuate pods on nodes 14.2. Scheduling an update for virtual hardware on vSphere Virtual hardware updates can be scheduled to occur when a virtual machine is powered on or rebooted. You can schedule your virtual hardware updates exclusively in vCenter by following Schedule a Compatibility Upgrade for a Virtual Machine in the VMware documentation. If you schedule the virtual hardware upgrade before performing an upgrade of OpenShift Container Platform, the virtual hardware update occurs when the nodes are rebooted during the course of the OpenShift Container Platform upgrade.
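As a convenience, the per-node oc commands from the compute node procedure above can be wrapped in a small script. The following is a sketch only: it pauses so that you can perform the vSphere client steps (Shut Down Guest OS, upgrade the compatibility, Power On) for each node manually, and the drain flags are assumptions that you might need to adjust for your workloads.

#!/bin/bash
# Sketch: serially cordon, drain, and uncordon each compute node around a
# manual virtual hardware update performed in the vSphere client.
set -euo pipefail

for node in $(oc get nodes -l node-role.kubernetes.io/worker -o name); do
    oc adm cordon "${node}"
    # Drain flags are an assumption; adjust them to match your workloads.
    oc adm drain "${node}" --ignore-daemonsets --delete-emptydir-data
    read -r -p "Update the hardware of the VM backing ${node} in vSphere, power it on, then press Enter"
    oc wait --for=condition=Ready "${node}"
    oc adm uncordon "${node}"
done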
|
[
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION control-plane-node-0 Ready master 75m v1.25.0 control-plane-node-1 Ready master 75m v1.25.0 control-plane-node-2 Ready master 75m v1.25.0",
"oc adm cordon <control_plane_node>",
"oc wait --for=condition=Ready node/<control_plane_node>",
"oc adm uncordon <control_plane_node>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION compute-node-0 Ready worker 30m v1.25.0 compute-node-1 Ready worker 30m v1.25.0 compute-node-2 Ready worker 30m v1.25.0",
"oc adm cordon <compute_node>",
"oc adm drain <compute_node> [--pod-selector=<pod_selector>]",
"oc wait --for=condition=Ready node/<compute_node>",
"oc adm uncordon <compute_node>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/updating_clusters/updating-hardware-on-nodes-running-on-vsphere
|
Appendix D. Command-Line Tools
|
Appendix D. Command-Line Tools AMQ Broker includes a set of command-line interface (CLI) tools so you can manage your messaging journal. The table below lists the name for each tool and its description. Tool Description exp Exports the message data using a special and independent XML format. imp Imports the journal to a running broker using the output provided by exp . data Prints reports about journal records and compacts their data. encode Shows an internal format of the journal encoded to String. decode Imports the internal journal format from encode. For a full list of commands available for each tool, use the help parameter followed by the tool's name. In the example below, the CLI output lists all the commands available to the data tool after the user entered the command ./artemis help data . You can also use the help parameter with one of a tool's commands for more information on how to execute that command. For example, the CLI lists more information about the data print command after the user enters the ./artemis help data print command.
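For example, you could move journal data from one broker instance to another with the exp and imp tools. The following sketch assumes that the exporting broker instance is stopped so that the journal is not being modified, that exp writes its XML to standard output so it can be redirected to a file, and that the host, port, and credentials passed to imp are placeholders for a running target broker.

# Export the journal of a stopped broker instance to an XML file.
./artemis data exp --journal ../data/journal --bindings ../data/bindings > journal-export.xml

# Import the exported data into a running broker.
./artemis data imp --input journal-export.xml --host localhost --port 61616 --user admin --password admin --transaction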
|
[
"./artemis help data NAME artemis data - data tools group (print|imp|exp|encode|decode|compact) (example ./artemis data print) SYNOPSIS artemis data artemis data compact [--broker <brokerConfig>] [--verbose] [--paging <paging>] [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] artemis data decode [--broker <brokerConfig>] [--suffix <suffix>] [--verbose] [--paging <paging>] [--prefix <prefix>] [--file-size <size>] [--directory <directory>] --input <input> [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] artemis data encode [--directory <directory>] [--broker <brokerConfig>] [--suffix <suffix>] [--verbose] [--paging <paging>] [--prefix <prefix>] [--file-size <size>] [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] artemis data exp [--broker <brokerConfig>] [--verbose] [--paging <paging>] [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] artemis data imp [--host <host>] [--verbose] [--port <port>] [--password <password>] [--transaction] --input <input> [--user <user>] artemis data print [--broker <brokerConfig>] [--verbose] [--paging <paging>] [--journal <journal>] [--large-messages <largeMessges>] [--bindings <binding>] COMMANDS With no arguments, Display help information print Print data records information (WARNING: don't use while a production server is running)",
"./artemis help data print NAME artemis data print - Print data records information (WARNING: don't use while a production server is running) SYNOPSIS artemis data print [--bindings <binding>] [--journal <journal>] [--paging <paging>] OPTIONS --bindings <binding> The folder used for bindings (default ../data/bindings) --journal <journal> The folder used for messages journal (default ../data/journal) --paging <paging> The folder used for paging (default ../data/paging)"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/cli_tools
|
Chapter 2. ConsoleCLIDownload [console.openshift.io/v1]
|
Chapter 2. ConsoleCLIDownload [console.openshift.io/v1] Description ConsoleCLIDownload is an extension for configuring openshift web console command line interface (CLI) downloads. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleCLIDownloadSpec is the desired cli download configuration. 2.1.1. .spec Description ConsoleCLIDownloadSpec is the desired cli download configuration. Type object Required description displayName links Property Type Description description string description is the description of the CLI download (can include markdown). displayName string displayName is the display name of the CLI download. links array links is a list of objects that provide CLI download link details. links[] object 2.1.2. .spec.links Description links is a list of objects that provide CLI download link details. Type array 2.1.3. .spec.links[] Description Type object Required href Property Type Description href string href is the absolute secure URL for the link (must use https) text string text is the display text for the link 2.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleclidownloads DELETE : delete collection of ConsoleCLIDownload GET : list objects of kind ConsoleCLIDownload POST : create a ConsoleCLIDownload /apis/console.openshift.io/v1/consoleclidownloads/{name} DELETE : delete a ConsoleCLIDownload GET : read the specified ConsoleCLIDownload PATCH : partially update the specified ConsoleCLIDownload PUT : replace the specified ConsoleCLIDownload /apis/console.openshift.io/v1/consoleclidownloads/{name}/status GET : read status of the specified ConsoleCLIDownload PATCH : partially update status of the specified ConsoleCLIDownload PUT : replace status of the specified ConsoleCLIDownload 2.2.1. /apis/console.openshift.io/v1/consoleclidownloads Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsoleCLIDownload Table 2.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. 
Table 2.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleCLIDownload Table 2.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownloadList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleCLIDownload Table 2.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.7. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.8. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 202 - Accepted ConsoleCLIDownload schema 401 - Unauthorized Empty 2.2.2. /apis/console.openshift.io/v1/consoleclidownloads/{name} Table 2.9. Global path parameters Parameter Type Description name string name of the ConsoleCLIDownload Table 2.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsoleCLIDownload Table 2.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.12. Body parameters Parameter Type Description body DeleteOptions schema Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleCLIDownload Table 2.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.15. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleCLIDownload Table 2.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.17. Body parameters Parameter Type Description body Patch schema Table 2.18. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleCLIDownload Table 2.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.20. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 401 - Unauthorized Empty 2.2.3. /apis/console.openshift.io/v1/consoleclidownloads/{name}/status Table 2.22. Global path parameters Parameter Type Description name string name of the ConsoleCLIDownload Table 2.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ConsoleCLIDownload Table 2.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 2.25. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleCLIDownload Table 2.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.27. Body parameters Parameter Type Description body Patch schema Table 2.28. HTTP responses HTTP code Reponse body 200 - OK ConsoleCLIDownload schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleCLIDownload Table 2.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.30. Body parameters Parameter Type Description body ConsoleCLIDownload schema Table 2.31. HTTP responses HTTP code Response body 200 - OK ConsoleCLIDownload schema 201 - Created ConsoleCLIDownload schema 401 - Unauthorized Empty
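The request body for the create and replace operations above is a ConsoleCLIDownload manifest. The following is a minimal sketch of such an object applied with the oc client; the resource name, display name, description, and download link are illustrative placeholders, not values mandated by this API reference.

apiVersion: console.openshift.io/v1
kind: ConsoleCLIDownload
metadata:
  name: example-cli
spec:
  displayName: Example CLI
  description: Download links for the example command-line tool.
  links:
    - href: https://example.com/downloads/example-cli-linux-amd64.tar.gz
      text: Download example-cli for Linux x86_64

USD oc apply -f example-cli-download.yaml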
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/console_apis/consoleclidownload-console-openshift-io-v1
|
Chapter 13. Troubleshooting builds
|
Chapter 13. Troubleshooting builds Use the following to troubleshoot build issues. 13.1. Resolving denial for access to resources If your request for access to resources is denied: Issue A build fails with: requested access to the resource is denied Resolution You have exceeded one of the image quotas set on your project. Check your current quota and verify the limits applied and storage in use: USD oc describe quota 13.2. Service certificate generation failure If service certificate generation fails: Issue A service certificate generation fails, and the service's service.beta.openshift.io/serving-cert-generation-error annotation contains: Example output secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60 Resolution The service that generated the certificate no longer exists, or has a different serviceUID . You must force certificate regeneration by removing the old secret and clearing the following annotations on the service: service.beta.openshift.io/serving-cert-generation-error and service.beta.openshift.io/serving-cert-generation-error-num : USD oc delete secret <secret_name> USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error- USD oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num- Note The command that removes an annotation has a - after the annotation name to be removed.
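After the secret is removed and the annotations are cleared, the serving certificate is regenerated. You can verify the outcome with standard oc commands; this is a sketch using the same <secret_name> and <service_name> placeholders as above:

USD oc get secret <secret_name>
USD oc get service <service_name> -o yaml | grep serving-cert

The secret should exist again, and the serving-cert-generation-error annotations should no longer appear in the service output.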
|
[
"requested access to the resource is denied",
"oc describe quota",
"secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60",
"oc delete secret <secret_name>",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-",
"oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/builds/troubleshooting-builds_build-configuration
|
Chapter 5. Preparing Storage for Red Hat Virtualization
|
Chapter 5. Preparing Storage for Red Hat Virtualization You need to prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. Warning When installing or reinstalling the host's operating system, Red Hat strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) POSIX-compliant file system Local storage Red Hat Gluster Storage 5.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8. Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Prerequisites Install the NFS utils package. # dnf install nfs-utils -y To check the enabled versions: # cat /proc/fs/nfsd/versions Enable the following services: # systemctl enable nfs-server # systemctl enable rpcbind Procedure Create the group kvm : # groupadd kvm -g 36 Create the user vdsm in the group kvm : # useradd vdsm -u 36 -g kvm Create the storage directory and modify the access rights. Add the storage directory to /etc/exports with the relevant permissions. # vi /etc/exports # cat /etc/exports /storage *(rw) Restart the following services: # systemctl restart rpcbind # systemctl restart nfs-server To see which export are available for a specific IP address: # exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world> Note If changes in /etc/exports have been made after starting the services, the exportfs -ra command can be used to reload the changes. After performing all the above stages, the exports directory should be ready and can be tested on a different host to check that it is usable. 5.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8. Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. 
This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } 5.3. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create filters for the LVM. See Creating an LVM filter Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: # cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } } 5.4. Preparing POSIX-compliant File System Storage POSIX file system support allows you to mount file systems using the same mount options that you would normally use when mounting them manually from the command line. This functionality is intended to allow access to storage not exposed using NFS, iSCSI, or FCP. Any POSIX-compliant file system used as a storage domain in Red Hat Virtualization must be a clustered file system, such as Global File System 2 (GFS2), and must support sparse files and direct I/O. The Common Internet File System (CIFS), for example, does not support direct I/O, making it incompatible with Red Hat Virtualization. For information on setting up and configuring POSIX-compliant file system storage, see Red Hat Enterprise Linux Global File System 2 . 
Important Do not mount NFS storage by creating a POSIX-compliant file system storage domain. Always create an NFS storage domain instead. 5.5. Preparing local storage On Red Hat Virtualization Host (RHVH), local storage should always be defined on a file system that is separate from / (root). Use a separate logical volume or disk, to prevent possible loss of data during upgrades. Procedure for Red Hat Enterprise Linux hosts On the host, create the directory to be used for the local storage: # mkdir -p /data/images Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): # chown 36:36 /data /data/images # chmod 0755 /data /data/images Procedure for Red Hat Virtualization Hosts Create the local storage on a logical volume: Create a local storage directory: # mkdir /data # lvcreate -L USDSIZE rhvh -n data # mkfs.ext4 /dev/mapper/rhvh-data # echo "/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2" >> /etc/fstab # mount /data Mount the new local storage: # mount -a Ensure that the directory has permissions allowing read/write access to the vdsm user (UID 36) and kvm group (GID 36): # chown 36:36 /data /rhvh-data # chmod 0755 /data /rhvh-data 5.6. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . 5.7. Customizing Multipath Configurations for SAN Vendors If your RHV environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf . To override the multipath settings, do not customize /etc/multipath.conf . Because VDSM owns /etc/multipath.conf , installing or upgrading VDSM or Red Hat Virtualization can overwrite this file including any customizations it contains. This overwriting can cause severe storage failures. Instead, you create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override. VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to control the order of execution, you begin the filename with a number that makes it come last. For example, /etc/multipath/conf.d/90-myfile.conf . To avoid causing severe storage failures, follow these guidelines: Do not modify /etc/multipath.conf . If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems. Do not override the user_friendly_names and find_multipaths settings. For details, see Recommended Settings for Multipath.conf . Avoid overriding the no_path_retry and polling_interval settings unless a storage vendor specifically requires you to do so. For details, see Recommended Settings for Multipath.conf . Warning Not following these guidelines can cause catastrophic storage errors. Prerequisites VDSM is configured to use the multipath module. To verify this, enter: Procedure Create a new configuration file in the /etc/multipath/conf.d directory. Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/<my_device>.conf . Remove any comment marks, edit the setting values, and save your changes. 
Apply the new configuration settings by entering: Note Do not restart the multipathd service. Doing so generates errors in the VDSM logs. Verification steps Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections. Enable one connection at a time and verify that doing so makes the storage domain reachable. Additional resources Recommended Settings for Multipath.conf Red Hat Enterprise Linux DM Multipath Configuring iSCSI Multipathing How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? 5.8. Recommended Settings for Multipath.conf Do not override the following settings: user_friendly_names no Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID} . The default value of this setting, no , prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior. Warning Do not change this setting to user_friendly_names yes . User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported. find_multipaths no This setting controls whether RHVH tries to access devices through multipath only if more than one path is available. The current value, no , allows RHV to access devices through multipath even if only one path is available. Warning Do not override this setting. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the time all paths fail. For more details, see the commit that changed this setting . polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
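As an illustration of the override mechanism described above, the following sketch shows what a vendor-specific file under /etc/multipath/conf.d might look like. The vendor, product, and attribute values are placeholders for illustration only; use the exact values your storage vendor requires, and keep the guidelines from Section 5.8 in mind before overriding any setting.

# cat /etc/multipath/conf.d/90-myvendor.conf
devices {
    device {
        vendor "MYVENDOR"
        product "MYARRAY"
        path_grouping_policy "group_by_prio"
        no_path_retry 16
    }
}

After saving the file, apply it with the reload command shown in the procedure; do not restart the multipathd service.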
|
[
"dnf install nfs-utils -y",
"cat /proc/fs/nfsd/versions",
"systemctl enable nfs-server systemctl enable rpcbind",
"groupadd kvm -g 36",
"useradd vdsm -u 36 -g kvm",
"mkdir /storage chmod 0755 /storage chown 36:36 /storage/",
"vi /etc/exports cat /etc/exports /storage *(rw)",
"systemctl restart rpcbind systemctl restart nfs-server",
"exportfs /nfs_server/srv 10.46.11.3/24 /nfs_server <world>",
"cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }",
"cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue } }",
"mkdir -p /data/images",
"chown 36:36 /data /data/images chmod 0755 /data /data/images",
"mkdir /data lvcreate -L USDSIZE rhvh -n data mkfs.ext4 /dev/mapper/rhvh-data echo \"/dev/mapper/rhvh-data /data ext4 defaults,discard 1 2\" >> /etc/fstab mount /data",
"mount -a",
"chown 36:36 /data /rhvh-data chmod 0755 /data /rhvh-data",
"vdsm-tool is-configured --module multipath",
"systemctl reload multipathd"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/preparing_storage_for_rhv_sm_localdb_deploy
|
4.77. gnome-system-monitor
|
4.77. gnome-system-monitor 4.77.1. RHEA-2011:1612 - gnome-system-monitor enhancement update An enhanced gnome-system-monitor package that adds one enhancement is now available for Red Hat Enterprise Linux 6. The gnome-system-monitor package contains a tool that allows you to graphically view and manipulate the running processes on the system. It also provides an overview of available resources, such as CPU and memory. Enhancement BZ# 571597 Previously, the CPU History graph could be hard to read if it displayed large numbers of CPUs. This update modifies the design: scrollbars were added for easier manipulation of the window, and a random color is now generated for each CPU. Users of gnome-system-monitor are advised to upgrade to this updated package, which adds this enhancement.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gnome-system-monitor
|
8.160. python
|
8.160. python 8.160.1. RHSA-2013:1582 - Moderate: python security, bug fix, and enhancement update Updated python packages that fix one security issue, several bugs, and add one enhancement are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give detailed severity ratings, are available for each vulnerability from the CVE links associated with each description below. Python is an interpreted, interactive, object-oriented programming language. Security Fix CVE-2013-4238 A flaw was found in the way the Python SSL module handled X.509 certificate fields that contain a NULL byte. An attacker could potentially exploit this flaw to conduct man-in-the-middle attacks to spoof SSL servers. Note that to exploit this issue, an attacker would need to obtain a carefully crafted certificate signed by an authority that the client trusts. Bug Fixes BZ# 521898 Previously, several Python executables from the python-tools subpackage started with the #!/usr/bin/env python shebang. This made it harder to install and use alternative Python versions. With this update, the first line of these executables has been replaced with #!/usr/bin/python that explicitly refers to the system version of Python. As a result, a user-preferred version of Python can now be used without complications BZ# 841937 Prior to this update, the sqlite3.Cursor.lastrowid object did not accept an insert statement specified in the Turkish locale. Consequently, when installing Red Hat Enterprise Linux 6 with the graphical installer, selecting "Turkish" as the install language led to an installation failure. With this update, sqlite3.Cursor.lastrowid has been fixed and installation no longer fails under the Turkish locale. BZ# 845802 Previously, the SysLogHandler class inserted a UTF-8 byte order mark (BOM) into log messages. Consequently, these messages were evaluated as having the emergency priority level and were logged to all user consoles. With this update, SysLogHandler no longer appends a BOM to log messages, and messages are now assigned correct priority levels. BZ# 893034 Previously, the random.py script failed to import the random module when the /dev/urandom file did not exist on the system. This led subsequent programs, such as Yum , to terminate unexpectedly. This bug has been fixed, and random.py now works as expected even without /dev/urandom . BZ# 919163 The WatchedFileHandler class was sensitive to a race condition, which led to occasional errors. Consequently, rotating to a new log file failed. WatchedFileHandler has been fixed and the log rotation now works as expected. BZ# 928390 Prior to this update, Python did not read Alternative Subject Names from certain Secure Sockets Layer (SSL) certificates. Consequently, a false authentication failure could have occurred when checking the certificate host name. This update fixes the handling of Alternative Subject Names and false authentication errors no longer occur. BZ# 948025 Previously, the SocketServer module did not handle the system call interruption properly. This caused certain HTTP servers to terminate unexpectedly. With this update, SocketServer has been modified to handle the interruption and servers no longer crash in the aforementioned scenario. BZ# 958868 Passing the timeout=None argument to the subprocess.Popen() function caused the upstream version of the Eventlet library to terminate unexpectedly. 
This bug has been fixed and Eventlet no longer fails in the described case. BZ# 960168 When a connection incoming to a server with an enabled SSLSocket class failed to pass the automatic do_handshake() function, the connection remained open. This problem affected only Python 2 versions. The underlying source code has been fixed and the failed incoming connection is now closed properly. BZ# 962779 In cases when multiple libexpat.so libraries were available, Python failed to choose the correct one. This update adds an explicit RPATH to the _elementtree.so , thus fixing this bug. BZ# 978129 Previously, the urlparse module did not parse the query and fragment parts of URLs properly for arbitrary XML schemes. With this update, urlparse has been fixed and correct parsing is now assured in this scenario. Enhancement BZ# 929258 This update adds the collections.OrderedDict data structure to the collections package. collections.OrderedDict is used in application code to ensure that the in-memory python dictionaries are emitted in the same order when converted to a string by the json.dumps routines. All python users are advised to upgrade to these updated packages, which contain backported patches to correct these issues and add this enhancement.
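To illustrate the collections.OrderedDict enhancement, the following minimal Python 2 sketch shows insertion order being preserved when the dictionary is serialized; the key names are arbitrary examples.

import collections
import json

d = collections.OrderedDict()
d['first'] = 1
d['second'] = 2
d['third'] = 3

# json.dumps emits the keys in insertion order rather than hash order
print json.dumps(d)
# {"first": 1, "second": 2, "third": 3}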
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/python
|
Installing Capsule Server
|
Installing Capsule Server Red Hat Satellite 6.15 Install and configure Capsule Red Hat Satellite Documentation Team [email protected]
|
[
"nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2",
"restorecon -R /var/lib/pulp",
"firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"",
"firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster",
"firewall-cmd --runtime-to-permanent",
"firewall-cmd --list-all",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-capsule-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-capsule:el8",
"dnf repolist enabled",
"dnf install fapolicyd",
"satellite-maintain packages install fapolicyd",
"systemctl enable --now fapolicyd",
"systemctl status fapolicyd",
"dnf upgrade",
"dnf install satellite-capsule",
"dnf install chrony",
"systemctl enable --now chronyd",
"mkdir /root/ capsule_cert",
"capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar /root/capsule_cert/ capsule.example.com -certs.tar",
"output omitted satellite-installer --scenario capsule --certs-tar-file \"/root/capsule_cert/ capsule.example.com -certs.tar\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" s97QxvUAgFNAQZNGg4F9zLq2biDsxM7f \" --foreman-proxy-oauth-consumer-secret \" 6bpzAdMpRAfYaVZtaepYetomgBVQ6ehY \"",
"scp /root/capsule_cert/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"mkdir /root/capsule_cert",
"openssl genrsa -out /root/capsule_cert/capsule_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = capsule.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = capsule.example.com",
"[req_distinguished_name] CN = capsule.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4",
"openssl req -new -key /root/capsule_cert/capsule_cert_key.pem \\ 1 -config /root/capsule_cert/openssl.cnf \\ 2 -out /root/capsule_cert/capsule_cert_csr.pem 3",
"capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar ~/ capsule.example.com -certs.tar --server-cert /root/ capsule_cert/capsule_cert.pem \\ 1 --server-key /root/ capsule_cert/capsule_cert_key.pem \\ 2 --server-ca-cert /root/ capsule_cert/ca_cert_bundle.pem 3",
"output omitted satellite-installer --scenario capsule --certs-tar-file \"/root/ capsule.example.com -certs.tar\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" My_OAuth_Consumer_Key \" --foreman-proxy-oauth-consumer-secret \" My_OAuth_Consumer_Secret \"",
"scp ~/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"dnf install http:// capsule.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"satellite-installer --foreman-trusted-proxies \"127.0.0.1/8\" --foreman-trusted-proxies \"::1\" --foreman-trusted-proxies \" My_IP_address \" --foreman-trusted-proxies \" My_IP_range \"",
"satellite-installer --full-help | grep -A 2 \"trusted-proxies\"",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt",
"firewall-cmd --add-service=mqtt",
"firewall-cmd --runtime-to-permanent",
"satellite-installer --enable-foreman-proxy-plugin-openscap --foreman-proxy-plugin-openscap-ansible-module true --foreman-proxy-plugin-openscap-puppet-module true",
"hammer capsule list",
"hammer capsule info --id My_capsule_ID",
"hammer capsule content available-lifecycle-environments --id My_capsule_ID",
"hammer capsule content add-lifecycle-environment --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID --organization \" My_Organization \"",
"hammer capsule content synchronize --id My_capsule_ID",
"hammer capsule content synchronize --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID",
"hammer capsule content synchronize --id My_capsule_ID --skip-metadata-check true",
"satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"",
"satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3",
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"dnf install dhcp-server bind-utils",
"tsig-keygen -a hmac-md5 omapi_key",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp",
"firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl enable --now dhcpd",
"dnf install nfs-utils systemctl enable --now nfs-server",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp",
"firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule\\047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true",
"satellite-maintain packages install bind-utils",
"dnssec-keygen -r /dev/urandom -a HMAC-MD5 -b 512 -n HOST omapi_key cat Komapi_key.+*.private | grep ^Key|cut -d ' ' -f2-",
"satellite-installer --foreman-proxy-dhcp-key-name \" My_Name \" --foreman-proxy-dhcp-key-secret \" My_Secret \"",
"dnf module list --enabled",
"dnf module list --enabled",
"dnf module reset ruby",
"dnf module list --enabled",
"dnf module reset postgresql",
"dnf module enable satellite-capsule:el8",
"dnf install postgresql-upgrade",
"postgresql-setup --upgrade"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/installing_capsule_server/index
|
Chapter 2. Using Control Groups
|
Chapter 2. Using Control Groups The following sections provide an overview of tasks related to creation and management of control groups. This guide focuses on utilities provided by systemd that are preferred as a way of cgroup management and will be supported in the future. Previous versions of Red Hat Enterprise Linux used the libcgroup package for creating and managing cgroups. This package is still available to assure backward compatibility (see Warning ), but it will not be supported in future versions of Red Hat Enterprise Linux. 2.1. Creating Control Groups From systemd 's perspective, a cgroup is bound to a system unit configurable with a unit file and manageable with systemd's command-line utilities. Depending on the type of application, your resource management settings can be transient or persistent . To create a transient cgroup for a service, start the service with the systemd-run command. This way, it is possible to set limits on resources consumed by the service during its runtime. Applications can create transient cgroups dynamically by using API calls to systemd . See the section called "Online Documentation" for API reference. A transient unit is removed automatically as soon as the service is stopped. To assign a persistent cgroup to a service, edit its unit configuration file. The configuration is preserved after the system reboot, so it can be used to manage services that are started automatically. Note that scope units cannot be created in this way. 2.1.1. Creating Transient Cgroups with systemd-run The systemd-run command is used to create and start a transient service or scope unit and run a custom command in the unit. Commands executed in service units are started asynchronously in the background, where they are invoked from the systemd process. Commands run in scope units are started directly from the systemd-run process and thus inherit the execution environment of the caller. Execution in this case is synchronous. To run a command in a specified cgroup, type as root : The name stands for the name you want the unit to be known under. If --unit is not specified, a unit name will be generated automatically. It is recommended to choose a descriptive name, since it will represent the unit in the systemctl output. The name has to be unique during the runtime of the unit. Use the optional --scope parameter to create a transient scope unit instead of the service unit that is created by default. With the --slice option, you can make your newly created service or scope unit a member of a specified slice. Replace slice_name with the name of an existing slice (as shown in the output of systemctl -t slice ), or create a new slice by passing a unique name. By default, services and scopes are created as members of the system.slice . Replace command with the command you wish to execute in the service unit. Place this command at the very end of the systemd-run syntax, so that the parameters of this command are not confused with parameters of systemd-run . Besides the above options, there are several other parameters available for systemd-run . For example, --description creates a description of the unit, and --remain-after-exit allows you to collect runtime information after terminating the service's process. The --machine option executes the command in a confined container. See the systemd-run (1) manual page to learn more. Example 2.1. Starting a New Service with systemd-run Use the following command to run the top utility in a service unit in a new slice called test .
Type as root : The following message is displayed to confirm that you started the service successfully: Now, the name toptest.service can be used to monitor or to modify the cgroup with systemctl commands. 2.1.2. Creating Persistent Cgroups To configure a unit to be started automatically on system boot, execute the systemctl enable command (see the chapter called Managing Services with systemd in Red Hat Enterprise Linux 7 System Administrators Guide ). Running this command automatically creates a unit file in the /usr/lib/systemd/system/ directory. To make persistent changes to the cgroup, add or modify configuration parameters in its unit file. For more information, see Section 2.3.2, "Modifying Unit Files" .
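To give an idea of what such a persistent change looks like, the following sketch adds resource-control directives to the [Service] section of a unit file; httpd.service and the values shown are placeholders for illustration, not tuning recommendations.

[Service]
CPUShares=600
MemoryLimit=500M

Afterwards, reload the systemd configuration and restart the service so that the new limits take effect:

~]# systemctl daemon-reload
~]# systemctl restart httpd.service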
|
[
"~]# systemd-run --unit= name --scope --slice= slice_name command",
"~]# systemd-run --unit= toptest --slice= test top -b",
"Running as unit toptest.service"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/chap-using_control_groups
|
Part I. Deploying a Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators
|
Part I. Deploying a Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators As a system engineer, you can deploy a Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 to provide an infrastructure to develop or execute services, process applications, and other business assets. You can use OpenShift Operators to deploy the environment defined in a structured YAML file and to maintain and modify this environment as necessary. Prerequisites A Red Hat OpenShift Container Platform 4 environment is available. For the exact versions of Red Hat OpenShift Container Platform that the current release supports, see Red Hat Process Automation Manager 7 Supported Configurations . The OpenShift project for the deployment is created. You are logged into the project using the OpenShift web console. The following resources are available on the OpenShift cluster. Depending on the application load, higher resource allocation might be necessary for acceptable performance. For an authoring environment, 4 gigabytes of memory and 2 virtual CPU cores for the Business Central pod. In a high-availability deployment, these resources are required for each replica and two replicas are created by default. For a production or immutable environment, 2 gigabytes of memory and 1 virtual CPU core for each replica of the Business Central Monitoring pod. 2 gigabytes of memory and 1 virtual CPU core for each replica of each KIE Server pod. 1 gigabyte of memory and half a virtual CPU core for each replica of a Smart Router pod. In a high-availability authoring deployment, additional resources according to the configured defaults are required for the MySQL, Red Hat AMQ, and Red Hat Data Grid pods. Note The default values for MaxMetaspaceSize are: Business Central images: 1024m KIE Server images: 512m For other images: 256m Dynamic persistent volume (PV) provisioning is enabled. Alternatively, if dynamic PV provisioning is not enabled, enough persistent volumes must be available. By default, the deployed components require the following PV sizes: Each KIE Server deployment by default requires one 1Gi PV for the database. You can change the database PV size. You can deploy multiple KIE Servers; each requires a separate database PV. This requirement does not apply if you use an external database server. By default, Business Central requires one 1Gi PV. You can change the PV size for Business Central persistent storage. Business Central Monitoring requires one 64Mi PV. Smart Router requires one 64Mi PV. If you intend to deploy a high-availability authoring environment or any environment with Business Central Monitoring pods, your OpenShift environment supports persistent volumes with ReadWriteMany mode. If your environment does not support this mode, you can use NFS to provision the volumes. For information about access mode support in OpenShift public and dedicated clouds, see Access Modes in Red Hat OpenShift Container Platform documentation.
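Where ReadWriteMany is required but the platform does not offer that access mode natively, an NFS-backed persistent volume is one way to provision suitable storage. The following is a minimal sketch only; the volume name, capacity, server address, and export path are placeholder values.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: businesscentral-rwx-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/businesscentral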
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/deploying_red_hat_process_automation_manager_on_red_hat_openshift_container_platform/assembly-openshift-operator_deploying-on-openshift
|
Chapter 6. Using custom rule categories
|
Chapter 6. Using custom rule categories You can create custom rule categories and assign MTR rules to them. Note Although MTR processes rules with the legacy severity field, you must update your custom rules to use the new category-id field. 6.1. Adding a custom category You can add a custom category to the rule category file. Procedure Edit the rule category file, which is located at <MTR_HOME>/rules/migration-core/core.windup.categories.xml . Add a new <category> element and fill in the following parameters: id : The ID that MTR rules use to reference the category. priority : The sorting priority relative to other categories. The category with the lowest value is displayed first. name : The display name of the category. description : The description of the category. Custom rule category example <?xml version="1.0"?> <categories> ... <category id="custom-category" priority="20000"> <name>Custom Category</name> <description>This is a custom category.</description> </category> </categories> This category is ready to be referenced by MTR rules. 6.2. Assigning a rule to a custom category You can assign a rule to your new custom category. Procedure In your MTR rule, update the category-id field as in the following example. <rule id="rule-id"> <when> ... </when> <perform> <hint title="Rule Title" effort="1" category-id="custom-category"> <message>Hint message.</message> </hint> </perform> </rule> If this rule condition is met, incidents identified by this rule use your custom category. The custom category is displayed on the dashboard and in the Issues report. Figure 6.1. Custom category on the dashboard
|
[
"<?xml version=\"1.0\"?> <categories> <category id=\"custom-category\" priority=\"20000\"> <name>Custom Category</name> <description>This is a custom category.</description> </category> </categories>",
"<rule id=\"rule-id\"> <when> </when> <perform> <hint title=\"Rule Title\" effort=\"1\" category-id=\"custom-category\"> <message>Hint message.</message> </hint> </perform> </rule>"
] |
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/rules_development_guide/rule-categories_rules-development-guide-mtr
|
Release notes
|
Release notes Red Hat Advanced Cluster Management for Kubernetes 2.12 Release notes
| null |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/release_notes/index
|
Chapter 3. Upgrading from RHEL 6.10 to RHEL 7.9
|
Chapter 3. Upgrading from RHEL 6.10 to RHEL 7.9 The in-place upgrade from RHEL 6 to RHEL 7 consists of two major stages, a pre-upgrade assessment of the system, and the actual in-place upgrade: In the pre-upgrade phase, the Preupgrade Assistant collects information from the system, analyzes it, and suggests possible corrective actions. The Preupgrade Assistant does not make any changes to your system. In the in-place upgrade phase, the Red Hat Upgrade Tool installs RHEL 7 packages and adjusts basic configuration where possible. To perform an in-place upgrade from RHEL 6 to RHEL 7: Assess the upgradability of your system using the Preupgrade Assistant, and fix problems identified in the report before you proceed with the upgrade. For detailed instructions, see the Assessing upgrade suitability section in the Upgrading from RHEL 6 to RHEL 7 documentation. Use the Red Hat Upgrade Tool to upgrade to RHEL 7.9. For a detailed procedure, see the Upgrading your system from RHEL 6 to RHEL 7 section in the Upgrading from RHEL 6 to RHEL 7 documentation.
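In command-line terms, the two stages typically map to the following invocations; this is a sketch only, the repository URL is a placeholder, and the linked documents remain the authoritative procedure.

# preupg
# redhat-upgrade-tool --network 7.9 --instrepo <repository_url>
# reboot

Review the report generated by the Preupgrade Assistant and resolve the reported problems before running the Red Hat Upgrade Tool.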
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_6_to_rhel_8/upgrading-from-rhel-6-10-to-rhel-7-9_upgrading-from-rhel-6-to-rhel-8
|
Chapter 131. Hazelcast Component
|
Chapter 131. Hazelcast Component Available as of Camel version 2.7 The hazelcast component allows you to work with the Hazelcast distributed data grid / cache. Hazelcast is an in-memory data grid, entirely written in Java (single jar). It offers a great palette of different data stores like map, multi map (same key, n values), queue, list and atomic number. The main reason to use Hazelcast is its simple cluster support. If you have enabled multicast on your network you can run a cluster with a hundred nodes with no extra configuration. Hazelcast can be simply configured to add additional features like n copies between nodes (the default is 1), cache persistence, network configuration (if needed), near cache, eviction and so on. For more information, consult the Hazelcast documentation at http://www.hazelcast.com/docs.jsp . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hazelcast</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 131.1. Hazelcast components See the following for each component's usage: * map * multimap * queue * topic * list * seda * set * atomic number * cluster support (instance) * replicatedmap * ringbuffer 131.2. Using hazelcast reference 131.2.1. By its name <bean id="hazelcastLifecycle" class="com.hazelcast.core.LifecycleService" factory-bean="hazelcastInstance" factory-method="getLifecycleService" destroy-method="shutdown" /> <bean id="config" class="com.hazelcast.config.Config"> <constructor-arg type="java.lang.String" value="HZ.INSTANCE" /> </bean> <bean id="hazelcastInstance" class="com.hazelcast.core.Hazelcast" factory-method="newHazelcastInstance"> <constructor-arg type="com.hazelcast.config.Config" ref="config"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route id="testHazelcastInstanceBeanRefPut"> <from uri="direct:testHazelcastInstanceBeanRefPut"/> <setHeader headerName="CamelHazelcastOperationType"> <constant>put</constant> </setHeader> <to uri="hazelcast-map:testmap?hazelcastInstanceName=HZ.INSTANCE"/> </route> <route id="testHazelcastInstanceBeanRefGet"> <from uri="direct:testHazelcastInstanceBeanRefGet" /> <setHeader headerName="CamelHazelcastOperationType"> <constant>get</constant> </setHeader> <to uri="hazelcast-map:testmap?hazelcastInstanceName=HZ.INSTANCE"/> <to uri="seda:out" /> </route> </camelContext> 131.2.2. By instance <bean id="hazelcastInstance" class="com.hazelcast.core.Hazelcast" factory-method="newHazelcastInstance" /> <bean id="hazelcastLifecycle" class="com.hazelcast.core.LifecycleService" factory-bean="hazelcastInstance" factory-method="getLifecycleService" destroy-method="shutdown" /> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route id="testHazelcastInstanceBeanRefPut"> <from uri="direct:testHazelcastInstanceBeanRefPut"/> <setHeader headerName="CamelHazelcastOperationType"> <constant>put</constant> </setHeader> <to uri="hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance"/> </route> <route id="testHazelcastInstanceBeanRefGet"> <from uri="direct:testHazelcastInstanceBeanRefGet" /> <setHeader headerName="CamelHazelcastOperationType"> <constant>get</constant> </setHeader> <to uri="hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance"/> <to uri="seda:out" /> </route> </camelContext> 131.3.
Publishing the Hazelcast instance as an OSGi service If you are operating in an OSGi container and want to use one instance of Hazelcast across all bundles in the same container, you can publish the instance as an OSGi service; the bundles that use the cache then only need to reference that service in the hazelcast endpoint. 131.3.1. Bundle A creates an instance and publishes it as an OSGi service <bean id="config" class="com.hazelcast.config.FileSystemXmlConfig"> <argument type="java.lang.String" value="${hazelcast.config}"/> </bean> <bean id="hazelcastInstance" class="com.hazelcast.core.Hazelcast" factory-method="newHazelcastInstance"> <argument type="com.hazelcast.config.Config" ref="config"/> </bean> <!-- publishing the hazelcastInstance as a service --> <service ref="hazelcastInstance" interface="com.hazelcast.core.HazelcastInstance" /> 131.3.2. Bundle B uses the instance <!-- referencing the hazelcastInstance as a service --> <reference ref="hazelcastInstance" interface="com.hazelcast.core.HazelcastInstance" /> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route id="testHazelcastInstanceBeanRefPut"> <from uri="direct:testHazelcastInstanceBeanRefPut"/> <setHeader headerName="CamelHazelcastOperationType"> <constant>put</constant> </setHeader> <to uri="hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance"/> </route> <route id="testHazelcastInstanceBeanRefGet"> <from uri="direct:testHazelcastInstanceBeanRefGet" /> <setHeader headerName="CamelHazelcastOperationType"> <constant>get</constant> </setHeader> <to uri="hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance"/> <to uri="seda:out" /> </route> </camelContext>
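The same put and get routes can also be expressed in the Camel Java DSL. The following is a minimal sketch, assuming a HazelcastInstance is already bound in the Camel registry under the name hazelcastInstance (for example, via the beans shown above); the route IDs, header name, and endpoint URIs mirror the XML examples, and the class name HazelcastMapRoutes is made up for illustration.

import org.apache.camel.builder.RouteBuilder;

public class HazelcastMapRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Store the message body in the distributed map "testmap".
        // The map key is typically supplied in the CamelHazelcastObjectId header.
        from("direct:testHazelcastInstanceBeanRefPut")
            .setHeader("CamelHazelcastOperationType", constant("put"))
            .to("hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance");

        // Read a value back from "testmap" and forward it to a SEDA queue.
        from("direct:testHazelcastInstanceBeanRefGet")
            .setHeader("CamelHazelcastOperationType", constant("get"))
            .to("hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance")
            .to("seda:out");
    }
}

Because the operation is carried in the CamelHazelcastOperationType header, the same endpoint can serve several operations; only the header value changes between routes.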
|
[
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hazelcast</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"<bean id=\"hazelcastLifecycle\" class=\"com.hazelcast.core.LifecycleService\" factory-bean=\"hazelcastInstance\" factory-method=\"getLifecycleService\" destroy-method=\"shutdown\" /> <bean id=\"config\" class=\"com.hazelcast.config.Config\"> <constructor-arg type=\"java.lang.String\" value=\"HZ.INSTANCE\" /> </bean> <bean id=\"hazelcastInstance\" class=\"com.hazelcast.core.Hazelcast\" factory-method=\"newHazelcastInstance\"> <constructor-arg type=\"com.hazelcast.config.Config\" ref=\"config\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"testHazelcastInstanceBeanRefPut\"> <from uri=\"direct:testHazelcastInstanceBeanRefPut\"/> <setHeader headerName=\"CamelHazelcastOperationType\"> <constant>put</constant> </setHeader> <to uri=\"hazelcast-map:testmap?hazelcastInstanceName=HZ.INSTANCE\"/> </route> <route id=\"testHazelcastInstanceBeanRefGet\"> <from uri=\"direct:testHazelcastInstanceBeanRefGet\" /> <setHeader headerName=\"CamelHazelcastOperationType\"> <constant>get</constant> </setHeader> <to uri=\"hazelcast-map:testmap?hazelcastInstanceName=HZ.INSTANCE\"/> <to uri=\"seda:out\" /> </route> </camelContext>",
"<bean id=\"hazelcastInstance\" class=\"com.hazelcast.core.Hazelcast\" factory-method=\"newHazelcastInstance\" /> <bean id=\"hazelcastLifecycle\" class=\"com.hazelcast.core.LifecycleService\" factory-bean=\"hazelcastInstance\" factory-method=\"getLifecycleService\" destroy-method=\"shutdown\" /> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"testHazelcastInstanceBeanRefPut\"> <from uri=\"direct:testHazelcastInstanceBeanRefPut\"/> <setHeader headerName=\"CamelHazelcastOperationType\"> <constant>put</constant> </setHeader> <to uri=\"hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance\"/> </route> <route id=\"testHazelcastInstanceBeanRefGet\"> <from uri=\"direct:testHazelcastInstanceBeanRefGet\" /> <setHeader headerName=\"CamelHazelcastOperationType\"> <constant>get</constant> </setHeader> <to uri=\"hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance\"/> <to uri=\"seda:out\" /> </route> </camelContext>",
"<bean id=\"config\" class=\"com.hazelcast.config.FileSystemXmlConfig\"> <argument type=\"java.lang.String\" value=\"USD{hazelcast.config}\"/> </bean> <bean id=\"hazelcastInstance\" class=\"com.hazelcast.core.Hazelcast\" factory-method=\"newHazelcastInstance\"> <argument type=\"com.hazelcast.config.Config\" ref=\"config\"/> </bean> <!-- publishing the hazelcastInstance as a service --> <service ref=\"hazelcastInstance\" interface=\"com.hazelcast.core.HazelcastInstance\" />",
"<!-- referencing the hazelcastInstance as a service --> <reference ref=\"hazelcastInstance\" interface=\"com.hazelcast.core.HazelcastInstance\" /> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route id=\"testHazelcastInstanceBeanRefPut\"> <from uri=\"direct:testHazelcastInstanceBeanRefPut\"/> <setHeader headerName=\"CamelHazelcastOperationType\"> <constant>put</constant> </setHeader> <to uri=\"hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance\"/> </route> <route id=\"testHazelcastInstanceBeanRefGet\"> <from uri=\"direct:testHazelcastInstanceBeanRefGet\" /> <setHeader headerName=\"CamelHazelcastOperationType\"> <constant>get</constant> </setHeader> <to uri=\"hazelcast-map:testmap?hazelcastInstance=#hazelcastInstance\"/> <to uri=\"seda:out\" /> </route> </camelContext>"
] |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hazelcast_component
|
Chapter 8. System Schemas and Procedures
|
Chapter 8. System Schemas and Procedures 8.1. System Schemas The built-in SYS and SYSADMIN schemas provide metadata tables and procedures against the current virtual database.
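As a hedged sketch (not taken from this chapter), the metadata tables can be read over JDBC like any other view. The driver class, connection URL format, VDB name, host, port, and credentials below are illustrative assumptions only; adjust them to your deployment.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SysSchemaQuery {
    public static void main(String[] args) throws Exception {
        // Assumed Teiid JDBC driver and URL format; substitute your VDB name, host, and port.
        Class.forName("org.teiid.jdbc.TeiidDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:teiid:myVdb@mm://localhost:31000", "user", "password");
             Statement stmt = conn.createStatement();
             // SYS.Tables is one of the built-in metadata tables; it lists the tables
             // visible in the current virtual database.
             ResultSet rs = stmt.executeQuery(
                 "SELECT SchemaName, Name FROM SYS.Tables ORDER BY SchemaName, Name")) {
            while (rs.next()) {
                System.out.println(rs.getString(1) + "." + rs.getString(2));
            }
        }
    }
}

The same SELECT can be issued from any SQL client connected to the virtual database, because SYS is exposed as an ordinary schema.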
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/chap-system_schemas_and_procedures
|
Chapter 1. Cluster notifications
|
Chapter 1. Cluster notifications Cluster notifications are messages about the status, health, or performance of your cluster. Cluster notifications are the primary way that Red Hat Site Reliability Engineering (SRE) communicates with you about the health of your managed cluster. SRE may also use cluster notifications to prompt you to perform an action in order to resolve or prevent an issue with your cluster. Cluster owners and administrators must regularly review and action cluster notifications to ensure clusters remain healthy and supported. You can view cluster notifications in the Red Hat Hybrid Cloud Console, in the Cluster history tab for your cluster. By default, only the cluster owner receives cluster notifications as emails. If other users need to receive cluster notification emails, add each user as a notification contact for your cluster. 1.1. Additional resources Customer responsibilities: Review and action cluster notifications Cluster notification emails Troubleshooting: Cluster notifications 1.2. What to expect from cluster notifications As a cluster administrator, you need to be aware of when and why cluster notifications are sent, as well as their types and severity levels, in order to effectively understand the health and administration needs of your cluster. 1.2.1. Cluster notification policy Cluster notifications are designed to keep you informed about the health of your cluster and high impact events that affect it. Most cluster notifications are generated and sent automatically to ensure that you are immediately informed of problems or important changes to the state of your cluster. In certain situations, Red Hat Site Reliability Engineering (SRE) creates and sends cluster notifications to provide additional context and guidance for a complex issue. Cluster notifications are not sent for low-impact events, low-risk security updates, routine operations and maintenance, or minor, transient issues that are quickly resolved by SRE. Red Hat services automatically send notifications when: Remote health monitoring or environment verification checks detect an issue in your cluster, for example, when a worker node has low disk space. Significant cluster life cycle events occur, for example, when scheduled maintenance or upgrades begin, or cluster operations are impacted by an event, but do not require customer intervention. Significant cluster management changes occur, for example, when cluster ownership or administrative control is transferred from one user to another. Your cluster subscription is changed or updated, for example, when Red Hat makes updates to subscription terms or features available to your cluster. SRE creates and sends notifications when: An incident results in a degradation or outage that impacts your cluster's availability or performance, for example, your cloud provider has a regional outage. SRE sends subsequent notifications to inform you of incident resolution progress, and when the incident is resolved. A security vulnerability, security breach, or unusual activity is detected on your cluster. Red Hat detects that changes you have made are creating or may result in cluster instability. Red Hat detects that your workloads are causing performance degradation or instability in your cluster. 1.2.2. Cluster notification severity levels Each cluster notification has an associated severity level to help you identify notifications with the greatest impact to your business. 
You can filter cluster notifications according to these severity levels in the Red Hat Hybrid Cloud Console, in the Cluster history tab for your cluster. Red Hat uses the following severity levels for cluster notifications, from most to least severe: Critical Immediate action is required. One or more key functions of a service or cluster is not working, or will stop working soon. A critical alert is important enough to page on-call staff and interrupt regular workflows. Major Immediate action is strongly recommended. One or more key functions of the cluster will soon stop working. A major issue may lead to a critical issue if it is not addressed in a timely manner. Warning Action is required as soon as possible. One or more key functions of the cluster are not working optimally and may degrade further, but do not pose an immediate danger to the functioning of the cluster. Info No action necessary. This severity does not describe problems that need to be addressed, only important information about meaningful or important life cycle, service, or cluster events. Debug No action necessary. Debug notifications provide low-level information about less important lifecycle, service, or cluster events to aid in debugging unexpected behavior. 1.2.3. Cluster notification types Each cluster notification has an associated notification type to help you identify notifications that are relevant to your role and responsibilities. You can filter cluster notifications according to these types in the Red Hat Hybrid Cloud Console, in the Cluster history tab for your cluster. Red Hat uses the following notification types to indicate notification relevance. Capacity management Notifications for events related to updating, creating, or deleting node pools, machine pools, compute replicas or quotas (load balancer, storage, etc.). Cluster access Notifications for events related to adding or deleting groups, roles or identity providers, for example, when SRE cannot access your cluster because STS credentials have expired, when there is a configuration problem with your AWS roles, or when you add or remove identity providers. Cluster add-ons Notifications for events related to add-on management or upgrade maintenance for add-ons, for example, when an add-on is installed, upgraded, or removed, or cannot be installed due to unmet requirements. Cluster configuration Notifications for cluster tuning events, workload monitoring, and inflight checks. Cluster lifecycle Notifications for cluster or cluster resource creation, deletion, and registration, or change in cluster or resource status (for example, ready or hibernating). Cluster networking Notifications related to cluster networking, including HTTP/S proxy, router, and ingress state. Cluster ownership Notifications related to cluster ownership transfer from one user to another. Cluster scaling Notifications related to updating, creating, or deleting node pools, machine pools, compute replicas or quota. Cluster security Events related to cluster security, for example, an increased number of failed access attempts, updates to trust bundles, or software updates with security impact. Cluster subscription Cluster expiration, trial cluster notifications, or switching from free to paid. Cluster updates Anything relating to upgrades, such as upgrade maintenance or enablement. Customer support Updates on support case status. General notification The default notification type. This is only used for notifications that do not have a more specific category. 1.3. 
Viewing cluster notifications using the Red Hat Hybrid Cloud Console Cluster notifications provide important information about the health of your cluster. You can view notifications that have been sent to your cluster in the Cluster history tab on the Red Hat Hybrid Cloud Console. Prerequisites You are logged in to the Hybrid Cloud Console. Procedure Navigate to the Clusters page of the Hybrid Cloud Console. Click the name of your cluster to go to the cluster details page. Click the Cluster history tab. Cluster notifications appear under the Cluster history heading. Optional: Filter for relevant cluster notifications Use the filter controls to hide cluster notifications that are not relevant to you, so that you can focus on your area of expertise or on resolving a critical issue. You can filter notifications based on text in the notification description, severity level, notification type, when the notification was received, and which system or person triggered the notification. 1.4. Cluster notification emails By default, when a cluster notification is sent to the cluster, it is also sent as an email to the cluster owner. You can configure additional recipients for notification emails to ensure that all appropriate users remain informed about the state of the cluster. 1.4.1. Adding notification contacts to your cluster Notification contacts receive emails when cluster notifications are sent to the cluster. By default, only the cluster owner receives cluster notification emails. You can configure other cluster users as additional notification contacts in your cluster support settings. Prerequisites Your cluster is deployed and registered to the Red Hat Hybrid Cloud Console. You are logged in to the Hybrid Cloud Console as the cluster owner or as a user with the cluster editor role. The intended notification recipient has a Red Hat Customer Portal account associated with the same organization as the cluster owner. Procedure Navigate to the Clusters page of the Hybrid Cloud Console. Click the name of your cluster to go to the cluster details page. Click the Support tab. On the Support tab, find the Notification contacts section. Click Add notification contact . In the Red Hat username or email field, enter the email address or the user name of the new recipient. Click Add contact . Verification steps The "Notification contact added successfully" message displays. Troubleshooting The Add notification contact button is disabled This button is disabled for users who do not have permission to add a notification contact. Log in to an account with the cluster owner, cluster editor, or cluster administrator role and try again. Error: Could not find any account identified by <username> or <email-address> This error occurs when the intended notification recipient is not part of the same Red Hat account organization as the cluster owner. Contact your organization administrator to add the intended recipient to the relevant organization and try again. 1.4.2. Removing notification contacts from your cluster Notification contacts receive emails when cluster notifications are sent to the cluster. You can remove notification contacts in your cluster support settings to prevent them from receiving notification emails. Prerequisites Your cluster is deployed and registered to the Red Hat Hybrid Cloud Console. You are logged in to the Hybrid Cloud Console as the cluster owner or as a user with the cluster editor role. Procedure Navigate to the Clusters page of the Hybrid Cloud Console. 
Click the name of your cluster to go to the cluster details page. Click the Support tab. On the Support tab, find the Notification contacts section. Click the options menu ( ⚙ ) beside the recipient you want to remove. Click Delete . Verification steps The "Notification contact deleted successfully" message displays. 1.5. Troubleshooting If you are not receiving cluster notification emails Ensure that emails sent from @redhat.com addresses are not filtered out of your email inbox. Ensure that your correct email address is listed as a notification contact for the cluster. Ask the cluster owner or administrator to add you as a notification contact: Cluster notification emails . If your cluster does not receive notifications Ensure that your cluster can access resources at api.openshift.com . Ensure that your firewall is configured according to the documented prerequisites: AWS firewall prerequisites
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/cluster_administration/rosa-cluster-notifications
|
Chapter 2. Eclipse Temurin features
|
Chapter 2. Eclipse Temurin features Eclipse Temurin does not contain structural changes from the upstream distribution of OpenJDK. For the list of changes and security fixes included in the latest OpenJDK 11 release of Eclipse Temurin, see OpenJDK 11.0.16 Released . New features and enhancements Review the following release notes to understand new features and feature enhancements included with the Eclipse Temurin 11.0.16 release: Vector throws ClassNotFoundException for a missing class of an element When the class of a Vector element is not found, java.util.Vector now correctly reports the ClassNotFoundException that occurs during deserialization by using java.io.ObjectInputStream.GetField.get(name, object) . Previously, a StreamCorruptedException error was displayed, which did not provide any information about the missing class. See JDK-8277157 (JDK Bug System) HTTPS channel binding support for Java Generic Security Services (GSS) or Kerberos The OpenJDK 11.0.16 release supports TLS channel binding tokens when Negotiate selects Kerberos authentication over HTTPS through javax.net.ssl.HttpsURLConnection . Channel binding tokens enhance security by mitigating some man-in-the-middle (MITM) attacks. When a server receives details regarding the binding between a TLS server certificate and authentication credentials for a client, the server detects if a MITM attack has fooled the client and can shut down the connection. The feature is controlled through the jdk.https.negotiate.cbt system property, which is described fully in Oracle documentation . See JDK-8285240 (JDK Bug System) Incorrect handling of quoted arguments in ProcessBuilder Before the OpenJDK 11.0.16 release, arguments to ProcessBuilder on Windows that started with a double quotation mark and ended with a backslash followed by a double quotation mark were passed to the command incorrectly, causing the command to fail. For example, the argument "C:\\Program Files\" was processed as having extra double quotation marks at the end. The OpenJDK 11.0.16 release resolves this issue by restoring the previously available behavior, in which the backslash (\) before the final double quotation mark is not treated specially. See JDK-8283137 (JDK Bug System) Default JDK compressor closes when IOException is encountered The DeflaterOutputStream.close() and GZIPOutputStream.finish() methods have been modified to close out the associated default JDK compressor before propagating a Throwable class up the stack. The ZipOutputStream.closeEntry() method has been modified to close out the associated default JDK compressor before propagating an IOException , not of type ZipException , up the stack. See JDK-8278386 (JDK Bug System) New system property to disable Windows Alternate Data Stream support in java.io.File The Windows implementation of java.io.File allows access to NTFS Alternate Data Streams (ADS) by default. These streams are structured in the format filename:streamname . The OpenJDK 11.0.16 release adds a system property that allows you to disable ADS support in java.io.File . To disable ADS support in java.io.File , set the jdk.io.File.enableADS system property to false . Important Disabling ADS support in java.io.File results in stricter path checking that prevents the use of special devices such as NUL: . See JDK-8285660 (JDK Bug System) Revised on 2024-05-09 16:45:51 UTC
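As a hedged illustration of the jdk.io.File.enableADS switch, the sketch below is not taken from the release notes; the file and stream names are invented, and the exact failure behavior depends on the platform and file system. On Windows, run it with java -Djdk.io.File.enableADS=false AdsCheck to observe the stricter path checking.

import java.io.File;
import java.io.IOException;

public class AdsCheck {
    public static void main(String[] args) {
        // "report.txt:hidden" uses the NTFS Alternate Data Stream syntax (filename:streamname).
        File adsPath = new File("report.txt:hidden");
        try {
            // With -Djdk.io.File.enableADS=false, stricter path checking is expected to
            // reject the colon-separated name; with the default (ADS enabled), the call
            // can create the named stream on an NTFS volume.
            boolean created = adsPath.createNewFile();
            System.out.println("ADS path accepted, created=" + created);
        } catch (IOException e) {
            System.out.println("ADS path rejected: " + e.getMessage());
        }
    }
}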
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.16/openjdk-temurin-features-11-0-16_openjdk
|