5.3.4. Adding Physical Volumes to a Volume Group
5.3.4. Adding Physical Volumes to a Volume Group To add additional physical volumes to an existing volume group, use the vgextend command. The vgextend command increases a volume group's capacity by adding one or more free physical volumes. The following command adds the physical volume /dev/sdf1 to the volume group vg1 .
[ "vgextend vg1 /dev/sdf1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/vg_grow
Chapter 32. Defining SELinux User Maps
Chapter 32. Defining SELinux User Maps Security-enhanced Linux (SELinux) sets rules over what system users can access processes, files, directories, and system settings. Both the system administrator and system applications can define security contexts that restrict or allow access from other applications. As part of defining centralized security policies in the Identity Management domain, Identity Management provides a way to map IdM users to existing SELinux user contexts and grant or restrict access to clients and services within the IdM domain, per host, based on the defined SELinux policies. 32.1. About Identity Management, SELinux, and Mapping Users Identity Management does not create or modify the SELinux contexts on a system. Rather, it uses strings that might match existing contexts on the target hosts as the basis for mapping IdM users in the domain to SELinux users on a system. Security-enhanced Linux defines kernel-level, mandatory access controls for how processes can interact with other resources on a system. Based on the expected behavior of processes on the system, and on their security implications, specific rules called policies are set. This is in contrast to higher-level discretionary access controls which are concerned primarily with file ownership and user identity. Every resource on a system is assigned a context. Resources include users, applications, files, and processes. System users are associated with an SELinux role . The role is assigned both a multilayer security context (MLS) and a multi-category security context (MCS). The MLS and MCS contexts confine users so that they can only access certain processes, files, and operations on the system. To get the full list of available SELinux users: For more information about SELinux in Red Hat Enterprise Linux, see Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide . SELinux users and policies function at the system level, not the network level. This means that SELinux users are configured independently on each system. While this is acceptable in many situations, as SELinux has common defined system users and SELinux-aware services define their own policies, it causes problems when remote users and systems access local resources. Remote users and services can be assigned a default guest context without knowing what their actual SELinux user and role should be. Identity Management can integrate an identity domain with local SELinux services. Identity Management can map IdM users to configured SELinux roles per host, per host group , or based on an HBAC rule . Mapping SELinux and IdM users improves user administration: Remote users can be granted appropriate SELinux user contexts based on their IdM group assignments. This also allows administrators to consistently apply the same policies to the same users without having to create local accounts or reconfigure SELinux. The SELinux context associated with a user is centralized. SELinux policies can be planned and related to domain-wide security policies through settings like IdM host-based access control rules. Administrators gain environment-wide visibility and control over how users and systems are assigned in SELinux. An SELinux user map defines two separate relationships that exist between three parts: the SELinux user for the system, an IdM user, and an IdM host. First, the SELinux user map defines a relationship between the SELinux user and the IdM host (the local or target system). 
Second, it defines a relationship between the SELinux user and the IdM user. This arrangement allows administrators to set different SELinux users for the same IdM users, depending on which host they are accessing. The core of an SELinux mapping rule is the SELinux system user. Each map is first associated with an SELinux user. The SELinux users which are available for mapping are configured in the IdM server, so there is a central and universal list. In this way, IdM defines a set of SELinux users it knows about and can associate with an IdM user upon login. By default, these are: unconfined_u (also used as a default for IdM users) guest_u xguest_u user_u staff_u However, this default list can be modified and any native SELinux user (see Section 32.1, "About Identity Management, SELinux, and Mapping Users" ) can be added or removed from the central IdM SELinux users list. In the IdM server configuration, each SELinux user is configured with not only its user name but also its MLS and MCS range, SELinux_user:MLS[:MCS] . The IPA server uses this format to identify the SELinux user when configuring maps. The IdM user and host configuration is very flexible. Users and hosts can be explicitly and individually assigned to an SELinux user map, or user groups or host groups can be explicitly assigned to the map. You can also associate SELinux mapping rules with host-based access control rules to make administration easier, to avoid duplicating the same rule in two places, and to keep the rules synchronized. As long as the host-based access control rule defines a user and a host, you can use it for an SELinux user map. Host-based access control rules (described in Chapter 31, Configuring Host-Based Access Control ) help integrate SELinux user maps with other access controls in IdM and can help limit or allow host-based user access for remote users, as well as define local security contexts. Note If a host-based access control rule is associated with an SELinux user map, the host-based access control rule cannot be deleted until it is removed from the SELinux user map configuration. SELinux user maps work with the System Security Services Daemon (SSSD) and the pam_selinux module. When a remote user attempts to log into a machine, SSSD checks its IdM identity provider to collect the user information, including any SELinux maps. The PAM module then processes the user and assigns it the appropriate SELinux user context. SSSD caching enables the mapping to work offline.
[ "semanage user -l Labelling MLS/ MLS/ SELinux User Prefix MCS Level MCS Range SELinux Roles guest_u user s0 s0 guest_r root user s0 s0-s0:c0.c1023 staff_r sysadm_r system_r unconfined_r staff_u user s0 s0-s0:c0.c1023 staff_r sysadm_r system_r unconfined_r sysadm_u user s0 s0-s0:c0.c1023 sysadm_r system_u user s0 s0-s0:c0.c1023 system_r unconfined_r unconfined_u user s0 s0-s0:c0.c1023 system_r unconfined_r user_u user s0 s0 user_r xguest_u user s0 s0 xguest_r" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/selinux-mapping
Chapter 3. Running the JBoss Server Migration Tool
Chapter 3. Running the JBoss Server Migration Tool You can run the JBoss Server Migration Tool in either of the following ways. Interactive mode : This mode, which is the default, allows you to choose exactly which configurations you want to migrate. Non-interactive mode : This mode allows you to run the tool without prompts. Important You must stop both the source and the target JBoss EAP servers before you run the JBoss Server Migration Tool. 3.1. Run the JBoss Server Migration Tool in Interactive Mode By default, the JBoss Server Migration Tool runs interactively. This mode allows you to choose exactly which server configurations you want to migrate. Note Interactive mode does not allow you to choose which subsystems to migrate. For information on how to configure the tool at the subsystem or task level, see Configure the Migration Tasks Performed by the JBoss Server Migration Tool . The following are the basic steps that are performed for a minimal migration. If the server from which you are migrating includes custom configurations, for example deployments, or if it is missing default resources, the tool provides additional prompts. To run the tool in interactive mode, navigate to the target server installation directory and run the following command, providing the source argument as the path to the source server installation. You are prompted to determine if you want to migrate the source server's standalone configurations, which are located in the EAP_PREVIOUS_HOME /standalone/configuration/ directory, to the target server's standalone configurations, which are located in the EAP_HOME /standalone/configuration/ directory. If you respond with no , standalone server migration is skipped and no standalone server configuration files are migrated. If you respond with yes , you see the following prompt. Respond with yes to migrate all of the source server's standalone server configuration files. Respond with no to receive a prompt for each individual standalone*.xml configuration file. Next, you are prompted to determine if you want to migrate the source server's managed domain configurations, which are located in the EAP_PREVIOUS_HOME /domain/configuration/ directory, to the target server's managed domain configurations, which are located in the EAP_HOME /domain/configuration/ directory. If you respond with no , managed domain migration is skipped and no managed domain configuration files are migrated. If you respond with yes , the tool begins migrating the managed domain content of the source server. A ciphered repository is used to store data, such as deployments and deployment overlays, that are referenced by the source server's managed domain and host configurations. Because the source and target servers use a similar content repository, the tool simply copies the data from the source server to the target server and prints the results to the console and the server log. Next, the migration tool scans the source server for managed domain configuration files, prints the results to the console, and provides the following prompt. Respond with yes to migrate all of the source server's managed domain configuration files. Respond with no to receive a prompt for each individual managed domain configuration file. Next, the migration tool scans the source server for host configuration files, prints the results to the console, and provides the following prompt. Respond with yes to migrate all of the source server's host configuration files. 
Respond with no to receive a prompt for each individual host configuration file. Upon completion, you should see the following message in the server console. 3.2. Run the JBoss Server Migration Tool in Non-interactive Mode You can run the JBoss Server Migration Tool in non-interactive mode. This mode allows it to run without prompts. Note The JBoss Server Migration Tool automatically migrates all subsystem configurations for all server configuration files. For information on how to configure the tool at the subsystem or task level, see Configure the Migration Tasks Performed by the JBoss Server Migration Tool . To run the tool in non-interactive mode, navigate to the target server installation directory and run the following command, providing the source argument as the path to the source server installation and setting the --interactive or -i argument to false . By default, the tool automatically migrates all of the source server's standalone and managed domain configuration files. However, you can configure the tool's properties to skip migration of specific configurations. Upon completion, you should see the following message in the server console.
[ "EAP_HOME /bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME", "Migrate the source's standalone server? yes/no? yes", "Migrate all configurations? yes/no? yes", "Migrate the source's managed domain? yes/no? yes", "INFO [ServerMigrationTask#397] Migrating domain content found: [22/caa450a9ba3b84eaf5a15b6da418b92ce6c98e/content, 23/b62a37ba8a4830622bfcdb960280577cc6796e/content] INFO [ServerMigrationTask#398] Resource with path / EAP_HOME /domain/data/content/22/caa450a9ba3b84eaf5a15b6da418b92ce6c98e/content migrated. INFO [ServerMigrationTask#399] Resource with path / EAP_HOME /domain/data/content/23/b62a37ba8a4830622bfcdb960280577cc6796e/content migrated.", "Migrate all configurations? yes/no? yes", "INFO [ServerMigrationTask#457] Retrieving source's host configurations INFO [ServerMigrationTask#457] /jboss-eap-6.4/domain/configuration/host-master.xml INFO [ServerMigrationTask#457] /jboss-eap-6.4/domain/configuration/host-slave.xml INFO [ServerMigrationTask#457] /jboss-eap-6.4/domain/configuration/host.xml Migrate all configurations? yes/no? yes", "Migration Result: SUCCESS", "EAP_HOME /bin/jboss-server-migration.sh --source EAP_PREVIOUS_HOME --interactive false", "Migration Result: SUCCESS" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_the_jboss_server_migration_tool/running_the_server_migration_tool
Chapter 10. Viewing diagnostics
Chapter 10. Viewing diagnostics Use the Diagnostics tab to view diagnostic information about the JVM via the JVM DiagnosticCommand and HotspotDiagnostic interfaces. Note The functionality is similar to the Diagnostic Commands view in Java Mission Control (jmc) or the command line tool jcmd. The plugin will provide corresponding jcmd commands in some scenarios. Procedure To retrieve the number of instances of loaded classes and the number of bytes they take up, click Class Histogram . If the operation is repeated, the tab shows the difference since the last run. To view the JVM diagnostic flag settings, click JVM Flags . For a running JVM, you can also modify the flag settings. Additional resources The supported JVM depends on the platform. For more information, go to one of the following sources: http://www.oracle.com/technetwork/java/vmoptions-jsp-140102.html http://openjdk.java.net/groups/hotspot/docs/RuntimeOverview.html
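For comparison, roughly the same information can be gathered on the command line with jcmd against the running JVM, where <PID> is a placeholder for the JVM's process ID:

# Print a class histogram: instance counts and bytes per loaded class
jcmd <PID> GC.class_histogram
# Print the JVM's flag settings
jcmd <PID> VM.flags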
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_springboot_standalone/fuse-console-view-diagnostics-all_springboot
Schedule and quota APIs
Schedule and quota APIs OpenShift Container Platform 4.15 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/schedule_and_quota_apis/index
Chapter 4. New Features
Chapter 4. New Features This chapter documents new features and major enhancements introduced in Red Hat Enterprise Linux 7.7. 4.1. Authentication and Interoperability SSSD now fully supports sudo rules stored in AD The System Security Services Daemon (SSSD) now fully supports sudo rules stored in Active Directory (AD). This feature was first introduced in Red Hat Enterprise Linux 7.0 as a Technology Preview. Note that the administrator must update the AD schema to support sudo rules. ( BZ#1664447 ) SSSD no longer uses the fallback_homedir value from the [nss] section as fallback for AD domains Prior to RHEL 7.7, the SSSD fallback_homedir parameter in an Active Directory (AD) provider had no default value. If fallback_homedir was not set, SSSD used instead the value from the same parameter from the [nss] section in the /etc/sssd/sssd.conf file. To increase security, SSSD in RHEL 7.7 introduced a default value for fallback_homedir . As a consequence, SSSD no longer falls back to the value set in the [nss] section. If you want to use a different value than the default for the fallback_homedir parameter in an AD domain, you must manually set it in the domain's section. (BZ#1740779) Directory Server rebased to version 1.3.9.1 The 389-ds-base packages have been upgraded to upstream version 1.3.9.1, which provides a number of bug fixes and enhancements over the version. ( BZ#1645359 ) The Directory Server Auto Membership plug-in can now be additionally invoked by modify operations This update enhances the Auto Membership plug-in in Directory Server to work with modify operations. Previously, the plug-in was only invoked by ADD operations. When an administrator changed a user entry, and that change impacted what Auto Membership groups the user belonged to, the user was not removed from the old group and only added to the new group. With the enhancement provided by this update, users can now configure that Directory Server removes the user from the old group in the mentioned scenario. To enable the new behavior, set the autoMemberProcessModifyOps attribute in the cn=Auto Membership Plugin,cn=plugins,cn=config entry to on . (BZ#1438144) The replicaLastUpdateStatusJSON status attribute has been added to replication agreements in Directory Server This update introduces the replicaLastUpdateStatusJSON status attribute to the cn=<replication_agreement_name>,cn=replica,cn=<suffix_DN>,cn=mapping tree,cn=config entry. The status displayed in the replicaLastUpdateStatus attribute was vague and unclear. The new attribute provides a clear status message and result code and can be parsed by other applications that support the JSON format. ( BZ#1561769 ) IdM now provides a utility to promote a CA to a CRL generation master With this enhancement, administrators can promote an existing Identity Management (IdM) certificate authority (CA) to a certificate revocation list (CRL) generation master or remove this feature from a CA. Previously, multiple manual steps were required to configure an IdM CA as CRL generation master, and the procedure was error-prone. As a result, administrators can now use the ipa-crlgen-manage enable and ipa-crlgen-manage disable commands to enable and disable CRL generation on an IdM CA. ( BZ#1690037 ) A command to detect and remove orphaned automember rules has been added to IdM Automember rules in Identity Management (IdM) can refer to a hostgroup or a group that has been deleted. 
Previously, the ipa automember-rebuild command failed unexpectedly and it was difficult to diagnose the reason for the failure. This enhancement adds the ipa automember-find-orphans command to IdM to identify and remove such orphaned automember rules. ( BZ#1390757 ) IdM now supports IP addresses in the SAN extension of certificates In certain situations, administrators need to issue certificates with an IP address in the Subject Alternative Name (SAN) extension. This update adds this feature. As a result, administrators can set an IP address in the SAN extension if the address is managed in the IdM DNS service and associated with the subject host or service principal. ( BZ#1586268 ) IdM now supports renewing expired system certificates when the server is offline With this enhancement, administrators can renew expired system certificates when Identity Management (IdM) is offline. When a system certificate expires, IdM fails to start. The new ipa-cert-fix command replaces the workaround to manually set the date back to proceed with the renewal process. As a result, downtime and support costs are reduced in the mentioned scenario. ( BZ#1690191 ) pki-core rebased to version 10.5.16 The pki-core packages have been upgraded to upstream version 10.5.16, which provides a number of bug fixes and enhancements over the previous version. ( BZ#1633422 ) Certificate System can now create CSRs with SKI extension for external CA signing With this enhancement, Certificate System supports creating a certificate signing request (CSR) with the Subject Key Identifier (SKI) extension for external certificate authority (CA) signing. Certain CAs require this extension either with a particular value or derived from the CA public key. As a result, administrators can now use the pki_req_ski parameter in the configuration file passed to the pkispawn utility to create a CSR with the SKI extension. (BZ#1491453) Uninstalling Certificate System no longer removes all log files Previously, Certificate System removed all corresponding logs when you uninstalled subsystems. With this update, by default, the pkidestroy utility no longer removes the logs. To remove the logs when you uninstall a subsystem, pass the new --remove-logs parameter to pkidestroy. Additionally, this update adds the --force parameter to pkidestroy. Previously, an incomplete installation left some files and directories, which prevented a complete uninstallation of a Certificate System instance. Pass --force to pkidestroy to completely remove a subsystem and all corresponding files of an instance. ( BZ#1372056 ) The pkispawn utility now supports using keys created in the NSS database during CA, KRA, and OCSP installations Previously, during a Certificate System installation, the pkispawn utility only supported creating new keys and importing existing keys for system certificates. With this enhancement, pkispawn now supports using keys the administrator generates directly in the NSS database during certificate authority (CA), key recovery authority (KRA), and online certificate status protocol (OCSP) installations. ( BZ#1616134 ) Certificate System now preserves the logs of installations when reinstalling the service Previously, the pkispawn utility reported a name collision error when installing a Certificate System subsystem on a server with an existing Certificate System log directory structure. With this enhancement, Certificate System reuses the existing log directory structure to preserve logs of installations. 
( BZ#1644769 ) Certificate System now supports additional strong ciphers by default With this update, the following additional ciphers, which are compliant with the Federal Information Processing Standard (FIPS), are enabled by default in Certificate System: TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_256_GCM_SHA384 For a full list of enabled ciphers, enter: If you use a Hardware Security Module (HSM) with Certificate System, see the documentation of the HSM for supported ciphers. ( BZ#1554055 ) The samba packages have been updated to version 4.9.1 The samba packages have been upgraded to upstream version 4.9.1, which provides a number of bug fixes and enhancements over the previous version. The most notable changes include: The Clustered Trivial Database (CTDB) configuration has been changed completely. Administrators must now specify parameters for the ctdb service and corresponding utilities in the /etc/ctdb/ctdb.conf file in a format similar to the Samba configuration. For further details, see the ctdb.conf(5) man page. Use the /usr/share/doc/ctdb/examples/config_migrate.sh script to migrate the current configuration. The default values of the following parameters in the /etc/samba/smb.conf file have been changed as follows: map readonly : no store dos attributes : yes ea support : yes full_audit:success : Not set full_audit:failure : Not set The net ads setspn command has been added for managing Windows Service Principal Names (SPN) on Active Directory (AD). This command provides the same basic functionality as the setspn.exe utility on Windows. For example, administrators can use it to add, delete, and list Windows SPNs stored in an AD computer object. The net ads keytab add command no longer attempts to convert the service class passed to the command into a Windows SPN, which is then added to the AD computer object. By default, the command now only updates the keytab file. The new net ads add_update_ads command has been added to preserve the previous behavior. However, administrators should use the new net ads setspn add command instead. Samba automatically updates its tdb database files when the "smbd", "nmbd", or "winbind" daemon starts. Back up the database files before starting Samba. Note that Red Hat does not support downgrading tdb database files. For further information about notable changes, read the upstream release notes before updating: https://www.samba.org/samba/history/samba-4.9.0.html ( BZ#1649434 ) 4.2. Clustering Maximum size of a supported RHEL HA cluster increased from 16 to 32 nodes With this release, Red Hat supports cluster deployments of up to 32 full cluster nodes. ( BZ#1374857 ) Improved status display of fencing actions The output of the pcs status command now shows failed and pending fence actions. (BZ#1461964) 4.3. Compiler and Tools New packages: python3 New python3 packages are available in RHEL 7, which provide the Python 3.6 interpreter, as well as the pip and setuptools utilities. Previously, Python 3 versions were available only as a part of Red Hat Software Collections. When installing, invoking, or otherwise interacting with Python 3, always specify the major version of Python. For example, to install Python 3, use the yum install python3 command. 
All Python-related commands should also include the version, for example, pip3 . Note that Python 3 is the default Python implementation in RHEL 8, so it is advisable to migrate your Python 2 code to Python 3. For more information on how to migrate large code bases to Python 3, see The Conservative Python 3 Porting Guide . (BZ#1597718) New packages: compat-sap-c++-8 The compat-sap-c++-8 packages contain the libstdc++ library named compat-sap-c++-8.so , which is a runtime compatibility library needed for SAP applications. The compat-sap-c++-8 packages are based on GCC 8. (BZ#1669683) The elfutils packages have been rebased to version 0.176 The elfutils packages have been upgraded to upstream version 0.176. Notable changes include: Various bugs related to multiple CVEs have been fixed. The libdw library has been extended with the dwelf_elf_begin() function which is a variant of elf_begin() that handles compressed files. The eu-readelf tool now recognizes and prints out GNU Property notes and GNU Build Attribute ELF Notes with the --notes or -n options. A new --reloc-debug-sections-only option has been added to the eu-strip tool to resolve all trivial relocations between debug sections in place without any other stripping. This functionality is relevant only for ET_REL files in certain circumstances. A new function dwarf_next_lines has been added to the libdw library. This function reads .debug_line data without CU. The dwarf_begin_elf function from the libdw library now accepts ELF files containing only .debug_line or .debug_frame sections. (BZ#1676504) gcc-libraries rebased to version 8.3.1 The gcc-libraries packages have been updated to the upstream version 8.3.1 which brings a number of bug fixes. (BZ#1551629) Geolite2 Databases are now available This update introduces Geolite2 Databases as an addition to the legacy Geolite Databases, provided by the GeoIP package. Geolite2 Databases are provided by multiple packages. The libmaxminddb package includes the library and the mmdblookup command line tool, which enables manual searching of addresses. The geoipupdate binary from the legacy GeoIP package is now provided by the geoipupdate package, and is capable of downloading both legacy databases and the new Geolite2 databases. The GeoIP package, together with the legacy database, is no longer supported in upstream, and is not distributed with RHEL 8. (BZ#1643472, BZ#1643470, BZ#1643464) Date formatting updates for the Japanese Reiwa era The GNU C Library now provides correct Japanese era name formatting for the Reiwa era starting on May 1st, 2019. The time handling API data has been updated, including the data used by the strftime and strptime functions. All APIs will correctly print the Reiwa era including when strftime is used along with one of the era conversion specifiers such as %EC , %EY , or %Ey . ( BZ#1555189 ) SystemTap rebased to version 4.0 The SystemTap instrumentation tool has been upgraded to upstream version 4.0. Notable improvements include: The extended Berkeley Packet Filter (eBPF) backend has been improved, especially for strings and functions. To use this backend, start SystemTap with the --runtime=bpf option. A new export network service for use with the Prometheus monitoring system has been added. The system call probing implementation has been improved to use the kernel tracepoints if necessary. 
( BZ#1669605 ) Valgrind rebased to version 3.14 The Valgrind packages have been upgraded to upstream version 3.14, which provides a number of bug fixes and enhancements over the previous version: Valgrind can now process integer and string vector instructions for the z13 processor of the IBM Z architecture. An option --keep-debuginfo=no|yes has been added to retain debugging information for unloaded code. This allows saved stack traces to include file and line information in more cases. For more information and known limitations, see the Valgrind user manual. The Helgrind tool can now be configured to compute full history stack traces as deltas with the new --delta-stacktrace=yes|no option. As a result, keeping full Helgrind history with the --history-level=full option can be up to 25% faster when --delta-stacktrace=yes is added. The false positive rate in the Memcheck tool has been reduced on the AMD64 and 64-bit ARM architectures. Notably, you can use the --expensive-definedness-checks=no|auto|yes option to control analysis for the expensive definedness checks without loss of precision. (BZ#1519410) Performance Co-Pilot rebased to version 4.3.2 The Performance Co-Pilot (PCP) has been updated to upstream version 4.3.2. Notable improvements include: The pcp-dstat tool now includes historical analysis and Comma-separated Values (CSV) format output. The log utilities can use metric labels and help text records. The pmdaperfevent tool now reports the correct CPU numbers at the lower Simultaneous Multi Threading (SMT) levels. The pmdapostgresql tool now supports Postgres series 10.x. The pmdaredis tool now supports Redis series 5.x. The pmdabcc tool has been enhanced with dynamic process filtering and per-process syscalls, ucalls, and ustat. The pmdammv tool now exports metric labels, and the format version is increased to 3. The pmdagfs2 tool supports additional glock and glock holder metrics. Several fixes have been made to the SELinux policy. The pmcd utility now supports PMDA suspend and resume (fencing) without configuration changes. Pressure-stall information metrics are now reported. Additional VDO metrics are now reported. The pcp-atop tool now reports statistics for pressure stall information, infiniband, perf_event, and NVIDIA GPUs. The pmlogger and pmie tools can now use systemd timers as an alternative to cron jobs. ( BZ#1647308 , BZ#1641161) ptp4l now supports team interfaces in active-backup mode With this update, support for team interfaces in active-backup mode has been added into the PTP Boundary/Ordinary Clock (ptp4l). (BZ#1650672) linuxptp rebased to version 2.0 The linuxptp packages have been upgraded to upstream version 2.0, which provides a number of bug fixes and enhancements over the previous version. The most notable features are as follows: Support for unicast messaging has been added Support for telecom G.8275.1 and G.8275.2 profiles has been added Support for the NetSync Monitor (NSM) protocol has been added Implementation of transparent clock (TC) has been added ( BZ#1623919 ) The DateTime::TimeZone Perl module is now aware of recent time zone updates The Olson time zone database has been updated to version 2018i. Previously, applications written in the Perl language that use the DateTime::TimeZone module mishandled time zones that changed their specifications since version 2017b due to the outdated database. ( BZ#1537984 ) The trace-cmd packages have been updated to version 2.7 The updated packages provide the latest bug fixes and upstream features. 
As a result, the Red Hat Enterprise Linux users can now use an up-to-date trace-cmd command. (BZ#1655111) vim rebased to version 7.4.629 The vim packages have been upgraded to upstream version 7.4.629, which is in RHEL 6. This version provides a number of bug fixes and enhancements over the version. Notable enhancements include the breakindent feature. For more information about the feature, see :help breakindent in Vim. ( BZ#1563419 ) 4.4. Desktop cups-filters updated The cups-filters packages, distributed in version 1.0.35, have been updated to provide the following enhancements: The cups-browsed daemon, which provides the functionality removed from CUPS since the version 1.5, has been rebased to version 1.13.4, excluding the support for CUPS temporary queues. A new backend, implicitclass , has been introduced to support high availability and load balancing. ( BZ#1485502 ) Mutter now allows for mass-deployable homogenized display configuration The Mutter window manager now makes it possible to deploy pre-set display configurations for all users on a system. As a result, Mutter no longer requires that the configuration for each user is copied to its own configuration directory, but it can use a system wide configuration file instead. This feature makes Mutter suitable for mass deployment of homogenized display configuration. To set the configuration for a single user, create and populate the ~/.config/monitors.xml file. For the login screen in particular, use the ~/gdm/.config/monitors.xml file. For system-wide configurations, use the /etc/xdg/monitors.xml file. ( BZ#1583825 ) 4.5. File Systems Improved quota reports The quota tool in non-verbose mode now distinguishes between a file system with no limits and a file system with limits but with no used resources. Previously, none was printed for both use cases, which was confusing. (BZ#1601109) 4.6. Installation and Booting The graphical installation program now detects if SMT is enabled Previously, the RHEL 7 graphical installation program did not detect if Simultaneous Multithreading (SMT) was enabled on a system. With this update, the installation program now detects if SMT is enabled on a system. If it is enabled, a warning message is displayed in the Status bar, which is located at the bottom of the Installation Summary window. (BZ#1678353) New --g-libs option for the find-debuginfo.sh script This update introduces the new --g-libs option for the find-debuginfo.sh script. This new option is an alternative to -g option, which instructed the script to remove only debugging symbols from both binary and library files. The new --g-libs option works the same way as -g , but only for library files. The binary files are stripped completely. ( BZ#1663264 ) The Image Builder rebased to version 19.7.33 and fully supported The Image Builder, provided by the lorax-composer package in the RHEL 7 Extras Channel, has been upgraded to version 19.7.33. Notable changes in this version include: The Image Builder, previously available as Technology Preview, is now fully supported. Cloud images can be built for Amazon Web Services, VMware vSphere, and OpenStack. A Red Hat Content Delivery Network (CDN) repository mirror is no longer needed. You can now set a host name and create users. Boot loader parameters can be set, such as disabling Simultaneous Multi-Threading (SMT) with the nosmt=force option. This is only possible from composer-cli tool on command line. The web console UI can now edit external repositories ("sources"). 
The Image Builder can now run with SElinux in enforcing mode. To access the Image Builder functionality, use a command-line interface in the composer-cli utility, or a graphical user interface in the RHEL 7 web console from the cockpit-composer package. ( BZ#1713880 , BZ#1656105 , BZ#1654795, BZ#1689314 , BZ#1688335 ) 4.7. Kernel Kernel version in RHEL 7.7 Red Hat Enterprise Linux 7.7 is distributed with the kernel version 3.10.0-1062. (BZ#1801759) Live patching for the kernel is now available Live patching for the kernel, kpatch , provides a mechanism to patch a running kernel without rebooting or restarting any processes. Live kernel patches will be provided for selected minor release streams of RHEL covered under the Extended Update Support (EUS) policy to remediate Critical and Important CVEs. To subscribe to the kpatch stream for the RHEL 7.7 version of kernel, install the kpatch-patch-3_10_0-1062 package provided by the RHEA-2019:2011 advisory. For more information, see Applying patches with kernel live patching in the Kernel Administration Guide. ( BZ#1728504 ) The IMA and EVM features are now supported on all architectures The Integrity Measurement Architecture (IMA) and Extended Verification Module (EVM) are now fully supported on all available architectures. In RHEL 7.6, they were supported only on the AMD64 and Intel 64 architecture. IMA and EVM enable the kernel to check the integrity of files at runtime using labels attached to extended attributes. You can use IMA and EVM to monitor if files have been accidentally or maliciously altered. The ima-evm-utils package provides userspace utilities to interface between user applications and the kernel features. (BZ#1636601) Spectre V2 mitigation default changed from IBRS to Retpoline in new installations of RHEL 7.7 The default mitigation for the Spectre V2 vulnerability (CVE-2017-5715) for systems with the 6th Generation Intel Core Processors and its close derivatives [1] has changed from Indirect Branch Restricted Speculation (IBRS) to Retpoline in new installations of RHEL 7.7. Red Hat has implemented this change as a result of Intel's recommendations to align with the defaults used in the Linux community and to restore lost performance. However, note that using Retpoline in some cases may not fully mitigate Spectre V2. Intel's Retpoline document [2] describes any cases of exposure. This document also states that the risk of an attack is low. For installations of RHEL 7.6 and prior, IBRS is still the default mitigation. New installations of RHEL 7.7 and later versions will have "spectre_v2=retpoline" added to the kernel command line. No change will be made for upgrades to RHEL 7.7 from earlier versions of RHEL 7. Note that users can select which spectre_v2 mitigation will be used. To select Retpoline: a) Add the "spectre_v2=retpoline" flag to the kernel command line, and reboot. b) Alternatively, issue the following command at runtime: "echo 1 > /sys/kernel/debug/x86/retp_enabled" To select IBRS: a) Remove the "spectre_v2=retpoline" flag from the kernel command line, and reboot. b) Alternatively, issue the following command at runtime: "echo 1 > /sys/kernel/debug/x86/ibrs_enabled" If one or more kernel modules were not built with Retpoline support, the /sys/devices/system/cpu/vulnerabilities/spectre_v2 file will indicate vulnerability and the /var/log/messages file will identify the offending modules. See How to determine which modules are responsible for spectre_v2 returning "Vulnerable: Retpoline with unsafe module(s)"? 
for further information. [1] "6th generation Intel Core Processors and its close derivatives" are what the Intel's Retpoline document refers to as "Skylake-generation". [2] Retpoline: A Branch Target Injection Mitigation - White Paper (BZ#1653428, BZ#1659626) PMTU discovery and route redirection is now supported with VXLAN and GENEVE tunnels Previously, the kernel in Red Hat Enterprise Linux (RHEL) did not handle Internet Control Message Protocol (ICMP) and ICMPv6 messages for Virtual Extensible LAN (VXLAN) and Generic Network Virtualization Encapsulation (GENEVE) tunnels. As a consequence, Path MTU (PMTU) discovery and route redirection was not supported with VXLAN and GENEVE tunnels. With this update, the kernel handles ICMP "Destination Unreachable" and "Redirect Message", as well as ICMPv6 "Packet Too Big" and "Destination Unreachable" error messages by adjusting the PMTU and modifying forwarding information. As a result, PMTU discovery and route redirection are now supported with VXLAN and GENEVE tunnels. (BZ#1511372) A new kernel command-line option to disable hardware transactional memory on IBM POWER RHEL 7.7 introduces the ppc_tm=off kernel command-line option. When the user passes ppc_tm=off at boot time, the kernel disables hardware transactional memory on IBM POWER systems and makes it unavailable to applications. Previously, the RHEL 7 kernel unconditionally made the hardware transactional memory feature on IBM POWER systems available to applications whenever it was supported by hardware and firmware. (BZ#1694778) Intel(R) Omni-Path Architecture (OPA) Host Software Intel(R) Omni-Path Architecture (OPA) host software is fully supported in Red Hat Enterprise Linux 7.7. Intel OPA provides Host Fabric Interface (HFI) hardware with initialization and setup for high performance data transfers (high bandwidth, high message rate, low latency) between compute and I/O nodes in a clustered environment. For instructions on installing Intel Omni-Path Architecture documentation, see: https://www.intel.com/content/dam/support/us/en/documents/network-and-i-o/fabric-products/Intel_OP_Software_RHEL_7_7_RN_K65224.pdf (BZ#1739072) IBPB cannot be directly disabled With this RHEL kernel source code update, it is not possible to directly disable the Indirect Branch Prediction Barrier (IBPB) control mechanism. Red Hat does not anticipate any performance issues from this setting. (BZ#1807647) 4.8. Real-Time Kernel kernel-rt source tree now matches the latest RHEL 7 tree The kernel-rt sources have been upgraded to be based on the latest Red Hat Enterprise Linux kernel source tree, which provides a number of bug fixes and enhancements over the version. ( BZ#1642619 ) The RHEL 7 kernel-rt timer wheel has been updated to a non-cascading timer wheel The current timer wheel has been switched to a non-cascading wheel which improves the timer subsystem and reduces the overheads on many operations. With the backport of the non-cascading timer wheel, kernel-rt is very close to the upstream kernel in enabling the backport of future improvements. ( BZ#1593361 ) 4.9. Networking rpz-drop now prevents BIND for repetitive resolving of unreachable domain The Berkeley Internet Name Domain (BIND) version distributed with RHEL 7.7 introduces the rpz-drop policy, which enables to mitigate DNS amplification attacks. Previously, if an attacker generated a lot of queries for an irresolvable domain, BIND was constantly trying to resolve such queries, which caused considerable load on CPU. 
With rpz-drop , BIND does not process the queries when the target domain is unreachable. This behavior significantly saves CPU capacity. (BZ#1325789) bind rebased to version 9.11 The bind packages have been upgraded to upstream version 9.11, which provides a number of bug fixes and enhancements over the previous version: New features: A new method of provisioning secondary servers called Catalog Zones has been added. Domain Name System Cookies can now be sent by the named service and the dig utility. The Response Rate Limiting feature can now help with mitigation of DNS amplification attacks. Performance of response-policy zone (RPZ) has been improved. A new zone file format called map has been added. Zone data stored in this format can be mapped directly into memory, which enables zones to load significantly faster. A new tool called delv (domain entity lookup and validation) for sending DNS queries and validating the results has been added. The tool uses the same internal resolver and validator logic as the named daemon. A new mdig command is now available. This command is a version of the dig command that sends multiple pipelined queries and then waits for responses, instead of sending one query and waiting for the response before sending the next query. A new prefetch option, which improves the recursive resolver performance, has been added. A new in-view zone option, which allows zone data to be shared between views, has been added. When this option is used, multiple views can serve the same zones authoritatively without storing multiple copies in memory. A new max-zone-ttl option, which enforces maximum TTLs for zones, has been added. When a zone containing a higher TTL is loaded, the load fails. Dynamic DNS (DDNS) updates with higher TTLs are accepted but the TTL is truncated. New quotas have been added to limit queries that are sent by recursive resolvers to authoritative servers experiencing denial-of-service attacks. The nslookup utility now looks up both IPv6 and IPv4 addresses by default. The named service now checks whether other name server processes are running before starting up. When loading a signed zone, named now checks whether a Resource Record Signature's (RRSIG) inception time is in the future, and if so, it regenerates the RRSIG immediately. Zone transfers now use smaller message sizes to improve message compression, which reduces network usage. Feature changes: The version 3 XML schema for the statistics channel, including new statistics and a flattened XML tree for faster parsing, is provided by the HTTP interface. The legacy version 2 XML schema is still the default format. ( BZ#1640561 , BZ#1578128 ) ipset rebased to version 7.1 The ipset packages have been upgraded to upstream version 7.1, which provides a number of bug fixes and enhancements over the previous version: The ipset protocol version 7 introduces the IPSET_CMD_GET_BYNAME and IPSET_CMD_GET_BYINDEX operations. Additionally, the user space component can now detect the exact compatibility level that the kernel component supports. A significant number of bugs have been fixed, such as memory leaks and use-after-free bugs. (BZ#1649080) NetworkManager now supports VLAN filtering on bridge interfaces With this enhancement, administrators can configure virtual LAN (VLAN) filtering on bridge interfaces in the corresponding NetworkManager connection profiles. This enables administrators to define VLANs directly on bridge ports. 
( BZ#1652910 ) NetworkManager now supports configuring policy routing rules Previously, users had to set up policy routing rules outside of NetworkManager , for example by using the dispatcher script provided by the NetworkManager-dispatcher-routing-rules package. With this update, users can now configure rules as part of a connection profile. As a result, NetworkManager adds the rules when the profile is activated and removes the rules when the profile is deactivated. ( BZ#1652653 ) 4.10. Security NSS now supports keys restricted to RSASSA-PSS The Network Security Services (NSS) library now supports keys restricted to Rivest-Shamir-Adleman Signature Scheme with Appendix - Probabilistic Signature Scheme (RSASSA-PSS). The legacy signature scheme, Public Key Cryptography Standard #1 (PKCS#1) v1.5, permits the keys to be reused for encrypting data or keys. This makes those keys vulnerable to signature forging attacks published by Bleichenbacher. Restricting the keys to the RSASSA-PSS algorithm makes them resilient to attacks that utilize decryption. With this update, NSS can be configured to support keys which are restricted to the RSASSA-PSS algorithm only. This enables the use of such keys included in X.509 certificates for both server and client authentication in TLS 1.2 and 1.3. ( BZ#1431241 ) NSS now accepts signatures with the NULL object only when correctly included in PKCS#1 v1.5 DigestInfo The first specification of PKCS#1 v1.5-compatible signatures used text that could be interpreted in two different ways. The encoding of parameters that are encrypted by the signer could include an encoding of a NULL ASN.1 object or omit it. Later revisions of the standard made the requirement to include the NULL object encoding explicit. Previous versions of Network Security Services (NSS) tried to verify signatures while allowing either encoding. With this version, NSS accepts signatures only when they correctly include the NULL object in the DigestInfo structure in the PKCS#1 v1.5 signature. This change impacts interoperability with implementations that continue to create signatures that are not PKCS#1 v1.5-compliant. (BZ#1552854) OpenSC supports HID Crescendo 144K smart cards With this enhancement, OpenSC supports HID Crescendo 144K smart cards. These tokens are not fully compatible with the Common Access Card (CAC) specification. These tokens also use some more advanced parts of the specification than CAC tokens issued by the government. The OpenSC driver has been enhanced to manage these tokens and special cases of the CAC specification to support HID Crescendo 144K smart cards. (BZ#1612372) AES-GCM ciphers are enabled in OpenSSH in FIPS mode Previously, AES-GCM ciphers were allowed in FIPS mode only in TLS. In the current version, we clarified with NIST that these ciphers can be allowed and certified in OpenSSH , as well. As a result, the AES-GCM ciphers are allowed in OpenSSH running in FIPS mode. (BZ#1600869) SCAP Security Guide supports Universal Base Image SCAP Security Guide security policies have been enhanced to support Universal Base Image (UBI) containers and UBI images, including ubi-minimal images. This enables configuration compliance scanning of UBI containers and images using the atomic scan command. UBI containers and images can be scanned against any profile shipped in SCAP Security Guide . Only the rules that are relevant to secure configuration of UBI are evaluated, which prevents false positives and produces relevant results. 
The rules that are not applicable to UBI images and containers are skipped automatically. (BZ#1695213) scap-security-guide rebased to version 0.1.43 The scap-security-guide packages have been upgraded to upstream version 0.1.43, which provides a number of bug fixes and enhancements over the previous version, most notably: Minimum supported Ansible version changed to 2.5 New RHEL7 profile: VPP - Protection Profile for Virtualization v. 1.0 for Red Hat Enterprise Linux Hypervisor (RHELH) ( BZ#1684545 ) tangd_port_t allows changes of the default port for Tang This update introduces the tangd_port_t SELinux type that allows the tangd service to run as confined in SELinux enforcing mode. That change helps to simplify configuring a Tang server to listen on a user-defined port and it also preserves the security level provided by SELinux in enforcing mode. ( BZ#1650909 ) A new SELinux type: boltd_t A new SELinux type, boltd_t , confines boltd , a system daemon for managing Thunderbolt 3 devices. As a result, boltd now runs as a confined service in SELinux enforcing mode. ( BZ#1589086 ) A new SELinux policy class: bpf A new SELinux policy class, bpf , has been introduced. The bpf class enables users to control the Berkeley Packet Filter (BPF) flow through SELinux, and allows inspection and simple manipulation of Extended Berkeley Packet Filter (eBPF) programs and maps controlled by SELinux. (BZ#1626115) shadow-utils rebased to version 4.6 The shadow-utils packages have been upgraded to upstream version 4.6, which provides a number of bug fixes and enhancements over the previous version, most notably the newuidmap and newgidmap commands for manipulating the UID and GID namespace mapping. ( BZ#1498628 ) 4.11. Servers and Services chrony rebased to version 3.4 The chrony packages have been upgraded to upstream version 3.4, which provides a number of bug fixes and enhancements over the previous version, notably: The support for hardware time stamping has received improvements. The range of supported polling intervals has been extended. Burst and filter options have been added to NTP sources. A pid file has been moved to prevent the chronyd -q command from breaking the system service. An incompatibility with NTPv1 clients has been fixed. ( BZ#1636117 ) GNU enscript now supports ISO-8859-15 encoding With this update, support for ISO-8859-15 encoding has been added into the GNU enscript program. ( BZ#1573876 ) ghostscript rebased to version 9.25 The ghostscript packages have been upgraded to upstream version 9.25, which provides a number of bug fixes and enhancements over the previous version. (BZ#1636115) libssh2 package rebased to version 1.8.0 This update rebases the libssh2 package to version 1.8.0. This version includes the following: Added support for HMAC-SHA-256 and HMAC-SHA-512 Added support for diffie-hellman-group-exchange-sha256 key exchange Fixed many small bugs in the code (BZ#1592784) ReaR updates ReaR has been updated to a later version. Notable bug fixes and enhancements over the previous version include: Shared libraries provided by the system are now correctly added into the ReaR rescue system in cases where additional libraries of the same name are needed by the backup mechanism. Verification of NetBackup binaries is performed using the correct libraries, so the verification no longer fails when creating the rescue image. As a result, you can now use NetBackup as a backup mechanism with ReaR. Note that this applies only for NetBackup versions prior to NetBackup 8.0.0. 
Note that it is currently impossible to use NetBackup 8.0.0 and later versions due to other unresolved problems. Creation of a rescue image in cases with large number of multipath devices now proceeds faster. Scanning of devices has been improved in the following ways: Scanning uses caching to avoid querying the multipath devices multiple times. Scanning queries only device-mapper devices for device-mapper specific information. Scanning avoids collecting information about FibreChannel devices. Several bugs in ReaR affecting complex network configurations have been fixed: The Link Aggregation Control Protocol (LACP) configuration is now correctly restored in the rescue system in cases when teaming, or bonding with the SIMPLIFY_BONDING option, is used together with LACP. ReaR now correctly restores the configuration of the interface in the rescue system in cases when a network interface is renamed from the standard name, such as ethX , to a custom name. ReaR has been fixed to record a correct MAC address of the network interfaces in cases when bonding or teaming is used. ReaR has been fixed to correctly report errors when saving the rescue image. Previously, such errors resulted only in creation of unusable rescue images. As a result of the fix, ReaR now fails in such cases, so the problem can be properly investigated. The computation of disk layout for disks with a logical sector size different from 512 bytes has been fixed. ReaR now properly sets the bootlist during a restore on IBM Power Systems that use more than one bootable disk. ReaR now properly excludes its temporary directory from backup when an alternate temporary directory is specified using the TMPDIR environment variable. ReaR now depends on the xorriso packages instead of on the genisoimage package for ISO image generation. This makes it possible to create an image with a file larger than 4 GB, which occurs especially when creating an image with an embedded backup. (BZ#1652828, BZ#1652853 , BZ#1631183 , BZ#1610638, BZ#1426341 , BZ#1655956 , BZ#1462189 , BZ#1700807 ) tuned rebased to version 2.11 The tuned packages have been upgraded to upstream version 2.11, which provides a number of bug fixes and enhancements over the version, notably: Support for boot loader specification (BLS) has been added. (BZ#1576435) The mssql profile has been updated. (BZ#1660178) The virtual-host profile has been updated. (BZ#1569375) A range feature for CPU exclusion has been added. (BZ#1533908) Profile configuration now automatically reloads when the tuned service detects the hang-up signal (SIGHUP). (BZ#1631744) For full list of changes see the upstream git log: https://github.com/redhat-performance/tuned/commits/v2.11.0 ( BZ#1643654 ) New packages: xorriso Xorriso is a program for creating and manipulating ISO 9660 images, and for writing CD-ROMs or DVD-ROMs. The program includes the xorrisofs command, which is a recommended replacement for the genisoimage utility. The xorrisofs command has a compatible interface with genisoimage , and provides multiple enhancements over genisoimage . For example, with xorrisofs , maximum file size is no longer limited to 4 GB. Xorriso is suitable for backups, and it is used by Relax-and-Recover (ReaR), a recovery and system migration utility. (BZ#1638857) 4.12. 
Storage Support for Data Integrity Field/Data Integrity Extension (DIF/DIX) DIF/DIX is supported on configurations where the hardware vendor has qualified it and provides full support for the particular host bus adapter (HBA) and storage array configuration on RHEL. DIF/DIX is not supported on the following configurations: It is not supported for use on the boot device. It is not supported on virtualized guests. Red Hat does not support using the Automatic Storage Management library (ASMLib) when DIF/DIX is enabled. DIF/DIX is enabled or disabled at the storage device, which involves various layers up to (and including) the application. The method for activating the DIF on storage devices is device-dependent. For further information on the DIF/DIX feature, see What is DIF/DIX . (BZ#1649493) New scan_lvs configuration setting A new lvm.conf configuration file setting, scan_lvs , has been added and set to 0 by default. The new default behavior stops LVM from looking for PVs that may exist on top of LVs; that is, it will not scan active LVs for more PVs. The default setting also prevents LVM from creating PVs on top of LVs. Layering PVs on top of LVs can occur by way of VM images placed on top of LVs, in which case it is not safe for the host to access the PVs. Avoiding this unsafe access is the primary reason for the new default behavior. Also, in environments with many active LVs, the amount of device scanning done by LVM can be significantly decreased. The behavior can be restored by changing this setting to 1. ( BZ#1674563 ) 4.13. System and Subscription Management The web console rebased to version 195 The web console, provided by the cockpit packages, has been upgraded to version 195, which provides a number of new features and bug fixes. The cockpit packages distributed in the Base channel of RHEL 7 include the following features: You can now open individual ports for services in the firewall. The firewall page now enables adding and removing firewall zones and adding services to a specific zone. Cockpit can now help you with enabling certain security vulnerability mitigations, starting with the disabling SMT (Simultaneous Multi-Threading) option. The cockpit packages distributed in the Extras channel of RHEL 7 have been updated to version 151.1, which provides the following additional features: You can now add an iSCSI direct target as a storage pool for your virtual machines. Notifications about virtual machines have been streamlined and use a common presentation now. You can select encryption type separately from the file system. With this update, support for the Internet Explorer browser has been removed from the RHEL 7 web console. Attempting to open the web console in Internet Explorer now displays an error screen with a list of recommended browsers that can be used instead. ( BZ#1712833 ) 4.14. Virtualization virt-v2v can now convert SUSE Linux VMs You can now use the virt-v2v utility to convert virtual machines (VMs) that use SUSE Linux Enterprise Server (SLES) and SUSE Linux Enterprise Desktop (SLED) guest operating systems (OSs) from non-KVM hypervisors to KVM. Note that the conversion is only supported for SLES or SLED guest OSs version 11 Service Pack 4 or later. In addition, SLES 11 and SLED 11 VMs that use X graphics need to be re-adjusted after the conversion for the graphics to work properly. To do so, use the sax2 distribution tool in the guest OS after the migration is finished. 
(BZ#1463620) virt-v2v can now use vmx configuration files to convert VMware guests The virt-v2v utility now includes the vmx input mode, which enables the user to convert a guest virtual machine from a VMware vmx configuration file. Note that to do this, you also need access to the corresponding VMware storage, for example by mounting the storage using NFS. It is also possible to access the storage using SSH, by adding the -it ssh parameter. ( BZ#1441197 ) virt-v2v converts VMWare guests faster and more reliably The virt-v2v utility can now use the VMWare Virtual Disk Development Kit (VDDK) to convert a VMWare guest virtual machine to a KVM guest. This enables virt-v2v to connect directly to the VMWare ESXi hypervisor, which improves the speed and reliability of the conversion. Note that this conversion import method requires the external nbdkit utility and its VDDK plug-in. (BZ#1477912) virt-v2v can convert UEFI guests for RHV Using the virt-v2v utility, it is now possible to convert virtual machines that use the UEFI firmware to run in Red Hat Virtualization (RHV). ( BZ#1509931 ) virt-v2v removes VMware Tools more reliably This update makes it more likely that the virt-v2v utility automatically attempts to remove VMware Tools software from a VMware virtual machine that virt-v2v is converting to KVM. Notably, virt-v2v now attempts to remove VMWare Tools in the following scenarios: When converting Windows virtual machines. When VMMware Tools were installed on a Linux virtual machine from a tarball. When WMware Tools were installed as open-vm-tools . ( BZ#1481930 ) 4.15. Atomic Host and Containers Red Hat Enterprise Linux Atomic Host is a secure, lightweight, and minimal-footprint operating system optimized to run Linux containers. 4.16. Red Hat Software Collections Red Hat Software Collections is a Red Hat content set that provides a set of dynamic programming languages, database servers, and related packages that you can install and use on all supported releases of Red Hat Enterprise Linux 7 on AMD64 and Intel 64 architectures, the 64-bit ARM architecture, IBM Z, and IBM POWER, little endian. Certain components are available also for all supported releases of Red Hat Enterprise Linux 6 on AMD64 and Intel 64 architectures. Red Hat Developer Toolset is designed for developers working on the Red Hat Enterprise Linux platform. It provides current versions of the GNU Compiler Collection, GNU Debugger, and other development, debugging, and performance monitoring tools. Red Hat Developer Toolset is included as a separate Software Collection. Dynamic languages, database servers, and other tools distributed with Red Hat Software Collections do not replace the default system tools provided with Red Hat Enterprise Linux, nor are they used in preference to these tools. Red Hat Software Collections uses an alternative packaging mechanism based on the scl utility to provide a parallel set of packages. This set enables optional use of alternative package versions on Red Hat Enterprise Linux. By using the scl utility, users can choose which package version they want to run at any time. Important Red Hat Software Collections has a shorter life cycle and support term than Red Hat Enterprise Linux. For more information, see the Red Hat Software Collections Product Life Cycle . See the Red Hat Software Collections documentation for the components included in the set, system requirements, known problems, usage, and specifics of individual Software Collections. 
See the Red Hat Developer Toolset documentation for more information about the components included in this Software Collection, installation, usage, known problems, and more.
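As a small illustration of the scl mechanism described above, the following sketch shows how a Software Collection package version can be run alongside the default system version. The collection name rh-python38 is only a hypothetical example and may not be installed on your system; substitute a collection that is actually present.

# Run a single command with a Software Collection enabled (collection name is a placeholder)
scl enable rh-python38 'python --version'

# Start an interactive shell in which the collection's binaries take precedence
scl enable rh-python38 bash

Because the collection is only enabled for the command or shell that scl starts, the default system tools remain unchanged for all other processes.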
[ "/usr/lib64/nss/unsupported-tools/listsuites | grep -B1 --no-group-separator \"Enabled\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.7_release_notes/new_features
Chapter 22. consumer
Chapter 22. consumer This chapter describes the commands under the consumer command. 22.1. consumer create Create new consumer Usage: Table 22.1. Command arguments Value Summary -h, --help Show this help message and exit --description <description> New consumer description Table 22.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 22.3. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 22.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 22.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 22.2. consumer delete Delete consumer(s) Usage: Table 22.6. Positional arguments Value Summary <consumer> Consumer(s) to delete Table 22.7. Command arguments Value Summary -h, --help Show this help message and exit 22.3. consumer list List consumers Usage: Table 22.8. Command arguments Value Summary -h, --help Show this help message and exit Table 22.9. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 22.10. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 22.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 22.12. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 22.4. consumer set Set consumer properties Usage: Table 22.13. Positional arguments Value Summary <consumer> Consumer to modify Table 22.14. Command arguments Value Summary -h, --help Show this help message and exit --description <description> New consumer description 22.5. consumer show Display consumer details Usage: Table 22.15. Positional arguments Value Summary <consumer> Consumer to display Table 22.16. Command arguments Value Summary -h, --help Show this help message and exit Table 22.17. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 22.18. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 22.19. 
Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 22.20. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
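The following is a brief usage sketch that ties the subcommands above together. The description text and output format are arbitrary examples, and <consumer-id> is a placeholder for the ID returned by the create or list commands.

# Create a consumer with a description and print the result as JSON
openstack consumer create --description "billing integration" -f json

# List existing consumers
openstack consumer list

# Update the description of an existing consumer
openstack consumer set --description "billing integration v2" <consumer-id>

# Remove the consumer when it is no longer needed
openstack consumer delete <consumer-id>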
[ "openstack consumer create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>]", "openstack consumer delete [-h] <consumer> [<consumer> ...]", "openstack consumer list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]", "openstack consumer set [-h] [--description <description>] <consumer>", "openstack consumer show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <consumer>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/command_line_interface_reference/consumer
Chapter 39. Downloading and installing the headless Process Automation Manager controller
Chapter 39. Downloading and installing the headless Process Automation Manager controller You can configure KIE Server to run in managed or unmanaged mode. If KIE Server is unmanaged, you must manually create and maintain KIE containers (deployment units). If KIE Server is managed, the Process Automation Manager controller manages the KIE Server configuration and you interact with the Process Automation Manager controller to create and maintain KIE containers. The Process Automation Manager controller is integrated with Business Central. If you install Business Central, use the Execution Server page to create and maintain KIE containers. However, if you do not install Business Central, you can install the headless Process Automation Manager controller and use the REST API or the KIE Server Java Client API to interact with it. Prerequisites The Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) file has been downloaded, as described in Chapter 34, Downloading the Red Hat Process Automation Manager installation files . A Red Hat JBoss Web Server 5.5.1 server installation is available. The base directory of the Red Hat JBoss Web Server installation is referred to as JWS_HOME . Sufficient user permissions to complete the installation are granted. Procedure Extract the rhpam-7.13.5-add-ons.zip file. The rhpam-7.13.5-controller-jws.zip file is in the extracted directory. Extract the rhpam-7.13.5-controller-jws.zip archive to a temporary directory. In the following examples this directory is called TEMP_DIR . Copy the TEMP_DIR /rhpam-7.13.5-controller-jws.zip/controller.war directory to the JWS_HOME /tomcat/webapps directory. Note Ensure the names of the Red Hat Process Automation Manager deployments you copy do not conflict with your existing deployments in the Red Hat JBoss Web Server instance. Remove the .war extensions from the controller.war folder. Copy the contents of the TEMP_DIR /rhpam-7.13.5-controller-jws/SecurityPolicy/ directory to JWS_HOME /bin When prompted to overwrite files, select Yes . Add the kie-server role and user to the JWS_HOME /tomcat/conf/tomcat-users.xml file. In the following example, <USER_NAME> and <PASSWORD> are the user name and password of your choice: Complete one of the following tasks in the JWS_HOME /tomcat/bin directory of the instance running KIE Server: On Linux or UNIX, create the setenv.sh file with the following content: On Windows, add the following content to the setenv.bat file: In the preceding examples, replace the following variables: Replace <CONTROLLER_USER> and <CONTROLLER_PWD> with the user name and password for the kie-server role that you defined earlier in this procedure. Replace <KIE_SERVER_ID> with a unique identifier. Replace <CONTROLLER_HOST>:<CONTROLLER_PORT> with the IP address (host and port) of the controller. If you use the same server for KIE Server and the controller, <CONTROLLER_HOST>:<CONTROLLER_PORT> is localhost:8080 . 
In the JWS_HOME /tomcat/bin directory of the instance running the headless Process Automation Manager controller, create a readable setenv.sh file with the following content, where <USERNAME> is the KIE Server user and <USER_PWD> is the password for that user: CATALINA_OPTS="-Dorg.kie.server.user=<USERNAME> -Dorg.kie.server.pwd=<USER_PWD>" To start the headless Process Automation Manager controller, enter one of the following commands in the JWS_HOME /tomcat/bin directory: On Linux or UNIX-based systems: USD ./startup.sh On Windows: startup.bat After a few minutes, review the JWS_HOME /tomcat/logs directory and correct any errors. To verify that the headless Process Automation Manager controller is working correctly, enter http://<CONTROLLER_HOST>:<CONTROLLER_PORT>/controller/rest/controller/management/servers in a web browser. If you use the same server for KIE Server and the controller, <CONTROLLER_HOST>:<CONTROLLER_PORT> is localhost:8080 . Enter the user name and password stored in the tomcat-users.xml file.
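Besides opening the URL in a browser, you can query the same REST endpoint from the command line. The following minimal sketch assumes the controller runs on localhost:8080 and uses the kie-server user defined in tomcat-users.xml; curl is not part of the product and is shown only as a convenient client.

# Query the headless controller for the list of managed KIE Servers
curl -u <USER_NAME>:<PASSWORD> \
  -H "Accept: application/json" \
  http://localhost:8080/controller/rest/controller/management/servers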
[ "<role rolename=\"kie-server\"/> <user username=\"<USER_NAME>\" password=\"<PASSWORD>\" roles=\"kie-server\"/>", "CATALINA_OPTS=\"-Xmx1024m -Dorg.jboss.logging.provider=jdk -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTROLLER_PWD> -Dorg.kie.server.id=<KIE_SERVER_ID> -Dorg.kie.server.location=http://<HOST>:<PORT>/kie-server/services/rest/server -Dorg.kie.server.controller=http://<HOST>:<PORT>/controller/rest/controller\"", "set CATALINA_OPTS=-Xmx1024m -Dorg.jboss.logging.provider=jdk -Dorg.kie.server.controller.user=<CONTROLLER_USER> -Dorg.kie.server.controller.pwd=<CONTROLLER_PWD> -Dorg.kie.server.id=<KIE_SERVER_ID> -Dorg.kie.server.location=http://<HOST>:<PORT>/kie-server/services/rest/server -Dorg.kie.server.controller=http://<HOST>:<PORT>/controller/rest/controller", "./startup.sh", "startup.bat" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/controller-jws-install-proc_install-on-jws
Chapter 17. Upgrading to OpenShift Data Foundation
Chapter 17. Upgrading to OpenShift Data Foundation 17.1. Overview of the OpenShift Data Foundation update process This chapter helps you to upgrade between the minor releases and z-streams for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. You can upgrade OpenShift Data Foundation and its components, either between minor releases like 4.16 and 4.17, or between z-stream updates like 4.16.0 and 4.16.1 by enabling automatic updates (if not done so during operator installation) or performing manual updates. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic. Extended Update Support (EUS) EUS to EUS upgrade in OpenShift Data Foundation is sequential and it is aligned with OpenShift upgrade. For more information, see Performing an EUS-to-EUS update and EUS-to-EUS update for layered products and Operators installed through Operator Lifecycle Manager . For EUS upgrade of OpenShift Container Platform and OpenShift Data Foundation, make sure that OpenShift Data Foundation is upgraded along with OpenShift Container Platform and compatibility between OpenShift Data Foundation and OpenShift Container Platform is always maintained. Example workflow of EUS upgrade: Pause the worker machine pools. Update OpenShift <4.y> OpenShift <4.y+1>. Update OpenShift Data Foundation <4.y> OpenShift Data Foundation <4.y+1>. Update OpenShift <4.y+1> OpenShift <4.y+2>. Update to OpenShift Data Foundation <4.y+2>. Unpause the worker machine pools. Note You can update to ODF <4.y+2> either before or after worker machine pools are unpaused. Important When you update OpenShift Data Foundation in external mode, make sure that the Red Had Ceph Storage and OpenShift Data Foundation versions are compatible. For more information about supported Red Had Ceph Storage version in external mode, refer to Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Provide the required OpenShift Data Foundation version in the checker to see the supported Red Had Ceph version corresponding to the version in use. You also need to upgrade the different parts of Red Hat OpenShift Data Foundation in the following order for both internal and external mode deployments: Update OpenShift Container Platform according to the Updating clusters documentation for OpenShift Container Platform. Update Red Hat OpenShift Data Foundation. To prepare a disconnected environment for updates , see Operators guide to using Operator Lifecycle Manager on restricted networks to be able to update OpenShift Data Foundation as well as Local Storage Operator when in use. For updating between minor releases , see Updating Red Hat OpenShift Data Foundation 4.14 to 4.15 . For updating between z-stream releases , see Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y . For updating external mode deployments , you must also perform the steps from section Updating the Red Hat OpenShift Data Foundation external secret . If you use local storage, then update the Local Storage operator . See Checking for Local Storage Operator deployments if you are unsure. Important If you have an existing setup of OpenShift Data Foundation 4.12 with disaster recovery (DR) enabled, ensure to update all your clusters in the environment at the same time and avoid updating a single cluster. This is to avoid any potential issues and maintain best compatibility. 
It is also important to maintain consistency across all OpenShift Data Foundation DR instances. Update considerations Review the following important considerations before you begin. The Red Hat OpenShift Container Platform version is the same as Red Hat OpenShift Data Foundation. See the Interoperability Matrix for more information about supported combinations of OpenShift Container Platform and Red Hat OpenShift Data Foundation. To know whether your cluster was deployed in internal or external mode, refer to the knowledgebase article on How to determine if ODF cluster has storage in internal or external mode . The Local Storage Operator is fully supported only when the Local Storage Operator version matches the Red Hat OpenShift Container Platform version. In OpenShift Data Foundation clusters with disaster recovery (DR) enabled, during upgrade to version 4.18, bluestore-rdr OSDs are migrated to bluestore OSDs. bluestore backed OSDs now provide the same improved performance of bluestore-rdr based OSDs, which is important when the cluster is required to be used for Regional Disaster Recovery. During upgrade you can view the status of the OSD migration. In the OpenShift Web Console, navigate to Storage Data Foundation Storage System . In the Activity card of the Block and File tab you can view ongoing activities. Migrating cluster OSDs shows the status of the migration from bluestore-rdr to bluestore . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means if NooBaa DB PVC gets corrupted and we are unable to recover it, can result in total data loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgabase article . 17.2. Updating Red Hat OpenShift Data Foundation 4.17 to 4.18 This chapter helps you to upgrade between the minor releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. You must upgrade Red Hat Ceph Storage along with OpenShift Data Foundation to get new feature support, security fixes, and other bug fixes. As there is no dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. For more information about RHCS releases, see the knowledgebase solution, solution . Important Upgrading to 4.18 directly from any version older than 4.17 is not supported. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.18.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. 
Check for the green tick on the status card of both Overview - Block and File and Object tabs. Green tick indicates that the storage cluster , object service and data resiliency are all healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Optional: To reduce the upgrade time for large clusters that are using CSI plugins, make sure to tune the following parameters in the rook-ceph-operator-config configmap to a higher count or percentage. CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE Note By default, the rook-ceph-operator-config configmap is empty and you need to add the data key. This affects CephFS and CephRBD daemonsets and allows the pods to restart simultaneously or be unavailable and reduce the upgrade time. For an optimal value, you can set the parameter values to 20%. However, if the value is too high, disruption for new volumes might be observed during the upgrade. Prerequisite relevant only for OpenShift Data Foundation deployments on AWS using AWS Security Token Service (STS) Add another entry in the trust policy for noobaa-core account as follows: Log into AWS web console where the AWS role resides using http://console.aws.amazon.com/ . Enter the IAM management tool and click Roles . Find the name of the role created for AWS STS to support Multicloud Object Gateway (MCG) authentication using the following command in OpenShift CLI: Search for the role name that you obtained from the step in the tool and click on the role name. Under the role summary, click Trust relationships . In the Trusted entities tab, click Edit trust policy on the right. Under the "Action": "sts:AssumeRoleWithWebIdentity" field, there are two fields to enable access for two NooBaa service accounts noobaa and noobaa-endpoint . Add another entry for the core pod's new service account name, system:serviceaccount:openshift-storage:noobaa-core . Click Update policy at the bottom right of the page. The update might take about 5 minutes to get in place. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Click the OpenShift Data Foundation operator name. Click the Subscription tab and click the link under Update Channel . Select the stable-4.18 update channel and Save it. If the Upgrade status shows requires approval , click on requires approval . On the Install Plan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Navigate to Operators Installed Operators . Select the openshift-storage project. Wait for the OpenShift Data Foundation Operator Status to change to Up to date . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. 
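If you prefer to track the operator rollout from the command line instead of the web console, the following is a minimal sketch; the exact ClusterServiceVersion name and version number will differ in your cluster.

# Check that the OpenShift Data Foundation ClusterServiceVersion has reached the Succeeded phase
oc get csv -n openshift-storage

# Confirm that all pods in the openshift-storage namespace are back in the Running state
oc get pods -n openshift-storage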
Note After upgrading, if your cluster has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview- Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. If verification steps fail, contact Red Hat Support . Important After updating external mode deployments, you must also update the external secret. For instructions, see Updating the OpenShift Data Foundation external secret . Additional Resources If you face any issues while updating OpenShift Data Foundation, see the Commonly required logs for troubleshooting section in the Troubleshooting guide . 17.3. Updating Red Hat OpenShift Data Foundation 4.17.x to 4.17.y This chapter helps you to upgrade between the z-stream release for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-Attached and External). The upgrade process remains the same for all deployments. The Only difference is what gets upgraded and what's not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since we do not have a strong dependency on RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first followed by RHCS upgrade or vice-versa. See solution to know more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual then use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.17.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy. 
Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows require approval , click on requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. Green tick indicates that the storage cluster, object service and data resiliency is healthy If verification steps fail, contact Red Hat Support . 17.4. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy to Automatic . Changing the update approval strategy to Manual will need manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click on OpenShift Data Foundation operator name Go to the Subscription tab. Click on the pencil icon for changing the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
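The approval strategy can also be changed from the command line by editing the installPlanApproval field of the operator subscription. The following is a hedged sketch only: the subscription name shown (odf-operator) is a placeholder, so list the subscriptions first and use the name reported in your cluster.

# Find the name of the OpenShift Data Foundation subscription
oc get subscription -n openshift-storage

# Set the update approval strategy to Automatic (use Manual to require approval for each upgrade)
oc patch subscription odf-operator -n openshift-storage --type merge \
  -p '{"spec":{"installPlanApproval":"Automatic"}}'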
[ "oc get deployment noobaa-operator -o yaml -n openshift-storage | grep ROLEARN -A1 value: arn:aws:iam::123456789101:role/your-role-name-here" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/upgrading-your-cluster_osp
11.5. Naming Scheme for VLAN Interfaces
11.5. Naming Scheme for VLAN Interfaces Traditionally, VLAN interface names use the format interface-name . VLAN-ID . The VLAN-ID ranges from 0 to 4095 , which is a maximum of four characters, and the total interface name has a limit of 15 characters. The maximum interface name length is defined by the kernel headers and is a global limit that affects all applications. In Red Hat Enterprise Linux 7, four naming conventions for VLAN interface names are supported: VLAN plus VLAN ID The word vlan plus the VLAN ID. For example: vlan0005 VLAN plus VLAN ID without padding The word vlan plus the VLAN ID, without padding it with leading zeros. For example: vlan5 Device name plus VLAN ID The name of the parent interface plus the VLAN ID. For example: enp1s0.0005 Device name plus VLAN ID without padding The name of the parent interface plus the VLAN ID, without padding it with leading zeros. For example: enp1s0.5
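As a small illustration of the device-name-plus-ID convention, the following sketch creates VLAN 5 on a parent interface with the ip utility. The interface names are examples only, and a persistent configuration would normally be created with NetworkManager or an ifcfg file rather than with ad hoc ip commands.

# Create a VLAN interface named enp1s0.5 on top of enp1s0 (VLAN ID 5)
ip link add link enp1s0 name enp1s0.5 type vlan id 5

# Bring the new VLAN interface up
ip link set dev enp1s0.5 up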
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-naming_scheme_for_vlan_interfaces
20.37. Managing Virtual Networks
20.37. Managing Virtual Networks This section covers managing virtual networks with the virsh command. To list virtual networks: This command generates output similar to: To view network information for a specific virtual network: This displays information about a specified virtual network in XML format: Other virsh commands used in managing virtual networks are: virsh net-autostart network-name : Marks a network-name to be started automatically when the libvirt daemon starts. The --disable option un-marks the network-name . virsh net-create XMLfile : Starts a new (transient) network using an XML definition from an existing file. virsh net-define XMLfile : Defines a new network using an XML definition from an existing file without starting it. virsh net-destroy network-name : Destroys a network specified as network-name . virsh net-name networkUUID : Converts a specified networkUUID to a network name. virsh net-uuid network-name : Converts a specified network-name to a network UUID. virsh net-start nameOfInactiveNetwork : Starts an inactive network. virsh net-undefine nameOfInactiveNetwork : Removes the inactive XML definition of a network. This has no effect on the network state. If the domain is running when this command is executed, the network continues running. However, the network becomes transient instead of persistent. libvirt has the capability to define virtual networks which can then be used by domains and linked to actual network devices. For more detailed information about this feature see the documentation at libvirt upstream website . Many of the commands for virtual networks are similar to the ones used for domains, but the way to name a virtual network is either by its name or UUID. 20.37.1. Autostarting a Virtual Network The virsh net-autostart command configures a virtual network to be started automatically when the guest virtual machine boots. This command accepts the --disable option, which disables the autostart command. 20.37.2. Creating a Virtual Network from an XML File The virsh net-create command creates a virtual network from an XML file. To get a description of the XML network format used by libvirt , see the libvirt upstream website . In this command file is the path to the XML file. To create the virtual network from an XML file, run: 20.37.3. Defining a Virtual Network from an XML File The virsh net-define command defines a virtual network from an XML file, the network is just defined but not instantiated. 20.37.4. Stopping a Virtual Network The virsh net-destroy command destroys (stops) a given virtual network specified by its name or UUID. This takes effect immediately. To stop the specified network network is required. 20.37.5. Creating a Dump File The virsh net-dumpxml command outputs the virtual network information as an XML dump to stdout for the specified virtual network. If --inactive is specified, physical functions are not expanded into their associated virtual functions. 20.37.6. Editing a Virtual Network's XML Configuration File The following command edits the XML configuration file for a network: The editor used for editing the XML file can be supplied by the USDVISUAL or USDEDITOR environment variables, and defaults to vi . 20.37.7. Getting Information about a Virtual Network The virsh net-info returns basic information about the network object. 20.37.8. Listing Information about a Virtual Network The virsh net-list command returns the list of active networks. If --all is specified this will also include defined but inactive networks. 
If --inactive is specified only the inactive ones will be listed. You may also want to filter the returned networks by --persistent to list the persistent ones, --transient to list the transient ones, --autostart to list the ones with autostart enabled, and --no-autostart to list the ones with autostart disabled. Note: When talking to older servers, this command is forced to use a series of API calls with an inherent race, where a pool might not be listed or might appear more than once if it changed state between calls while the list was being collected. Newer servers do not have this problem. To list the virtual networks, run: 20.37.9. Converting a Network UUID to Network Name The virsh net-name command converts a network UUID to network name. 20.37.10. Converting a Network Name to Network UUID The virsh net-uuid command converts a network name to network UUID. 20.37.11. Starting a Previously Defined Inactive Network The virsh net-start command starts a (previously defined) inactive network. 20.37.12. Undefining the Configuration for an Inactive Network The virsh net-undefine command undefines the configuration for an inactive network. 20.37.13. Updating an Existing Network Definition File The virsh net-update command updates a specified section of an existing network definition by issuing one of the following directives to the section: add-first add-last or add (these are synonymous) delete modify The section can be one of the following: bridge domain ip ip-dhcp-host ip-dhcp-range forward forward interface forward-pf portgroup dns-host dns-txt dns-srv Each section is named by a concatenation of the XML element hierarchy leading to the element that is changed. For example, ip-dhcp-host changes a <host> element that is contained inside a <dhcp> element inside an <ip> element of the network. XML is either the text of a complete XML element of the type being changed (for instance, <host mac="00:11:22:33:44:55' ip='1.2.3.4'/> ), or the name of a file that contains a complete XML element. Disambiguation is done by looking at the first character of the provided text - if the first character is < , it is XML text, if the first character is not > , it is the name of a file that contains the xml text to be used. The --parent-index option is used to specify which of several parent elements the requested element is in (0-based). For example, a dhcp <host> element could be in any one of multiple <ip> elements in the network; if a parent-index is not provided, the most appropriate <ip> element will be selected (usually the only one that already has a <dhcp> element), but if --parent-index is given, that particular instance of <ip> will get the modification. If --live is specified, affect a running network. If --config is specified, affect the startup of a persistent network. If --current is specified, affect the current network state. Both --live and --config flags may be given, but --current is exclusive. Not specifying any flag is the same as specifying --current . 20.37.14. Migrating Guest Virtual Machines with virsh Information on migration using virsh is located in the section entitled Live KVM Migration with virsh See Section 15.5, "Live KVM Migration with virsh" 20.37.15. Setting a Static IP Address for the Guest Virtual Machine In cases where a guest virtual machine is configured to acquire its IP address from DHCP, but you still need it to have a predictable static IP address, you can use the following procedure to modify the DHCP server configuration used by libvirt . 
This procedure requires that you know the MAC address of the guest interface in order to make this change. Therefore, you will need to perform the operation after the guest has been created, or decide on a MAC address for the guest prior to creating it, and then set this same address manually when creating the guest virtual machine. In addition, you should note that this procedure only works for guest interfaces that are connected to a libvirt virtual network with a forwarding mode of "nat" , "route" , or no forwarding mode at all. This procedure will not work if the network has been configured with forward mode="bridge" or "hostdev" . In those cases, the DCHP server is located elsewhere on the network, and is therefore not under control of libvirt. In this case the static IP entry would need to be made on the remote DHCP server. To do that see the documentation that is supplied with the server. Procedure 20.5. Setting a static IP address This procedure is performed on the host physical machine. Check the guest XML configuration file Display the guest's network configuration settings by running the virsh domiflist guest1 command. Substitute the name of your virtual machine in place of guest1 . A table is displayed. Look in the Source column. That is the name of your network. In this example the network is called default. This name will be used for the rest of the procedure as well as the MAC address. Verify the DHCP range The IP address that you set must be within the dhcp range that is specified for the network. In addition, it must also not conflict with any other existing static IP addresses on the network. To check the range of addresses available as well as addresses used, use the following command on the host machine: The output you see will differ from the example and you may see more lines and multiple host mac lines. Each guest static IP address will have one line. Set a static IP address Use the following command on the host machine, and replace default with the name of the network. The --live option allows this change to immediately take place and the --config option makes the change persistent. This command will also work for guest virtual machines that you have not yet created as long as you use a valid IP and MAC address. The MAC address should be a valid unicast MAC address (6 hexadecimal digit pairs separated by : , with the first digit pair being an even number); when libvirt creates a new random MAC address, it uses 52:54:00 for the first three digit pairs, and it is recommended to follow this convention. Restart the interface (optional) If the guest virtual machine is currently running, you will need to force the guest virtual machine to re-request a DHCP address. If the guest is not running, the new IP address will be implemented the time you start it. To restart the interface, enter the following commands on the host machine: This command makes the guest virtual machine's operating system think that the Ethernet cable has been unplugged, and then re-plugged after ten seconds. The sleep command is important because many DHCP clients allow for a short disconnect of the cable without re-requesting the IP address. Ten seconds is long enough so that the DHCP client forgets the old IP address and will request a new one once the up command is executed. If for some reason this command fails, you will have to reset the guest's interface from the guest operating system's management interface.
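Putting several of the commands above together, the following sketch defines a new persistent network from an XML file, starts it, and marks it for autostart. The file name and network name are placeholders, and the XML file must follow the libvirt network format referenced earlier in this section.

# Define (but do not start) a persistent network from an XML description
virsh net-define /tmp/mynet.xml

# Start the newly defined network and have it start automatically with the libvirt daemon
virsh net-start mynet
virsh net-autostart mynet

# Confirm the network is active and set to autostart
virsh net-list --all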
[ "virsh net-list", "virsh net-list Name State Autostart ----------------------------------------- default active yes vnet1 active yes vnet2 active yes", "virsh net-dumpxml NetworkName", "virsh net-dumpxml vnet1 <network> <name>vnet1</name> <uuid>98361b46-1581-acb7-1643-85a412626e70</uuid> <forward dev='eth0'/> <bridge name='vnet0' stp='on' forwardDelay='0' /> <ip address='192.168.100.1' netmask='255.255.255.0'> <dhcp> <range start='192.168.100.128' end='192.168.100.254' /> </dhcp> </ip> </network>", "virsh net-autostart network [ --disable ]", "virsh net-create file", "virsh net-define file", "virsh net-destroy network", "virsh net-dumpxml network [ --inactive ]", "virsh net-edit network", "virsh net-info network", "virsh net-list [ --inactive | --all ] [ --persistent ] [< --transient >] [--autostart] [< --no-autostart >]", "virsh net-name network-UUID", "virsh net-uuid network-name", "virsh net-start network", "virsh net-undefine network", "virsh net-update network directive section XML [--parent-index index] [[--live] [--config] | [--current]]", "virsh domiflist guest1 Interface Type Source Model MAC ------------------------------------------------------- vnet4 network default virtio 52:54:00:48:27:1D", "virsh net-dumpxml default | egrep 'range|host\\ mac' <range start='198.51.100.2' end='198.51.100.254'/> <host mac='52:54:00:48:27:1C:1D' ip='198.51.100.2'/>", "virsh net-update default add ip-dhcp-host '<host mac=\"52:54:00:48:27:1D\" ip=\"198.51.100.3\"/>' --live --config", "virsh domif-setlink guest1 52:54:00:48:27:1D down sleep 10 virsh domif-setlink guest1 52:54:00:48:27:1D up" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guest_virtual_machines_with_virsh-managing_virtual_networks
9.3. Joystick support
9.3. Joystick support Joystick device support is not enabled by default, because the Red Hat Enterprise Linux 6 kernel no longer provides the joystick module.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-system_monitoring_and_kernel-joystick_support
Chapter 26. Kubernetes NMState
Chapter 26. Kubernetes NMState 26.1. About the Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node's network interfaces to the API server. Important Red Hat supports the Kubernetes NMState Operator in production environments on bare-metal, IBM Power, IBM Z, and LinuxONE installations. Warning When using OVN-Kubernetes, changing the default gateway interface is not supported. Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator. Note The Kubernetes NMState Operator updates the network configuration of a secondary NIC. It cannot update the network configuration of the primary NIC or the br-ex bridge. OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster. Node networking is monitored and updated by the following objects: NodeNetworkState Reports the state of the network on that node. NodeNetworkConfigurationPolicy Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. NodeNetworkConfigurationEnactment Reports the network policies enacted upon each node. 26.1.1. Installing the Kubernetes NMState Operator You can install the Kubernetes NMState Operator by using the web console or the CLI. 26.1.1.1. Installing the Kubernetes NMState Operator using the web console You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Prerequisites You are logged in as a user with cluster-admin privileges. Procedure Select Operators OperatorHub . In the search field below All Items , enter nmstate and click Enter to search for the Kubernetes NMState Operator. Click on the Kubernetes NMState Operator search result. Click on Install to open the Install Operator window. Click Install to install the Operator. After the Operator finishes installing, click View Operator . Under Provided APIs , click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate . In the Name field of the dialog box, ensure the name of the instance is nmstate. Note The name restriction is a known issue. The instance is a singleton for the entire cluster. Accept the default settings and click Create to create the instance. Summary Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. 26.1.1.2. Installing the Kubernetes NMState Operator using the CLI You can install the Kubernetes NMState Operator by using the OpenShift CLI ( oc) . After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. 
Procedure Create the nmstate Operator namespace: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: labels: kubernetes.io/metadata.name: openshift-nmstate name: openshift-nmstate name: openshift-nmstate spec: finalizers: - kubernetes EOF Create the OperatorGroup : USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: NMState.v1.nmstate.io name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF Subscribe to the nmstate Operator: USD cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/kubernetes-nmstate-operator.openshift-nmstate: "" name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Create instance of the nmstate operator: USD cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF Verification Confirm that the deployment for the nmstate operator is running: oc get clusterserviceversion -n openshift-nmstate \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase kubernetes-nmstate-operator.4.11.0-202208120157 Succeeded 26.2. Observing and updating the node network state and configuration 26.2.1. Viewing the network state of a node Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node. Procedure List all the NodeNetworkState objects in the cluster: USD oc get nns Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity: USD oc get nns node01 -o yaml Example output apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: ... interfaces: ... route-rules: ... routes: ... lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" 3 1 The name of the NodeNetworkState object is taken from the node. 2 The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes. 3 Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evalute the freshness of the report. 26.2.2. Managing policy by using the CLI 26.2.2.1. Creating an interface on nodes Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The manifest details the requested configuration for the interface. By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector. You can configure multiple nmstate-enabled nodes concurrently. The configuration applies to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable field. Procedure Create the NodeNetworkConfigurationPolicy manifest. 
The following example configures a Linux bridge on all worker nodes and configures the DNS resolver: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example, "10%" , or an absolute value (number), such as 3 . 5 Optional: Human-readable description for the interface. 6 Optional: Specifies the search and server settings for the DNS server. Create the node network policy: USD oc apply -f br1-eth1-policy.yaml 1 1 File name of the node network configuration policy manifest. Additional resources Example for creating multiple interfaces in the same policy Examples of different IP management methods in policies 26.2.3. Confirming node network policy updates on nodes A NodeNetworkConfigurationPolicy manifest describes your requested network configuration for nodes in the cluster. The node network policy includes your requested network configuration and the status of execution of the policy on the cluster as a whole. When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting. Procedure To confirm that a policy has been applied to the cluster, list the policies and their status: USD oc get nncp Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy: USD oc get nncp <policy> -o yaml Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster: USD oc get nnce Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration: USD oc get nnce <node>.<policy> -o yaml 26.2.4. Removing an interface from nodes You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent . Removing an interface from a node does not automatically restore the node network configuration to a state. If you want to restore the state, you will need to define that node network configuration in the policy. If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. 
To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address. Note Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, it only represents the requested configuration. Similarly, removing an interface does not delete the policy. Procedure Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Changing the state to absent removes the interface. 5 The name of the interface that is to be unattached from the bridge interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. Update the policy on the node and remove the interface: USD oc apply -f <br1-eth1-policy.yaml> 1 1 File name of the policy manifest. 26.2.5. Example policy configurations for different interfaces 26.2.5.1. Example: Linux bridge interface node network configuration policy Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bridge. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 Disables stp in this example. 11 The node NIC to which the bridge attaches. 26.2.5.2. Example: VLAN interface node network configuration policy Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a VLAN interface. It includes samples values that you must replace with your own information. 
apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a VLAN. 7 The requested state for the interface after creation. 8 The node NIC to which the VLAN is attached. 9 The VLAN tag. 26.2.5.3. Example: Bond interface node network configuration policy Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. Note OpenShift Container Platform only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad mode=5 balance-tlb mode=6 balance-alb The following YAML file is an example of a manifest for a bond interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bond. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 The driver mode for the bond. This example uses an active backup mode. 11 Optional: This example uses miimon to inspect the bond link every 140ms. 12 The subordinate node NICs in the bond. 13 Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default. 26.2.5.4. Example: Ethernet interface node network configuration policy Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface after creation.
8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 26.2.5.5. Example: Multiple interfaces in the same node network configuration policy You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest. The following example snippet creates a bond that is named bond10 across two NICs and a Linux bridge that is named br1 that connects to the bond. #... interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: port: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10 #... 26.2.6. Capturing the static IP of a NIC attached to a bridge Important Capturing the static IP of a NIC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 26.2.6.1. Example: Linux bridge interface node network configuration policy to inherit static IP address from the NIC attached to the bridge Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" capture: eth1-nic: interfaces.name=="eth1" 3 eth1-routes: routes.running.next-hop-interface=="eth1" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := "br1" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: "{{ capture.br1-routes.routes.running }}" 1 The name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 3 The reference to the node NIC to which the bridge attaches. 4 The type of interface. This example creates a bridge. 5 The IP address of the bridge interface. This value matches the IP address of the NIC which is referenced by the spec.capture.eth1-nic entry. 6 The node NIC to which the bridge attaches. Additional resources The NMPolicy project - Policy syntax 26.2.7. Examples: IP management The following example configuration snippets demonstrate different methods of IP management. These examples use the ethernet interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types. 26.2.7.1.
Static The following snippet statically configures an IP address on the Ethernet interface: ... interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true ... 1 Replace this value with the static IP address for the interface. 26.2.7.2. No IP address The following snippet ensures that the interface has no IP address: ... interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false ... 26.2.7.3. Dynamic host configuration The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS: ... interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true ... The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS: ... interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true ... 26.2.7.4. DNS Setting the DNS configuration is analogous to modifying the /etc/resolv.conf file. The following snippet sets the DNS configuration on the host. ... interfaces: 1 ... ipv4: ... auto-dns: false ... dns-resolver: config: search: - example.com - example.org server: - 8.8.8.8 ... 1 You must configure an interface with auto-dns: false or you must use static IP configuration on an interface in order for Kubernetes NMState to store custom DNS settings. Important You cannot use br-ex , an OVNKubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers. 26.2.7.5. Static routing The following snippet configures a static route and a static IP on interface eth1 . ... interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.0.2.251 1 prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254 ... 1 The static IP address for the Ethernet interface. 2 The next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface. 26.3. Troubleshooting node network configuration If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as: The configuration fails to be applied on the host. The host loses connection to the default gateway. The host loses connection to the API server. 26.3.1. Troubleshooting an incorrect node network configuration policy configuration You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you apply an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. In this example, a Linux bridge policy is applied to an example cluster that has three control plane nodes and three compute nodes. The policy fails to be applied because it references an incorrect interface. To find the error, investigate the available NMState resources. You can then update the policy with the correct configuration. Procedure Create a policy and apply it to your cluster.
The following example creates a simple bridge on the ens01 interface: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01 USD oc apply -f ens01-bridge-testfail.yaml Example output nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created Verify the status of the policy by running the following command: USD oc get nncp The output shows that the policy failed: Example output NAME STATUS ens01-bridge-testfail FailedToConfigure However, the policy status alone does not indicate if it failed on all nodes or a subset of nodes. List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, it suggests that the problem is with a specific node configuration. If the policy failed on all nodes, it suggests that the problem is with the policy. USD oc get nnce The output shows that the policy failed on all nodes: Example output NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure View one of the failed enactments and look at the traceback. The following command uses the output tool jsonpath to filter the output: USD oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}' This command returns a large traceback that has been edited for brevity: Example output error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' ... libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\n current_state.interfaces[ifname],\nlibnmstate.error.NmstateVerificationError: The NmstateVerificationError lists the desired policy configuration, the current configuration of the policy on the node, and the difference highlighting the parameters that do not match. 
In this example, the port is included in the difference , which suggests that the problem is the port configuration in the policy. To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node: USD oc get nns control-plane-1 -o yaml The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01 : Example output - ipv4: ... name: ens1 state: up type: ethernet Correct the error by editing the existing policy: USD oc edit nncp ens01-bridge-testfail ... port: - name: ens1 Save the policy to apply the correction. Check the status of the policy to ensure it updated successfully: USD oc get nncp Example output NAME STATUS ens01-bridge-testfail SuccessfullyConfigured The updated policy is successfully configured on all nodes in the cluster.
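When troubleshooting a similar failure, it can also help to list only the interface names that a node reports before you edit the policy, so that the corrected policy references NICs that actually exist. The following command is a minimal sketch that assumes the NodeNetworkState fields shown earlier in this section; replace the node name with one of your own nodes: USD oc get nns control-plane-1 -o jsonpath='{.status.currentState.interfaces[*].name}' If the NIC that you intend to reference, such as ens1 , does not appear in the output, correct the policy before applying it.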
[ "cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: labels: kubernetes.io/metadata.name: openshift-nmstate name: openshift-nmstate name: openshift-nmstate spec: finalizers: - kubernetes EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: NMState.v1.nmstate.io name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF", "cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/kubernetes-nmstate-operator.openshift-nmstate: \"\" name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF", "get clusterserviceversion -n openshift-nmstate -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase kubernetes-nmstate-operator.4.11.0-202208120157 Succeeded", "oc get nns", "oc get nns node01 -o yaml", "apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8", "oc apply -f br1-eth1-policy.yaml 1", "oc get nncp", "oc get nncp <policy> -o yaml", "oc get nnce", "oc get nnce <node>.<policy> -o yaml", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "oc apply -f <br1-eth1-policy.yaml> 1", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 
kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "# interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: port: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10 #", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" capture: eth1-nic: interfaces.name==\"eth1\" 3 eth1-routes: routes.running.next-hop-interface==\"eth1\" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := \"br1\" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: \"{{ capture.eth1-nic.interfaces.0.ipv4 }}\" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: \"{{ capture.br1-routes.routes.running }}\"", "interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true", "interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false", "interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true", "interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true", "interfaces: 1 ipv4: auto-dns: false dns-resolver: config: search: - example.com - example.org server: - 8.8.8.8", "interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.0.2.251 1 prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01", "oc apply -f ens01-bridge-testfail.yaml", "nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created", "oc get nncp", "NAME STATUS ens01-bridge-testfail FailedToConfigure", "oc get nnce", "NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure", "oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'", "error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: 
true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\\n current_state.interfaces[ifname],\\nlibnmstate.error.NmstateVerificationError:", "oc get nns control-plane-1 -o yaml", "- ipv4: name: ens1 state: up type: ethernet", "oc edit nncp ens01-bridge-testfail", "port: - name: ens1", "oc get nncp", "NAME STATUS ens01-bridge-testfail SuccessfullyConfigured" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/networking/kubernetes-nmstate
Chapter 6. Re-enabling accounts that reached the inactivity limit
Chapter 6. Re-enabling accounts that reached the inactivity limit If Directory Server inactivated an account because it reached the inactivity limit, an administrator can re-enable the account. 6.1. Re-enabling accounts inactivated by the Account Policy plug-in You can re-enable accounts using the dsidm account unlock command or by manually updating the lastLoginTime attribute of the inactivated user. Prerequisites An inactivated user account. Procedure Reactivate the account using one of the following methods: Using the dsidm account unlock command: # dsidm -D "cn=Directory manager" ldap://server.example.com -b " dc=example,dc=com " account unlock " uid=example,ou=People,dc=example,dc=com " By setting the lastLoginTime attribute of the user to a recent time stamp: # ldapmodify -H ldap://server.example.com -x -D " cn=Directory Manager " -W dn: uid=example,ou=People,dc=example,dc=com changetype: modify replace: lastLoginTime lastLoginTime: 20210901000000Z Verification Authenticate as the user that you have reactivated. For example, perform a search: # ldapsearch -H ldap://server.example.com -x -D " uid=example,ou=People,dc=example,dc=com " -W -b " dc=example,dc=com " -s base If the user can successfully authenticate, the account was reactivated.
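If you are not sure whether an outdated lastLoginTime value caused the inactivation, you can inspect the attribute before and after the update. The following ldapsearch call is a minimal sketch that reuses the sample entry and bind DN from this procedure: # ldapsearch -H ldap://server.example.com -x -D "cn=Directory Manager" -W -b "uid=example,ou=People,dc=example,dc=com" -s base lastLoginTime After reactivation, the search returns the recent time stamp that you set with ldapmodify or that the Account Policy plug-in records at the user's next successful login.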
[ "dsidm -D \"cn=Directory manager\" ldap://server.example.com -b \" dc=example,dc=com \" account unlock \" uid=example,ou=People,dc=example,dc=com \"", "ldapmodify -H ldap://server.example.com -x -D \" cn=Directory Manager \" -W dn: uid=example,ou=People,dc=example,dc=com changetype: modify replace: lastLoginTime lastLoginTime: 20210901000000Z", "ldapsearch -H ldap://server.example.com -x -D \" uid=example,ou=People,dc=example,dc=com \" -W -b \" dc=example,dc=com -s base\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/managing_access_control/assembly_re-enabling-accounts-that-reached-the-inactivity-limit_managing-access-control
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.14/making-open-source-more-inclusive
Chapter 1. Support overview
Chapter 1. Support overview Red Hat offers cluster administrators tools for gathering data for your cluster, monitoring, and troubleshooting. 1.1. Get support Get support : Visit the Red Hat Customer Portal to review knowledge base articles, submit a support case, and review additional product documentation and resources. 1.2. Remote health monitoring issues Remote health monitoring issues : OpenShift Container Platform collects telemetry and configuration data about your cluster and reports it to Red Hat by using the Telemeter Client and the Insights Operator. Red Hat uses this data to understand and resolve issues in connected clusters . Similar to connected clusters, you can Use remote health monitoring in a restricted network . OpenShift Container Platform collects data and monitors health using the following: Telemetry : The Telemetry Client gathers and uploads the metrics values to Red Hat every four minutes and thirty seconds. Red Hat uses this data to: Monitor the clusters. Roll out OpenShift Container Platform upgrades. Improve the upgrade experience. Insights Operator : By default, OpenShift Container Platform installs and enables the Insights Operator, which reports configuration and component failure status every two hours. The Insights Operator helps to: Identify potential cluster issues proactively. Provide a solution and preventive action in Red Hat OpenShift Cluster Manager. You can Review telemetry information . If you have enabled remote health reporting, Use Insights to identify issues . You can optionally disable remote health reporting. 1.3. Gather data about your cluster Gather data about your cluster : Red Hat recommends gathering your debugging information when opening a support case. This helps Red Hat Support to perform a root cause analysis. A cluster administrator can use the following to gather data about your cluster: The must-gather tool : Use the must-gather tool to collect information about your cluster and to debug the issues. sosreport : Use the sosreport tool to collect configuration details, system information, and diagnostic data for debugging purposes. Cluster ID : Obtain the unique identifier for your cluster, when providing information to Red Hat Support. Bootstrap node journal logs : Gather bootkube.service journald unit logs and container logs from the bootstrap node to troubleshoot bootstrap-related issues. Cluster node journal logs : Gather journald unit logs and logs within /var/log on individual cluster nodes to troubleshoot node-related issues. A network trace : Provide a network packet trace from a specific OpenShift Container Platform cluster node or a container to Red Hat Support to help troubleshoot network-related issues. Diagnostic data : Use the redhat-support-tool command to gather diagnostic data about your cluster. 1.4. Troubleshooting issues A cluster administrator can monitor and troubleshoot the following OpenShift Container Platform component issues: Installation issues : OpenShift Container Platform installation proceeds through various stages. You can perform the following: Monitor the installation stages. Determine at which stage installation issues occur. Investigate multiple installation issues. Gather logs from a failed installation. Node issues : A cluster administrator can verify and troubleshoot node-related issues by reviewing the status, resource usage, and configuration of a node. You can query the following: Kubelet's status on a node. Cluster node journal logs.
CRI-O issues : A cluster administrator can verify CRI-O container runtime engine status on each cluster node. If you experience container runtime issues, perform the following: Gather CRI-O journald unit logs. Clean CRI-O storage. Operating system issues : OpenShift Container Platform runs on Red Hat Enterprise Linux CoreOS. If you experience operating system issues, you can investigate kernel crash procedures. Ensure the following: Enable kdump. Test the kdump configuration. Analyze a core dump. Network issues : To troubleshoot Open vSwitch issues, a cluster administrator can perform the following: Configure the Open vSwitch log level temporarily. Configure the Open vSwitch log level permanently. Display Open vSwitch logs. Operator issues : A cluster administrator can do the following to resolve Operator issues: Verify Operator subscription status. Check Operator pod health. Gather Operator logs. Pod issues : A cluster administrator can troubleshoot pod-related issues by reviewing the status of a pod and completing the following: Review pod and container logs. Start debug pods with root access. Source-to-image issues : A cluster administrator can observe the S2I stages to determine where in the S2I process a failure occurred. Gather the following to resolve Source-to-Image (S2I) issues: Source-to-Image diagnostic data. Application diagnostic data to investigate application failure. Storage issues : A multi-attach storage error occurs when the mounting volume on a new node is not possible because the failed node cannot unmount the attached volume. A cluster administrator can do the following to resolve multi-attach storage issues: Enable multiple attachments by using RWX volumes. Recover or delete the failed node when using an RWO volume. Monitoring issues : A cluster administrator can follow the procedures on the troubleshooting page for monitoring. If the metrics for your user-defined projects are unavailable or if Prometheus is consuming a lot of disk space, check the following: Investigate why user-defined metrics are unavailable. Determine why Prometheus is consuming a lot of disk space. Logging issues : A cluster administrator can follow the procedures on the troubleshooting page for OpenShift Logging issues. Check the following to resolve logging issues: Status of the Logging Operator . Status of the Log store . OpenShift Logging alerts . Information about your OpenShift logging environment using the oc adm must-gather command . OpenShift CLI (oc) issues : Investigate OpenShift CLI (oc) issues by increasing the log level.
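As a quick illustration of the data gathering that several of these troubleshooting paths rely on, the default collection can be started with a single command before you open a support case. This is a minimal sketch; by default the command writes its output to a local must-gather directory in the current working directory: USD oc adm must-gather The resulting directory can then be compressed and attached to your support case on the Red Hat Customer Portal.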
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/support/support-overview
4.5. Request Level Transactions
4.5. Request Level Transactions Request level transactions are used when the request is not in the scope of a global or local transaction, which implies autoCommit is true . In a request level transaction, your application does not need to explicitly call commit or rollback; rather, every command is assumed to be its own transaction that will automatically be committed or rolled back by the server. JBoss Data Virtualization can perform updates through virtual tables. These updates might result in an update against multiple physical systems, even though the application issues the update command against a single virtual table. Often, a user might not know whether the queried tables actually update multiple sources and require a transaction. For that reason, JBoss Data Virtualization allows your application to automatically wrap commands in transactions when necessary. Because this wrapping incurs a performance penalty for your queries, you can choose from a number of available wrapping modes to suit your environment. You need to choose between the highest degree of integrity and performance your application needs. For example, if your data sources are not transaction-compliant, you might turn transaction wrapping off to maximize performance.
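For illustration only, the following JDBC sketch shows what request level transaction behavior looks like from client code; the connection URL, credentials, and view name are placeholder assumptions rather than values taken from this guide. Because autoCommit defaults to true, each command runs as its own transaction and the server commits or rolls it back without an explicit call from the application:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RequestLevelTransactionExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details, not taken from this guide.
        String url = "jdbc:teiid:myVDB@mm://dvhost.example.com:31000";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // autoCommit is true by default, so this single update is its own
            // request level transaction; the server commits or rolls it back
            // without an explicit commit from the client.
            stmt.executeUpdate("UPDATE orders_view SET status = 'SHIPPED' WHERE id = 10");
        }
    }
}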
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/request_level_transactions1
Chapter 1. Architecture overview
Chapter 1. Architecture overview OpenShift Container Platform is a cloud-based Kubernetes container platform. The foundation of OpenShift Container Platform is based on Kubernetes and therefore shares the same technology. To learn more about OpenShift Container Platform and Kubernetes, see product architecture . 1.1. Glossary of common terms for OpenShift Container Platform architecture This glossary defines common terms that are used in the architecture content. These terms help you understand OpenShift Container Platform architecture effectively. access policies A set of roles that dictate how users, applications, and entities within a cluster interact with one another. An access policy increases cluster security. admission plugins Admission plugins enforce security policies, resource limitations, or configuration requirements. authentication To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate to the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. bootstrap A temporary machine that runs minimal Kubernetes and deploys the OpenShift Container Platform control plane. certificate signing requests (CSRs) A resource requests a denoted signer to sign a certificate. This request might get approved or denied. Cluster Version Operator (CVO) An Operator that checks with the OpenShift Container Platform Update Service to see the valid updates and update paths based on current component versions and information in the graph. compute nodes Nodes that are responsible for executing workloads for cluster users. Compute nodes are also known as worker nodes. configuration drift A situation where the configuration on a node does not match what the machine config specifies. containers Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers anywhere, from a data center to a public or private cloud to your local host. container orchestration engine Software that automates the deployment, management, scaling, and networking of containers. container workloads Applications that are packaged and deployed in containers. control groups (cgroups) Partitions sets of processes into groups to manage and limit the resources processes consume. control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the life cycle of containers. Control planes are also known as control plane machines. CRI-O A Kubernetes native container runtime implementation that integrates with the operating system to deliver an efficient Kubernetes experience. deployment A Kubernetes resource object that maintains the life cycle of an application. Dockerfile A text file that contains the user commands to perform on a terminal to assemble the image. hosted control planes An OpenShift Container Platform feature that enables hosting a control plane on the OpenShift Container Platform cluster from its data plane and workers. This model performs the following actions: Optimize infrastructure costs required for the control planes. Improve the cluster creation time. Enable hosting the control plane using the Kubernetes native high level primitives.
For example, deployments, stateful sets. Allow a strong network segmentation between the control plane and workloads. hybrid cloud deployments Deployments that deliver a consistent platform across bare metal, virtual, private, and public cloud environments. This offers speed, agility, and portability. Ignition A utility that RHCOS uses to manipulate disks during initial configuration. It completes common disk tasks, including partitioning disks, formatting partitions, writing files, and configuring users. installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. kubernetes manifest Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemon sets. Machine Config Daemon (MCD) A daemon that regularly checks the nodes for configuration drift. Machine Config Operator (MCO) An Operator that applies the new configuration to your cluster machines. machine config pools (MCP) A group of machines, such as control plane components or user workloads, that are based on the resources that they handle. metadata Additional information about cluster deployment artifacts. microservices An approach to writing software. Applications can be separated into the smallest components, independent from each other by using microservices. mirror registry A registry that holds the mirror of OpenShift Container Platform images. monolithic applications Applications that are self-contained, built, and packaged as a single piece. namespaces A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. networking Network information of OpenShift Container Platform cluster. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OpenShift Container Platform Update Service (OSUS) For clusters with internet access, Red Hat Enterprise Linux (RHEL) provides over-the-air updates by using an OpenShift Container Platform update service as a hosted service located behind public APIs. OpenShift CLI ( oc ) A command line tool to run OpenShift Container Platform commands on the terminal. OpenShift Dedicated A managed RHEL OpenShift Container Platform offering on Amazon Web Services (AWS) and Google Cloud Platform (GCP). OpenShift Dedicated focuses on building and scaling applications. OpenShift Container Platform registry A registry provided by OpenShift Container Platform to manage images. Operator The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. OperatorHub A platform that contains various OpenShift Container Platform Operators to install. Operator Lifecycle Manager (OLM) OLM helps you to install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way. over-the-air (OTA) updates The OpenShift Container Platform Update Service (OSUS) provides over-the-air updates to OpenShift Container Platform, including Red Hat Enterprise Linux CoreOS (RHCOS). 
pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. private registry OpenShift Container Platform can use any server implementing the container image registry API as a source of the image which allows the developers to push and pull their private container images. public registry OpenShift Container Platform can use any server implementing the container image registry API as a source of the image which allows the developers to push and pull their public container images. RHEL OpenShift Container Platform Cluster Manager A managed service where you can install, modify, operate, and upgrade your OpenShift Container Platform clusters. RHEL Quay Container Registry A Quay.io container registry that serves most of the container images and Operators to OpenShift Container Platform clusters. replication controllers An asset that indicates how many pod replicas are required to run at a time. role-based access control (RBAC) A key security control to ensure that cluster users and workloads have only access to resources required to execute their roles. route Routes expose a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance. scaling The increasing or decreasing of resource capacity. service A service exposes a running application on a set of pods. Source-to-Image (S2I) image An image created based on the programming language of the application source code in OpenShift Container Platform to deploy applications. storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Telemetry A component to collect information such as size, health, and status of OpenShift Container Platform. template A template describes a set of objects that can be parameterized and processed to produce a list of objects for creation by OpenShift Container Platform. user-provisioned infrastructure You can install OpenShift Container Platform on the infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. web console A user interface (UI) to manage OpenShift Container Platform. worker node Nodes that are responsible for executing workloads for cluster users. Worker nodes are also known as compute nodes. Additional resources For more information on networking, see OpenShift Container Platform networking . For more information on storage, see OpenShift Container Platform storage . For more information on authentication, see OpenShift Container Platform authentication . For more information on Operator Lifecycle Manager (OLM), see OLM . For more information on logging, see OpenShift Container Platform Logging . For more information on over-the-air (OTA) updates, see Updating OpenShift Container Platform clusters . 1.2. About installation and updates As a cluster administrator, you can use the OpenShift Container Platform installation program to install and deploy a cluster by using one of the following methods: Installer-provisioned infrastructure User-provisioned infrastructure 1.3. 
About the control plane The control plane manages the worker nodes and the pods in your cluster. You can configure nodes with the use of machine config pools (MCPs). MCPs are groups of machines, such as control plane components or user workloads, that are based on the resources that they handle. OpenShift Container Platform assigns different roles to hosts. These roles define the function of a machine in a cluster. The cluster contains definitions for the standard control plane and worker role types. You can use Operators to package, deploy, and manage services on the control plane. Operators are important components in OpenShift Container Platform because they provide the following services: Perform health checks Provide ways to watch applications Manage over-the-air updates Ensure applications stay in the specified state 1.4. About containerized applications for developers As a developer, you can use different tools, methods, and formats to develop your containerized application based on your unique requirements, for example: Use various build-tool, base-image, and registry options to build a simple container application. Use supporting components such as OperatorHub and templates to develop your application. Package and deploy your application as an Operator. You can also create a Kubernetes manifest and store it in a Git repository. Kubernetes works on basic units called pods. A pod is a single instance of a running process in your cluster. Pods can contain one or more containers. You can create a service by grouping a set of pods and their access policies. Services provide permanent internal IP addresses and host names for other applications to use as pods are created and destroyed. Kubernetes defines workloads based on the type of your application. 1.5. About Red Hat Enterprise Linux CoreOS (RHCOS) and Ignition As a cluster administrator, you can perform the following Red Hat Enterprise Linux CoreOS (RHCOS) tasks: Learn about the generation of single-purpose container operating system technology . Choose how to configure Red Hat Enterprise Linux CoreOS (RHCOS) Choose how to deploy Red Hat Enterprise Linux CoreOS (RHCOS): Installer-provisioned deployment User-provisioned deployment The OpenShift Container Platform installation program creates the Ignition configuration files that you need to deploy your cluster. Red Hat Enterprise Linux CoreOS (RHCOS) uses Ignition during the initial configuration to perform common disk tasks, such as partitioning, formatting, writing files, and configuring users. During the first boot, Ignition reads its configuration from the installation media or the location that you specify and applies the configuration to the machines. You can learn how Ignition works , the process for a Red Hat Enterprise Linux CoreOS (RHCOS) machine in an OpenShift Container Platform cluster, view Ignition configuration files, and change Ignition configuration after an installation. 1.6. About admission plugins You can use admission plugins to regulate how OpenShift Container Platform functions. After a resource request is authenticated and authorized, admission plugins intercept the resource request to the master API to validate resource requests and to ensure that scaling policies are adhered to. Admission plugins are used to enforce security policies, resource limitations, or configuration requirements.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/architecture/architecture-overview
Chapter 9. Adding more RHEL compute machines to an OpenShift Container Platform cluster
Chapter 9. Adding more RHEL compute machines to an OpenShift Container Platform cluster If your OpenShift Container Platform cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it. 9.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.7, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines, which are also known as worker machines, in your cluster if you use a user-provisioned infrastructure installation. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane, or master, machines in your cluster. As with all installations that use user-provisioned infrastructure, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Important Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. You must add any RHEL compute machines to the cluster after you initialize the control plane. 9.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute, or worker, machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10 percent for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base OS: RHEL 7.9 with "Minimal" installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is deprecated. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. In addition, you must not upgrade your compute machines to RHEL 8 because support is not available in this release. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Enabling FIPS Mode in the RHEL 7 documentation. Important The use of FIPS Validated / Modules in Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 architecture. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . 
Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing the system's temporary directory. The system's temporary directory is determined according to the rules defined in the tempfile module in Python's standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=true attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow the system access to the cluster's API service endpoints. 9.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 9.3. Preparing an image for your cloud Amazon Machine Images (AMI) are required since various image formats cannot be used directly by AWS. You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You must list the AMI IDs so that the correct RHEL version needed for the compute machines is selected. 9.3.1. Listing latest available RHEL images on AWS AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you will need to know the AMI ID before configuration. The AWS Command Line Interface (CLI) is used to list the available Red Hat Enterprise Linux (RHEL) image IDs. Prerequisites You have installed the AWS CLI. Procedure Use this command to list RHEL 7.9 Amazon Machine Images (AMI): USD aws ec2 describe-images --owners 309956199498 \ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ 2 --filters "Name=name,Values=RHEL-7.9*" \ 3 --region us-east-1 \ 4 --output table 5 1 The --owners command option shows Red Hat images based on the account ID 309956199498 . Important This account ID is required to display AMI IDs for images that are provided by Red Hat. 2 The --query command option sets how the images are sorted with the parameters 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' . In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs. 3 The --filters command option sets which version of RHEL is shown. In this example, since the filter is set by "Name=name,Values=RHEL-7.9*" , then RHEL 7.9 AMIs are shown. 4 The --region command option sets the region where an AMI is stored. 5 The --output command option sets how the results are displayed. Note When creating a RHEL compute machine for AWS, ensure that the AMI is RHEL 7.9.
Example output ---------------------------------------------------------------------------------------------------------- | DescribeImages | +---------------------------+----------------------------------------------------+-----------------------+ | 2020-05-13T09:50:36.000Z | RHEL-7.9_HVM_BETA-20200422-x86_64-0-Hourly2-GP2 | ami-038714142142a6a64 | | 2020-09-18T07:51:03.000Z | RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2 | ami-005b7876121b7244d | | 2021-02-09T09:46:19.000Z | RHEL-7.9_HVM-20210208-x86_64-0-Hourly2-GP2 | ami-030e754805234517e | +---------------------------+----------------------------------------------------+-----------------------+ Additional resources You may also manually import RHEL images to AWS . 9.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories Enable only the repositories required by OpenShift Container Platform 4.7: # subscription-manager repos \ --enable="rhel-7-server-rpms" \ --enable="rhel-7-fast-datapath-rpms" \ --enable="rhel-7-server-extras-rpms" \ --enable="rhel-7-server-optional-rpms" \ --enable="rhel-7-server-ose-4.7-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 9.5. Attaching the role permissions to RHEL instance in AWS Using the Amazon IAM console in your browser, you may select the needed roles and assign them to a worker node. Procedure From the AWS IAM console, create your desired IAM role . Attach the IAM role to the desired worker node. Additional resources See Required AWS permissions for IAM roles . 9.6. Tagging a RHEL worker node as owned or shared A cluster uses the value of the kubernetes.io/cluster/<clusterid>,Value=(owned|shared) tag to determine the lifetime of the resources related to the AWS cluster. The owned tag value should be added if the resource should be destroyed as part of destroying the cluster. The shared tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. Procedure With RHEL compute machines, the RHEL worker instance must be tagged with kubernetes.io/cluster/<clusterid>=owned or kubernetes.io/cluster/<cluster-id>=shared . 
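For example, the tag can be applied with the AWS CLI; this is a sketch in which the instance ID and cluster ID are placeholders that you must replace with your own values: USD aws ec2 create-tags --resources <instance_id> --tags Key=kubernetes.io/cluster/<clusterid>,Value=owned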
Note Do not tag all existing security groups with the kubernetes.io/cluster/<name>,Value=<clusterid> tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer. 9.7. Adding more RHEL compute machines to your cluster You can add more compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.7 cluster. Prerequisites Your OpenShift Container Platform cluster already contains RHEL compute nodes. The hosts file that you used to add the first RHEL compute machines to your cluster is on the machine that you use to run the playbook. The machine that you run the playbook on must be able to access all of the RHEL hosts. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. The kubeconfig file for the cluster and the installation program that you used to install the cluster are on the machine that you use to run the playbook. You must prepare the RHEL hosts for installation. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. If you use SSH key-based authentication, you must manage the key with an SSH agent. Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Procedure Open the Ansible inventory file at /<path>/inventory/hosts that defines your compute machine hosts and required variables. Rename the [new_workers] section of the file to [workers] . Add a [new_workers] section to the file and define the fully-qualified domain names for each new host. The file resembles the following example: In this example, the mycluster-rhel7-0.example.com and mycluster-rhel7-1.example.com machines are in the cluster and you add the mycluster-rhel7-2.example.com and mycluster-rhel7-3.example.com machines. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the scaleup playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 9.8. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
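If the list is long, you can narrow the output to the pending requests with a simple filter, for example: USD oc get csr | grep Pending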
If the CSRs were not approved automatically and all of the pending CSRs for the machines that you added are in the Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 9.9.
Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the value of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify or define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file.
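For example, the variables section of a hosts file that uses a non-root SSH user might begin similar to the following, where <ansible_user> is a placeholder for your user name: [all:vars] ansible_user=<ansible_user> ansible_become=True openshift_kubeconfig_path="~/.kube/config"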
[ "aws ec2 describe-images --owners 309956199498 \\ 1 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \\ 2 --filters \"Name=name,Values=RHEL-7.9*\" \\ 3 --region us-east-1 \\ 4 --output table 5", "---------------------------------------------------------------------------------------------------------- | DescribeImages | +---------------------------+----------------------------------------------------+-----------------------+ | 2020-05-13T09:50:36.000Z | RHEL-7.9_HVM_BETA-20200422-x86_64-0-Hourly2-GP2 | ami-038714142142a6a64 | | 2020-09-18T07:51:03.000Z | RHEL-7.9_HVM_GA-20200917-x86_64-0-Hourly2-GP2 | ami-005b7876121b7244d | | 2021-02-09T09:46:19.000Z | RHEL-7.9_HVM-20210208-x86_64-0-Hourly2-GP2 | ami-030e754805234517e | +---------------------------+----------------------------------------------------+-----------------------+", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-7-server-rpms\" --enable=\"rhel-7-fast-datapath-rpms\" --enable=\"rhel-7-server-extras-rpms\" --enable=\"rhel-7-server-optional-rpms\" --enable=\"rhel-7-server-ose-4.7-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root #ansible_become=True openshift_kubeconfig_path=\"~/.kube/config\" [workers] mycluster-rhel7-0.example.com mycluster-rhel7-1.example.com [new_workers] mycluster-rhel7-2.example.com mycluster-rhel7-3.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.20.0 master-1 Ready master 63m v1.20.0 master-2 Ready master 64m v1.20.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.20.0 master-1 Ready master 73m v1.20.0 master-2 Ready master 74m v1.20.0 worker-0 Ready worker 11m v1.20.0 worker-1 Ready worker 11m v1.20.0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/machine_management/more-rhel-compute
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/making-open-source-more-inclusive
14.8.13. smbpasswd
14.8.13. smbpasswd smbpasswd <options> <username> <password> The smbpasswd program manages encrypted passwords. This program can be run by a superuser to change any user's password as well as by an ordinary user to change their own Samba password.
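For example, a superuser might add a user to the Samba password database and set that user's password with smbpasswd -a <username> , where <username> is a placeholder for an existing system user, while an ordinary user can run smbpasswd with no arguments to change their own password.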
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-smbpasswd
Chapter 3. Creating the Overcloud
Chapter 3. Creating the Overcloud The creation of an Overcloud that uses IPv6 networking requires additional arguments for the openstack overcloud deploy command. For example: The above command uses the following options: --templates - Creates the Overcloud from the default Heat template collection. -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that initializes network isolation configuration for IPv6. -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it is an environment file that configures each node to use a single NIC with VLANs for the isolated networks. -e /home/stack/templates/network-environment.yaml - Adds an additional environment file to the Overcloud deployment. In this case, it includes overrides related to IPv6. Ensure that network_data.yaml includes the setting ipv6: true . Earlier versions of Red Hat OpenStack director included two routes: one for IPv6 on the External network (default) and one for IPv4 on the Control Plane. To use both default routes, ensure that the controller definition in roles_data.yaml contains both networks in default_route_networks (for example, default_route_networks: ['External', 'ControlPlane'] ). --ntp-server pool.ntp.org - Sets the NTP server. The Overcloud creation process begins and the director provisions your nodes. This process takes some time to complete. To view the status of the Overcloud creation, open a separate terminal as the stack user and run: 3.1. Accessing the Overcloud The director generates a script to configure and help authenticate interactions with your Overcloud from the director host. The director saves this file ( overcloudrc ) in your stack user's home directory. Run the following command to use this file: This loads the necessary environment variables to interact with your Overcloud from the director host's CLI. To return to interacting with the director's host, run the following command:
[ "openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/templates/network-environment.yaml --ntp-server pool.ntp.org [ADDITIONAL OPTIONS]", "source ~/stackrc heat stack-list --show-nested", "source ~/overcloudrc", "source ~/stackrc" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/ipv6_networking_for_the_overcloud/creating_the_overcloud
Chapter 4. Multi-service signup
Chapter 4. Multi-service signup By the end of this section, you will be familiar with the procedure to create and customize a multiple-service signup page. If you are using the multiple services functionality, you can customize the signup procedure to allow customers to subscribe to different services. 4.1. Prerequisites You should be familiar with layout and page creation procedures as well as with the basics of Liquid formatting tags. For more details about liquid tags, see Liquid reference . "Multiple Service" functionality must also be enabled on your account (available for Pro plan and up). It is strongly recommended that you read about signup workflows , so you will have the whole setup prepared and know how it works. 4.2. Introduction Start the process by creating a new layout, which will serve as the template for your multi-service signup page. Go into the Layouts section of the CMS system, and create the new layout. You can call it multipleservicesignup to be able to easily distinguish it from the other layouts. In the editor, paste the general structure of your standard layout (such as home or main layout). Now delete everything you do not need - all the containers, sidebars, additional boxes, etc . Having created the backbone of your layout, proceed to customizing the code for signup. 4.3. Multi-service signup 4.3.1. Retrieving information about services In order to retrieve all the information about the services that you need to construct the proper signup link, you have to loop through the service objects. Services are a part of the model object. 4.3.2. Configuring the signup columns You already have your layout and loop accessing the service objects. Now decide how you want to display information about the service and the signup link. For example, divide them into columns with a service description and a signup link at the bottom. Every column will be a div box with a service-column class to contain all the necessary information. The container inside serves as a custom description field. service.name is the service name, which in this case will be the container's name. 4.3.3. Configuring the subscription Now for the main part of your custom service signup: to create the signup link, extract the signup URL and the service ID. Take the signup URL from the urls object and the service ID from the service object that you iterate over in the loop. The final link code will look like this: You also have to take into account that the user may already have signed up for some of your services. Create a conditional block to check. With this, you can generate the final code: 4.3.4. Styling Add some final touches to the generated markup, depending on the number of services you have. In the case of this example it is two, so the CSS code for the service-column div will be: In the example, we have used the percentage-based layout to dynamically assign the width of the column based on the containing div's dimensions. Now you should have a properly working and good-looking multiple services subscription page. Congratulations! If you would like to display the columns in a specific order, try using conditional expressions (if/else/case) conditioned on the service name or another value you know, as in the sketch below.
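As a minimal sketch, assuming one of your services is named 'Main API' (a placeholder for one of your own service names), you could render that column first and then loop over the remaining services: {% for service in provider.services %} {% if service.name == 'Main API' %} <div class="service-column"> ... </div> {% endif %} {% endfor %} {% for service in provider.services %} {% unless service.name == 'Main API' %} <div class="service-column"> ... </div> {% endunless %} {% endfor %}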
[ "{% for service in provider.services %} . . . {% endfor %}", "{% for service in provider.services %} <div class=\"service-column\"> <p>{{ service.name }}</p> <p>{{ service.description }}</p> . . . </div> {% endfor %}", "<a href=\"{{ urls.signup }}?{{ service | toparam }}\">Signup to {{ service.name }}</a>", "{% unless service.subscribed? %} <a href=\"{{ urls.signup }}?{{ service | toparam }}\">Signup to {{ service.name }}</a> {% endunless %}", "{% for service in provider.services %} <div class=\"service-column\"> <p>{{ service.name }}</p> <p>{{ service.description }}</p> {% unless service.subscribed? %} <a href=\"{{ urls.signup }}?{{ service | to_param }}\">Signup to {{ service.name }}</a> {% endunless %} </div> {% endfor %}", ".service-column { float: left; margin-left: 10%; width: 45%; } .service-column:first-child { margin-left: 0; }" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/creating_the_developer_portal/multi-service-signup
Chapter 11. Managing Red Hat Gluster Storage Volumes
Chapter 11. Managing Red Hat Gluster Storage Volumes This chapter describes how to perform common volume management operations on the Red Hat Gluster Storage volumes. 11.1. Configuring Volume Options Note Volume options can be configured while the trusted storage pool is online. The current settings for a volume can be viewed using the following command: Volume options can be configured using the following command: For example, to specify the performance cache size for test-volume : Volume options can be reset using the following command: For example, to reset the changelog option for test-volume :
[ "gluster volume info VOLNAME", "gluster volume set VOLNAME OPTION PARAMETER", "gluster volume set test-volume performance.cache-size 256MB volume set: success", "gluster volume reset VOLNAME OPTION_NAME", "gluster volume reset test-volume changelog volume set: success" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/chap-Managing_Red_Hat_Storage_Volumes
Chapter 1. Common object reference
Chapter 1. Common object reference 1.1. com.coreos.monitoring.v1.AlertmanagerList schema Description AlertmanagerList is a list of Alertmanager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Alertmanager) List of alertmanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.2. com.coreos.monitoring.v1.PodMonitorList schema Description PodMonitorList is a list of PodMonitor Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodMonitor) List of podmonitors. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.3. com.coreos.monitoring.v1.ProbeList schema Description ProbeList is a list of Probe Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Probe) List of probes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.4. com.coreos.monitoring.v1.PrometheusList schema Description PrometheusList is a list of Prometheus Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Prometheus) List of prometheuses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.5. com.coreos.monitoring.v1.PrometheusRuleList schema Description PrometheusRuleList is a list of PrometheusRule Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PrometheusRule) List of prometheusrules. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.6. com.coreos.monitoring.v1.ServiceMonitorList schema Description ServiceMonitorList is a list of ServiceMonitor Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceMonitor) List of servicemonitors. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.7. com.coreos.monitoring.v1.ThanosRulerList schema Description ThanosRulerList is a list of ThanosRuler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ThanosRuler) List of thanosrulers. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.8. com.coreos.monitoring.v1beta1.AlertmanagerConfigList schema Description AlertmanagerConfigList is a list of AlertmanagerConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertmanagerConfig) List of alertmanagerconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.9. com.coreos.operators.v1.OLMConfigList schema Description OLMConfigList is a list of OLMConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OLMConfig) List of olmconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.10. com.coreos.operators.v1.OperatorGroupList schema Description OperatorGroupList is a list of OperatorGroup Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorGroup) List of operatorgroups. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.11. com.coreos.operators.v1.OperatorList schema Description OperatorList is a list of Operator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Operator) List of operators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.12. com.coreos.operators.v1alpha1.CatalogSourceList schema Description CatalogSourceList is a list of CatalogSource Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CatalogSource) List of catalogsources. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.13. com.coreos.operators.v1alpha1.ClusterServiceVersionList schema Description ClusterServiceVersionList is a list of ClusterServiceVersion Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterServiceVersion) List of clusterserviceversions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.14. 
com.coreos.operators.v1alpha1.InstallPlanList schema Description InstallPlanList is a list of InstallPlan Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (InstallPlan) List of installplans. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.15. com.coreos.operators.v1alpha1.SubscriptionList schema Description SubscriptionList is a list of Subscription Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Subscription) List of subscriptions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.16. com.coreos.operators.v2.OperatorConditionList schema Description OperatorConditionList is a list of OperatorCondition Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorCondition) List of operatorconditions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.17. com.github.openshift.api.apps.v1.DeploymentConfigList schema Description DeploymentConfigList is a collection of deployment configs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DeploymentConfig) Items is a list of deployment configs kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.18. com.github.openshift.api.authorization.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.19. com.github.openshift.api.authorization.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.20. com.github.openshift.api.authorization.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.21. com.github.openshift.api.authorization.v1.RoleList schema Description RoleList is a collection of Roles Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.22. com.github.openshift.api.build.v1.BuildConfigList schema Description BuildConfigList is a collection of BuildConfigs. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BuildConfig) items is a list of build configs kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.23. com.github.openshift.api.build.v1.BuildList schema Description BuildList is a collection of Builds. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Build) items is a list of builds kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.24. com.github.openshift.api.image.v1.ImageList schema Description ImageList is a list of Image objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Image) Items is a list of images kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.25. com.github.openshift.api.image.v1.ImageStreamList schema Description ImageStreamList is a list of ImageStream objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageStream) Items is a list of imageStreams kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.26. com.github.openshift.api.image.v1.ImageStreamTagList schema Description ImageStreamTagList is a list of ImageStreamTag objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageStreamTag) Items is the list of image stream tags kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.27. com.github.openshift.api.image.v1.ImageTagList schema Description ImageTagList is a list of ImageTag objects. When listing image tags, the image field is not populated. Tags are returned in alphabetical order by image stream and then tag. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageTag) Items is the list of image stream tags kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.28. com.github.openshift.api.oauth.v1.OAuthAccessTokenList schema Description OAuthAccessTokenList is a collection of OAuth access tokens Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthAccessToken) Items is the list of OAuth access tokens kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.29. com.github.openshift.api.oauth.v1.OAuthAuthorizeTokenList schema Description OAuthAuthorizeTokenList is a collection of OAuth authorization tokens Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthAuthorizeToken) Items is the list of OAuth authorization tokens kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.30. com.github.openshift.api.oauth.v1.OAuthClientAuthorizationList schema Description OAuthClientAuthorizationList is a collection of OAuth client authorizations Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthClientAuthorization) Items is the list of OAuth client authorizations kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.31. com.github.openshift.api.oauth.v1.OAuthClientList schema Description OAuthClientList is a collection of OAuth clients Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuthClient) Items is the list of OAuth clients kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.32. com.github.openshift.api.oauth.v1.UserOAuthAccessTokenList schema Description UserOAuthAccessTokenList is a collection of access tokens issued on behalf of the requesting user Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (UserOAuthAccessToken) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.33. com.github.openshift.api.project.v1.ProjectList schema Description ProjectList is a list of Project objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Project) Items is the list of projects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.34. com.github.openshift.api.quota.v1.AppliedClusterResourceQuotaList schema Description AppliedClusterResourceQuotaList is a collection of AppliedClusterResourceQuotas Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AppliedClusterResourceQuota) Items is a list of AppliedClusterResourceQuota kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.35. com.github.openshift.api.route.v1.RouteList schema Description RouteList is a collection of Routes. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Route) items is a list of routes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.36. com.github.openshift.api.security.v1.RangeAllocationList schema Description RangeAllocationList is a list of RangeAllocations objects Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RangeAllocation) List of RangeAllocations. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.37. com.github.openshift.api.template.v1.BrokerTemplateInstanceList schema Description BrokerTemplateInstanceList is a list of BrokerTemplateInstance objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BrokerTemplateInstance) items is a list of BrokerTemplateInstances kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.38. com.github.openshift.api.template.v1.TemplateInstanceList schema Description TemplateInstanceList is a list of TemplateInstance objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (TemplateInstance) items is a list of TemplateInstances kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
1.39. com.github.openshift.api.template.v1.TemplateList schema Description TemplateList is a list of Template objects. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Template) Items is a list of templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
1.40. com.github.openshift.api.user.v1.GroupList schema Description GroupList is a collection of Groups Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Group) Items is the list of groups kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
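Every List schema in this chapter shares the same wrapper shape: apiVersion and kind identify the list type, metadata carries the ListMeta (resourceVersion and, for paginated reads, a continue token), and items holds the individual objects. The following is a minimal, hypothetical GroupList body shown only to illustrate that common structure; the group name, user names, and resourceVersion are placeholders, not values defined by this reference.

apiVersion: user.openshift.io/v1
kind: GroupList
metadata:
  resourceVersion: "163021"      # placeholder value
items:
- apiVersion: user.openshift.io/v1
  kind: Group
  metadata:
    name: developers             # hypothetical group
  users:
  - alice
  - bob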
1.41. com.github.openshift.api.user.v1.IdentityList schema Description IdentityList is a collection of Identities Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Identity) Items is the list of identities kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
1.42. com.github.openshift.api.user.v1.UserList schema Description UserList is a collection of Users Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (User) Items is the list of users kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
1.43. com.github.operator-framework.api.pkg.lib.version.OperatorVersion schema Description OperatorVersion is a wrapper around semver.Version which supports correct marshaling to YAML and JSON. Type string
1.44. com.github.operator-framework.api.pkg.operators.v1alpha1.APIServiceDefinitions schema Description APIServiceDefinitions declares all of the extension APIs managed or required by an operator being run by a ClusterServiceVersion. Type object Schema Property Type Description owned array (APIServiceDescription) required array (APIServiceDescription)
1.45. com.github.operator-framework.api.pkg.operators.v1alpha1.CustomResourceDefinitions schema Description CustomResourceDefinitions declares all of the CRDs managed or required by an operator being run by a ClusterServiceVersion. If the CRD is present in the Owned list, it is implicitly required. Type object Schema Property Type Description owned array (CRDDescription) required array (CRDDescription)
1.46. com.github.operator-framework.api.pkg.operators.v1alpha1.InstallMode schema Description InstallMode associates an InstallModeType with a flag representing if the CSV supports it Type object Required type supported Schema Property Type Description supported boolean type string
1.47. com.github.operator-framework.operator-lifecycle-manager.pkg.package-server.apis.operators.v1.PackageManifestList schema Description PackageManifestList is a list of PackageManifest objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PackageManifest) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.48. io.cncf.cni.k8s.v1.NetworkAttachmentDefinitionList schema Description NetworkAttachmentDefinitionList is a list of NetworkAttachmentDefinition Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkAttachmentDefinition) List of network-attachment-definitions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.49. io.cncf.cni.whereabouts.v1alpha1.IPPoolList schema Description IPPoolList is a list of IPPool Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IPPool) List of ippools. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.50. io.cncf.cni.whereabouts.v1alpha1.OverlappingRangeIPReservationList schema Description OverlappingRangeIPReservationList is a list of OverlappingRangeIPReservation Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OverlappingRangeIPReservation) List of overlappingrangeipreservations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.51. io.k8s.api.admissionregistration.v1.MutatingWebhookConfigurationList schema Description MutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MutatingWebhookConfiguration) List of MutatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.52. io.k8s.api.admissionregistration.v1.ValidatingWebhookConfigurationList schema Description ValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ValidatingWebhookConfiguration) List of ValidatingWebhookConfiguration. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.53. io.k8s.api.apps.v1.ControllerRevisionList schema Description ControllerRevisionList is a resource containing a list of ControllerRevision objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerRevision) Items is the list of ControllerRevisions kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.54. io.k8s.api.apps.v1.DaemonSetList schema Description DaemonSetList is a collection of daemon sets. 
Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DaemonSet) A list of daemon sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.55. io.k8s.api.apps.v1.DeploymentList schema Description DeploymentList is a list of Deployments. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Deployment) Items is the list of Deployments. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.56. io.k8s.api.apps.v1.ReplicaSetList schema Description ReplicaSetList is a collection of ReplicaSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicaSet) List of ReplicaSets. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.57. io.k8s.api.apps.v1.StatefulSetList schema Description StatefulSetList is a collection of StatefulSets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StatefulSet) Items is the list of stateful sets. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.58. io.k8s.api.autoscaling.v2.HorizontalPodAutoscalerList schema Description HorizontalPodAutoscalerList is a list of horizontal pod autoscaler objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HorizontalPodAutoscaler) items is the list of horizontal pod autoscaler objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. 1.59. io.k8s.api.batch.v1.CronJobList schema Description CronJobList is a collection of cron jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CronJob) items is the list of CronJobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.60. io.k8s.api.batch.v1.JobList schema Description JobList is a collection of jobs. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Job) items is the list of Jobs. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.61. io.k8s.api.certificates.v1.CertificateSigningRequestList schema Description CertificateSigningRequestList is a collection of CertificateSigningRequest objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CertificateSigningRequest) items is a collection of CertificateSigningRequest objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta 1.62. io.k8s.api.coordination.v1.LeaseList schema Description LeaseList is a list of Lease objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Lease) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.63. io.k8s.api.core.v1.ComponentStatusList schema Description Status of all the conditions for the component as a list of ComponentStatus objects. Deprecated: This API is deprecated in v1.19+ Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ComponentStatus) List of ComponentStatus objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.64. io.k8s.api.core.v1.ConfigMapList schema Description ConfigMapList is a resource containing a list of ConfigMap objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConfigMap) Items is the list of ConfigMaps. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
1.65. io.k8s.api.core.v1.ConfigMapVolumeSource schema Description Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names optional boolean optional specify whether the ConfigMap or its keys must be defined
1.66. io.k8s.api.core.v1.CSIVolumeSource schema Description Represents a source location of a volume to mount, managed by an external CSI driver Type object Required driver Schema Property Type Description driver string driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. fsType string fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. nodePublishSecretRef LocalObjectReference nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. readOnly boolean readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). volumeAttributes object (string) volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values.
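As a rough illustration of the two volume sources above, the pod fragment below mounts a ConfigMap as a volume, projecting a single key to a chosen path with restrictive file permissions, alongside an inline CSI volume. The ConfigMap name, key, image, CSI driver name, and volumeAttributes are placeholders introduced for this sketch, not values defined by this reference.

apiVersion: v1
kind: Pod
metadata:
  name: volume-sources-example              # hypothetical pod
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest  # placeholder image
    volumeMounts:
    - name: app-config
      mountPath: /etc/app
    - name: csi-data
      mountPath: /data
  volumes:
  - name: app-config
    configMap:
      name: app-settings                     # assumed ConfigMap name
      defaultMode: 0440                      # mode bits applied to the created files
      optional: false
      items:                                 # project only this key, to a custom relative path
      - key: settings.json
        path: conf/settings.json
  - name: csi-data
    csi:
      driver: csi.example.com                # assumed CSI driver name registered in the cluster
      readOnly: true
      volumeAttributes:
        share: data                          # driver-specific property, placeholder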
1.67. io.k8s.api.core.v1.EndpointsList schema Description EndpointsList is a list of endpoints. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Endpoints) List of endpoints. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.68. io.k8s.api.core.v1.EnvVar schema Description EnvVar represents an environment variable present in a Container. Type object Required name Schema Property Type Description name string Name of the environment variable. Must be a C_IDENTIFIER. value string Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "". valueFrom EnvVarSource Source for the environment variable's value. Cannot be used if value is not empty.
1.69. io.k8s.api.core.v1.EventList schema Description EventList is a list of events. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) List of events kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.70. io.k8s.api.core.v1.EventSource schema Description EventSource contains information for an event. Type object Schema Property Type Description component string Component from which the event is generated. host string Node name on which the event is generated.
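Referring back to the EnvVar schema in 1.68, the container fragment below sets a literal variable, a variable expanded from a previously defined one with the $(VAR_NAME) syntax, an escaped reference that is kept as a literal string, and a value resolved through valueFrom via the downward API. The variable names and image are illustrative only.

containers:
- name: app
  image: registry.example.com/app:latest     # placeholder image
  env:
  - name: LOG_DIR
    value: /var/log/app                      # plain literal value
  - name: LOG_FILE
    value: $(LOG_DIR)/app.log                # expanded using LOG_DIR defined above
  - name: LOG_PATTERN
    value: $$(LOG_DIR)/app.log               # double $$ escapes expansion; stays "$(LOG_DIR)/app.log"
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name             # downward API source for the value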
1.71. io.k8s.api.core.v1.LimitRangeList schema Description LimitRangeList is a list of LimitRange items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (LimitRange) Items is a list of LimitRange objects. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.72. io.k8s.api.core.v1.LocalObjectReference schema Description LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. Type object Schema Property Type Description name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
1.73. io.k8s.api.core.v1.NamespaceCondition schema Description NamespaceCondition contains details about state of namespace. Type object Required type status Schema Property Type Description lastTransitionTime Time message string reason string status string Status of the condition, one of True, False, Unknown. type string Type of namespace controller condition.
1.74. io.k8s.api.core.v1.NamespaceList schema Description NamespaceList is a list of Namespaces. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Namespace) Items is the list of Namespace objects in the list. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.75. io.k8s.api.core.v1.NodeList schema Description NodeList is the whole list of all Nodes which have been registered with master. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Node) List of nodes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
1.76. io.k8s.api.core.v1.ObjectReference schema Description ObjectReference contains enough information to let you inspect or modify the referred object. Type object Schema Property Type Description apiVersion string API version of the referent. fieldPath string If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2].
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. kind string Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names namespace string Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/ resourceVersion string Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency uid string UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids 1.77. io.k8s.api.core.v1.PersistentVolumeClaim schema Description PersistentVolumeClaim is a user's request for and claim to a persistent volume Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes status object PersistentVolumeClaimStatus is the current status of a persistent volume claim. ..spec Description:: + PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes Type object Property Type Description accessModes array (string) accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 dataSource object TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. dataSourceRef object dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. 
For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. resources object ResourceRequirements describes the compute resource requirements. selector LabelSelector selector is a label query over volumes to consider for binding. storageClassName string storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 volumeMode string volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. volumeName string volumeName is the binding reference to the PersistentVolume backing this claim. ..spec.dataSource Description:: + TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced ..spec.dataSourceRef Description:: + dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn't specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn't set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. 
* While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. (Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled. Type object Required kind name Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced namespace string Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. ..spec.resources Description:: + ResourceRequirements describes the compute resource requirements. Type object Property Type Description claims array Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. claims[] object ResourceClaim references one entry in PodSpec.ResourceClaims. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ ..spec.resources.claims Description:: + Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. Type array ..spec.resources.claims[] Description:: + ResourceClaim references one entry in PodSpec.ResourceClaims. Type object Required name Property Type Description name string Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. ..status Description:: + PersistentVolumeClaimStatus is the current status of a persistent volume claim. Type object Property Type Description accessModes array (string) accessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 allocatedResourceStatuses object (string) allocatedResourceStatuses stores status of resource being resized for the given PVC. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. 
* Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. ClaimResourceStatus can be in any of following states: - ControllerResizeInProgress: State set when resize controller starts resizing the volume in control-plane. - ControllerResizeFailed: State set when resize has failed in resize controller with a terminal error. - NodeResizePending: State set when resize controller has finished resizing the volume but further resizing of volume is needed on the node. - NodeResizeInProgress: State set when kubelet starts resizing the volume. - NodeResizeFailed: State set when resizing has failed in kubelet with a terminal error. Transient errors don't set NodeResizeFailed. For example: if expanding a PVC for more capacity - this field can be one of the following states: - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "ControllerResizeFailed" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizePending" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeInProgress" - pvc.status.allocatedResourceStatus['storage'] = "NodeResizeFailed" When this field is not set, it means that no resize operation is in progress for the given PVC. A controller that receives PVC update with previously unknown resourceName or ClaimResourceStatus should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. allocatedResources object (Quantity) allocatedResources tracks the resources allocated to a PVC including its capacity. Key names follow standard Kubernetes label syntax. Valid values are either: * Un-prefixed keys: - storage - the capacity of the volume. * Custom resources must use implementation-defined prefixed names such as "example.com/my-custom-resource" Apart from above values - keys that are unprefixed or have kubernetes.io prefix are considered reserved and hence may not be used. Capacity reported here may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. A controller that receives PVC update with previously unknown resourceName should ignore the update for the purpose it was designed. For example - a controller that only is responsible for resizing capacity of the volume, should ignore PVC updates that change other valid resources associated with PVC. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. capacity object (Quantity) capacity represents the actual resources of the underlying volume. conditions array conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. 
conditions[] object PersistentVolumeClaimCondition contains details about state of pvc phase string phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound ..status.conditions Description:: + conditions is the current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. Type array ..status.conditions[] Description:: + PersistentVolumeClaimCondition contains details about state of pvc Type object Required type status Property Type Description lastProbeTime Time lastProbeTime is the time we probed the condition. lastTransitionTime Time lastTransitionTime is the time the condition transitioned from one status to another. message string message is the human-readable message indicating details about last transition. reason string reason is a unique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized. status string type string 1.78. io.k8s.api.core.v1.PersistentVolumeClaimList schema Description PersistentVolumeClaimList is a list of PersistentVolumeClaim items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolumeClaim) items is a list of persistent volume claims. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.79. io.k8s.api.core.v1.PersistentVolumeList schema Description PersistentVolumeList is a list of PersistentVolume items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PersistentVolume) items is a list of persistent volumes. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.80. io.k8s.api.core.v1.PersistentVolumeSpec schema Description PersistentVolumeSpec is the specification of a persistent volume. Type object Schema Property Type Description accessModes array (string) accessModes contains all ways the volume can be mounted. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes awsElasticBlockStore AWSElasticBlockStoreVolumeSource awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore azureDisk AzureDiskVolumeSource azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. azureFile AzureFilePersistentVolumeSource azureFile represents an Azure File Service mount on the host and bind mount to the pod. capacity object (Quantity) capacity is the description of the persistent volume's resources and capacity. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity cephfs CephFSPersistentVolumeSource cephFS represents a Ceph FS mount on the host that shares a pod's lifetime cinder CinderPersistentVolumeSource cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md claimRef ObjectReference claimRef is part of a bi-directional binding between PersistentVolume and PersistentVolumeClaim. Expected to be non-nil when bound. claim.VolumeName is the authoritative bind between PV and PVC. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#binding csi CSIPersistentVolumeSource csi represents storage that is handled by an external CSI driver (Beta feature). fc FCVolumeSource fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. flexVolume FlexPersistentVolumeSource flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. flocker FlockerVolumeSource flocker represents a Flocker volume attached to a kubelet's host machine and exposed to the pod for its usage. This depends on the Flocker control service being running gcePersistentDisk GCEPersistentDiskVolumeSource gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk glusterfs GlusterfsPersistentVolumeSource glusterfs represents a Glusterfs volume that is attached to a host and exposed to the pod. Provisioned by an admin. More info: https://examples.k8s.io/volumes/glusterfs/README.md hostPath HostPathVolumeSource hostPath represents a directory on the host. Provisioned by a developer or tester. This is useful for single-node development and testing only! On-host storage is not supported in any way and WILL NOT WORK in a multi-node cluster. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath iscsi ISCSIPersistentVolumeSource iscsi represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. Provisioned by an admin. local LocalVolumeSource local represents directly-attached storage with node affinity mountOptions array (string) mountOptions is the list of mount options, e.g. ["ro", "soft"]. 
Not validated - mount will simply fail if one is invalid. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#mount-options nfs NFSVolumeSource nfs represents an NFS mount on the host. Provisioned by an admin. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs nodeAffinity VolumeNodeAffinity nodeAffinity defines constraints that limit what nodes this volume can be accessed from. This field influences the scheduling of pods that use this volume. persistentVolumeReclaimPolicy string persistentVolumeReclaimPolicy defines what happens to a persistent volume when released from its claim. Valid options are Retain (default for manually created PersistentVolumes), Delete (default for dynamically provisioned PersistentVolumes), and Recycle (deprecated). Recycle must be supported by the volume plugin underlying this PersistentVolume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#reclaiming Possible enum values: - "Delete" means the volume will be deleted from Kubernetes on release from its claim. The volume plugin must support Deletion. - "Recycle" means the volume will be recycled back into the pool of unbound persistent volumes on release from its claim. The volume plugin must support Recycling. - "Retain" means the volume will be left in its current phase (Released) for manual reclamation by the administrator. The default policy is Retain. photonPersistentDisk PhotonPersistentDiskVolumeSource photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine portworxVolume PortworxVolumeSource portworxVolume represents a portworx volume attached and mounted on kubelets host machine quobyte QuobyteVolumeSource quobyte represents a Quobyte mount on the host that shares a pod's lifetime rbd RBDPersistentVolumeSource rbd represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md scaleIO ScaleIOPersistentVolumeSource scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. storageClassName string storageClassName is the name of StorageClass to which this persistent volume belongs. Empty value means that this volume does not belong to any StorageClass. storageos StorageOSPersistentVolumeSource storageOS represents a StorageOS volume that is attached to the kubelet's host machine and mounted into the pod More info: https://examples.k8s.io/volumes/storageos/README.md volumeMode string volumeMode defines if a volume is intended to be used with a formatted filesystem or to remain in raw block state. Value of Filesystem is implied when not included in spec. Possible enum values: - "Block" means the volume will not be formatted with a filesystem and will remain a raw block device. - "Filesystem" means the volume will be or is formatted with a filesystem. vsphereVolume VsphereVirtualDiskVolumeSource vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine 1.81. io.k8s.api.core.v1.PodList schema Description PodList is a list of Pods. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Pod) List of pods. 
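Returning to the PersistentVolumeSpec schema (1.80) above, the following Go sketch builds a PersistentVolume that exercises several of those fields (capacity, accessModes, persistentVolumeReclaimPolicy, storageClassName, volumeMode, mountOptions, and an NFS volume source). It is illustrative only: the NFS server, export path, and storage class name are hypothetical, and it assumes the k8s.io/api and k8s.io/apimachinery modules are available.

// Sketch of a PersistentVolume built with the Go API types; printing the
// object as JSON shows the field layout described in the table above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsMode := corev1.PersistentVolumeFilesystem
	pv := corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pv"},
		Spec: corev1.PersistentVolumeSpec{
			// capacity and accessModes as described in the schema above
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("5Gi"),
			},
			AccessModes:                   []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeReclaimPolicy: corev1.PersistentVolumeReclaimRetain,
			StorageClassName:              "manual",
			VolumeMode:                    &fsMode,
			MountOptions:                  []string{"ro", "soft"},
			// exactly one volume source may be set; NFS is used in this sketch
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				NFS: &corev1.NFSVolumeSource{Server: "nfs.example.com", Path: "/exports/data"},
			},
		},
	}
	out, _ := json.MarshalIndent(pv, "", "  ")
	fmt.Println(string(out))
}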
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.82. io.k8s.api.core.v1.PodTemplateList schema Description PodTemplateList is a list of PodTemplates. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodTemplate) List of pod templates kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.83. io.k8s.api.core.v1.PodTemplateSpec schema Description PodTemplateSpec describes the data a pod should have when created from a template Type object Schema Property Type Description metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec PodSpec Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.84. io.k8s.api.core.v1.ReplicationControllerList schema Description ReplicationControllerList is a collection of replication controllers. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ReplicationController) List of replication controllers. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.85. io.k8s.api.core.v1.ResourceQuotaList schema Description ResourceQuotaList is a list of ResourceQuota items. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ResourceQuota) Items is a list of ResourceQuota objects. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.86. io.k8s.api.core.v1.ResourceQuotaSpec schema Description ResourceQuotaSpec defines the desired hard limits to enforce for Quota. Type object Schema Property Type Description hard object (Quantity) hard is the set of desired hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ scopeSelector ScopeSelector scopeSelector is also a collection of filters like scopes that must match each object tracked by a quota but expressed using ScopeSelectorOperator in combination with possible values. For a resource to match, both scopes AND scopeSelector (if specified in spec), must be matched. scopes array (string) A collection of filters that must match each object tracked by a quota. If not specified, the quota matches all objects. 1.87. io.k8s.api.core.v1.ResourceQuotaStatus schema Description ResourceQuotaStatus defines the enforced hard limits and observed use. Type object Schema Property Type Description hard object (Quantity) Hard is the set of enforced hard limits for each named resource. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/ used object (Quantity) Used is the current observed total usage of the resource in the namespace. 1.88. io.k8s.api.core.v1.ResourceRequirements schema Description ResourceRequirements describes the compute resource requirements. Type object Schema Property Type Description claims array (ResourceClaim) Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. This field is immutable. It can only be set for containers. limits object (Quantity) Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ requests object (Quantity) Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ 1.89. io.k8s.api.core.v1.Secret schema Description Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data object (string) Data contains the secret data. 
Each key must consist of alphanumeric characters, '-', '_' or '.'. The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. Described in https://tools.ietf.org/html/rfc4648#section-4 immutable boolean Immutable, if set to true, ensures that data stored in the Secret cannot be updated (only object metadata can be modified). If not set to true, the field can be modified at any time. Defaulted to nil. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata stringData object (string) stringData allows specifying non-binary secret data in string form. It is provided as a write-only input field for convenience. All keys and values are merged into the data field on write, overwriting any existing values. The stringData field is never output when reading from the API. type string Used to facilitate programmatic handling of secret data. More info: https://kubernetes.io/docs/concepts/configuration/secret/#secret-types 1.90. io.k8s.api.core.v1.SecretList schema Description SecretList is a list of Secret. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Secret) Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.91. io.k8s.api.core.v1.SecretVolumeSource schema Description Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling. Type object Schema Property Type Description defaultMode integer defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. items array (KeyToPath) items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. 
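For illustration, the Go sketch below builds a pod volume backed by a Secret and selects a single key through items. The secret name, key, and target path are hypothetical, and the k8s.io/api module is assumed to be available.

// Sketch of a SecretVolumeSource that projects one key to an explicit path
// and sets defaultMode and optional, as described in the schema above.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// ptr is a small helper for the pointer-typed fields (requires Go 1.18+).
func ptr[T any](v T) *T { return &v }

func main() {
	vol := corev1.Volume{
		Name: "credentials",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "app-credentials",
				// only the listed key is projected, at the given relative path
				Items:       []corev1.KeyToPath{{Key: "tls.crt", Path: "certs/tls.crt"}},
				DefaultMode: ptr(int32(0o440)),
				Optional:    ptr(false),
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}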
If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. optional boolean optional field specify whether the Secret or its keys must be defined secretName string secretName is the name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret 1.92. io.k8s.api.core.v1.ServiceAccountList schema Description ServiceAccountList is a list of ServiceAccount objects Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceAccount) List of ServiceAccounts. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.93. io.k8s.api.core.v1.ServiceList schema Description ServiceList holds a list of services. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Service) List of services kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.94. io.k8s.api.core.v1.Toleration schema Description The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. Type object Schema Property Type Description effect string Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. 
- "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. operator string Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists" tolerationSeconds integer TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. value string Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. 1.95. io.k8s.api.core.v1.TopologySelectorTerm schema Description A topology selector term represents the result of label queries. A null or empty topology selector term matches no objects. The requirements of them are ANDed. It provides a subset of functionality as NodeSelectorTerm. This is an alpha feature and may change in the future. Type object Schema Property Type Description matchLabelExpressions array (TopologySelectorLabelRequirement) A list of topology selector requirements by labels. 1.96. io.k8s.api.core.v1.TypedLocalObjectReference schema Description TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. Type object Required kind name Schema Property Type Description apiGroup string APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. kind string Kind is the type of resource being referenced name string Name is the name of resource being referenced 1.97. io.k8s.api.discovery.v1.EndpointSliceList schema Description EndpointSliceList represents a list of endpoint slices Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EndpointSlice) items is the list of endpoint slices kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.98. io.k8s.api.events.v1.EventList schema Description EventList is a list of Event objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Event) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.99. io.k8s.api.flowcontrol.v1beta3.FlowSchemaList schema Description FlowSchemaList is a list of FlowSchema objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FlowSchema) items is a list of FlowSchemas. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.100. io.k8s.api.flowcontrol.v1beta3.PriorityLevelConfigurationList schema Description PriorityLevelConfigurationList is a list of PriorityLevelConfiguration objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityLevelConfiguration) items is a list of request-priorities. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.101. io.k8s.api.networking.v1.IngressClassList schema Description IngressClassList is a collection of IngressClasses. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IngressClass) items is the list of IngressClasses. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 1.102. io.k8s.api.networking.v1.IngressList schema Description IngressList is a collection of Ingress. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Ingress) items is the list of Ingress. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.103. io.k8s.api.networking.v1.NetworkPolicyList schema Description NetworkPolicyList is a list of NetworkPolicy objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (NetworkPolicy) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.104. io.k8s.api.node.v1.RuntimeClassList schema Description RuntimeClassList is a list of RuntimeClass objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RuntimeClass) items is a list of schema objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.105. io.k8s.api.policy.v1.PodDisruptionBudgetList schema Description PodDisruptionBudgetList is a collection of PodDisruptionBudgets. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodDisruptionBudget) Items is a list of PodDisruptionBudgets kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.106. io.k8s.api.rbac.v1.AggregationRule schema Description AggregationRule describes how to locate ClusterRoles to aggregate into the ClusterRole Type object Schema Property Type Description clusterRoleSelectors array (LabelSelector) ClusterRoleSelectors holds a list of selectors which will be used to find ClusterRoles and create the rules. If any of the selectors match, then the ClusterRole's permissions will be added 1.107. io.k8s.api.rbac.v1.ClusterRoleBindingList schema Description ClusterRoleBindingList is a collection of ClusterRoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRoleBinding) Items is a list of ClusterRoleBindings kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.108. io.k8s.api.rbac.v1.ClusterRoleList schema Description ClusterRoleList is a collection of ClusterRoles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterRole) Items is a list of ClusterRoles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.109. io.k8s.api.rbac.v1.RoleBindingList schema Description RoleBindingList is a collection of RoleBindings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBinding) Items is a list of RoleBindings kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.110. io.k8s.api.rbac.v1.RoleList schema Description RoleList is a collection of Roles Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Role) Items is a list of Roles kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata. 1.111. io.k8s.api.scheduling.v1.PriorityClassList schema Description PriorityClassList is a collection of priority classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PriorityClass) items is the list of PriorityClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.112. io.k8s.api.storage.v1.CSIDriverList schema Description CSIDriverList is a collection of CSIDriver objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIDriver) items is the list of CSIDriver kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.113. io.k8s.api.storage.v1.CSINodeList schema Description CSINodeList is a collection of CSINode objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSINode) items is the list of CSINode kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.114. io.k8s.api.storage.v1.CSIStorageCapacityList schema Description CSIStorageCapacityList is a collection of CSIStorageCapacity objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSIStorageCapacity) items is the list of CSIStorageCapacity objects. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.115. io.k8s.api.storage.v1.StorageClassList schema Description StorageClassList is a collection of storage classes. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageClass) items is the list of StorageClasses kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.116. io.k8s.api.storage.v1.VolumeAttachmentList schema Description VolumeAttachmentList is a collection of VolumeAttachment objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeAttachment) items is the list of VolumeAttachments kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.117. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.CustomResourceDefinitionList schema Description CustomResourceDefinitionList is a list of CustomResourceDefinition objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CustomResourceDefinition) items list individual CustomResourceDefinition objects kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.118. io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1.JSONSchemaProps schema Description JSONSchemaProps is a JSON-Schema following Specification Draft 4 ( http://json-schema.org/ ). Type object Schema Property Type Description $ref string $schema string additionalItems `` additionalProperties `` allOf array (undefined) anyOf array (undefined) default JSON default is a default value for undefined object fields. Defaulting is a beta feature under the CustomResourceDefaulting feature gate. Defaulting requires spec.preserveUnknownFields to be false. definitions object (undefined) dependencies object (undefined) description string enum array (JSON) example JSON exclusiveMaximum boolean exclusiveMinimum boolean externalDocs ExternalDocumentation format string format is an OpenAPI v3 format string. Unknown formats are ignored. The following formats are validated: - bsonobjectid: a bson object ID, i.e. a 24-character hex string - uri: a URI as parsed by Golang net/url.ParseRequestURI - email: an email address as parsed by Golang net/mail.ParseAddress - hostname: a valid representation for an Internet host name, as defined by RFC 1034, section 3.1 [RFC1034].
- ipv4: an IPv4 IP as parsed by Golang net.ParseIP - ipv6: an IPv6 IP as parsed by Golang net.ParseIP - cidr: a CIDR as parsed by Golang net.ParseCIDR - mac: a MAC address as parsed by Golang net.ParseMAC - uuid: an UUID that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid3: an UUID3 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?3[0-9a-f]{3}-?[0-9a-f]{4}-?[0-9a-f]{12}$ - uuid4: an UUID4 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?4[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - uuid5: an UUID5 that allows uppercase defined by the regex (?i)^[0-9a-f]{8}-?[0-9a-f]{4}-?5[0-9a-f]{3}-?[89ab][0-9a-f]{3}-?[0-9a-f]{12}$ - isbn: an ISBN10 or ISBN13 number string like "0321751043" or "978-0321751041" - isbn10: an ISBN10 number string like "0321751043" - isbn13: an ISBN13 number string like "978-0321751041" - creditcard: a credit card number defined by the regex ^(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|6(?:011|5[0-9][0-9])[0-9]{12}|3[47][0-9]{13}|3(?:0[0-5]|[68][0-9])[0-9]{11}|(?:2131|1800|35\d{3})\d{11})$ with any non digit characters mixed in - ssn: a U.S. social security number following the regex ^\d{3}[- ]?\d{2}[- ]?\d{4}$ - hexcolor: an hexadecimal color code like "#FFFFFF", following the regex ^#?([0-9a-fA-F]{3}|[0-9a-fA-F]{6})$ - rgbcolor: an RGB color code like "rgb(255,255,255)" - byte: base64 encoded binary data - password: any kind of string - date: a date string like "2006-01-02" as defined by full-date in RFC3339 - duration: a duration string like "22 ns" as parsed by Golang time.ParseDuration or compatible with Scala duration format - datetime: a date time string like "2014-12-15T19:30:20.000Z" as defined by date-time in RFC3339. id string items `` maxItems integer maxLength integer maxProperties integer maximum number minItems integer minLength integer minProperties integer minimum number multipleOf number not `` nullable boolean oneOf array (undefined) pattern string patternProperties object (undefined) properties object (undefined) required array (string) title string type string uniqueItems boolean x-kubernetes-embedded-resource boolean x-kubernetes-embedded-resource defines that the value is an embedded Kubernetes runtime.Object, with TypeMeta and ObjectMeta. The type must be object. It is allowed to further restrict the embedded object. kind, apiVersion and metadata are validated automatically. x-kubernetes-preserve-unknown-fields is allowed to be true, but does not have to be if the object is fully specified (up to kind, apiVersion, metadata). x-kubernetes-int-or-string boolean x-kubernetes-int-or-string specifies that this value is either an integer or a string. If this is true, an empty type is allowed and type as child of anyOf is permitted if following one of the following patterns: 1) anyOf: - type: integer - type: string 2) allOf: - anyOf: - type: integer - type: string - ... zero or more x-kubernetes-list-map-keys array (string) x-kubernetes-list-map-keys annotates an array with the x-kubernetes-list-type map by specifying the keys used as the index of the map. This tag MUST only be used on lists that have the "x-kubernetes-list-type" extension set to "map". Also, the values specified for this attribute must be a scalar typed field of the child structure (no nesting is supported). The properties specified must either be required or have a default value, to ensure those properties are present for all list items.
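As an illustrative sketch of x-kubernetes-list-map-keys together with the x-kubernetes-list-type extension described next, the following Go fragment builds a JSONSchemaProps for a list that is merged by key. The field names "name" and "value" are hypothetical, and the k8s.io/apiextensions-apiserver module is assumed to be available.

// Sketch of a schema fragment for a CRD field: an array of objects treated
// as a map keyed by "name", so that server-side apply merges items by key
// instead of replacing the whole list.
package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func ptr[T any](v T) *T { return &v }

func main() {
	listOfNamedValues := apiextensionsv1.JSONSchemaProps{
		Type:         "array",
		XListType:    ptr("map"),       // x-kubernetes-list-type: map
		XListMapKeys: []string{"name"}, // x-kubernetes-list-map-keys: the key must be required below
		Items: &apiextensionsv1.JSONSchemaPropsOrArray{
			Schema: &apiextensionsv1.JSONSchemaProps{
				Type:     "object",
				Required: []string{"name"},
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"name":  {Type: "string"},
					"value": {Type: "string"},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(listOfNamedValues, "", "  ")
	fmt.Println(string(out))
}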
x-kubernetes-list-type string x-kubernetes-list-type annotates an array to further describe its topology. This extension must only be used on lists and may have 3 possible values: 1) atomic : the list is treated as a single entity, like a scalar. Atomic lists will be entirely replaced when updated. This extension may be used on any type of list (struct, scalar, ... ). 2) set : Sets are lists that must not have multiple items with the same value. Each value must be a scalar, an object with x-kubernetes-map-type atomic or an array with x-kubernetes-list-type atomic . 3) map : These lists are like maps in that their elements have a non-index key used to identify them. Order is preserved upon merge. The map tag must only be used on a list with elements of type object. Defaults to atomic for arrays. x-kubernetes-map-type string x-kubernetes-map-type annotates an object to further describe its topology. This extension must only be used when type is object and may have 2 possible values: 1) granular : These maps are actual maps (key-value pairs) and each field is independent from the others (they can each be manipulated by separate actors). This is the default behaviour for all maps. 2) atomic : the map is treated as a single entity, like a scalar. Atomic maps will be entirely replaced when updated. x-kubernetes-preserve-unknown-fields boolean x-kubernetes-preserve-unknown-fields stops the API server decoding step from pruning fields which are not specified in the validation schema. This affects fields recursively, but switches back to normal pruning behaviour if nested properties or additionalProperties are specified in the schema. This can either be true or undefined. False is forbidden. x-kubernetes-validations array (ValidationRule) x-kubernetes-validations describes a list of validation rules written in the CEL expression language. This field is alpha-level. Using this field requires the feature gate CustomResourceValidationExpressions to be enabled. 1.119. io.k8s.apimachinery.pkg.api.resource.Quantity schema Description Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number> ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= "+" | "-" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> | <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei <decimalSI> ::= m | "" | k | M | G | T | P | E <decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in "canonical form". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: No precision is lost - No fractional digits will be emitted - The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative.
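As a small sketch of these parsing and canonicalization rules, assuming the k8s.io/apimachinery module is available:

// Parsing Quantity strings and printing their canonical serialized form.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	q := resource.MustParse("1.5")
	fmt.Println(q.String()) // "1500m": decimal-SI suffix, no precision lost

	g := resource.MustParse("1.5Gi")
	fmt.Println(g.String()) // "1536Mi": binary-SI suffix preserved
	fmt.Println(g.Value())  // 1610612736: rounded integer value in base units
}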
Examples: 1.5 will be serialized as "1500m" - 1.5Gi will be serialized as "1536Mi" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation. Type string 1.120. io.k8s.apimachinery.pkg.apis.meta.v1.Condition schema Description Condition contains details for one aspect of the current state of this API Resource. Type object Required type status lastTransitionTime reason message Schema Property Type Description lastTransitionTime Time lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. 1.121. io.k8s.apimachinery.pkg.apis.meta.v1.DeleteOptions schema Description DeleteOptions may be provided when deleting an API object. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dryRun array (string) When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. 
Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. preconditions Preconditions Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be returned. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. 1.122. io.k8s.apimachinery.pkg.apis.meta.v1.GroupVersionKind schema Description GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling Type object Required group version kind Schema Property Type Description group string kind string version string 1.123. io.k8s.apimachinery.pkg.apis.meta.v1.LabelSelector schema Description A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. Type object Schema Property Type Description matchExpressions array (LabelSelectorRequirement) matchExpressions is a list of label selector requirements. The requirements are ANDed. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 1.124. io.k8s.apimachinery.pkg.apis.meta.v1.ListMeta schema Description ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}. Type object Schema Property Type Description continue string continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message. remainingItemCount integer remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is estimating the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact. 
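The following Go sketch shows how a client might drive chunked list calls using the continue and remainingItemCount fields described above. It assumes a reachable cluster, a kubeconfig at the default path, and the k8s.io/client-go module; the chunk size of 100 is arbitrary.

// Chunked listing of pods: pass the opaque continue token back unmodified
// until the server stops returning one.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	opts := metav1.ListOptions{Limit: 100} // ask the server for chunks of at most 100 pods
	for {
		pods, err := clientset.CoreV1().Pods("").List(context.TODO(), opts)
		if err != nil {
			panic(err)
		}
		fmt.Printf("got %d pods in this chunk\n", len(pods.Items))
		if pods.RemainingItemCount != nil {
			fmt.Printf("approximately %d pods remain\n", *pods.RemainingItemCount)
		}
		if pods.Continue == "" {
			break // last chunk: the server set no continue token
		}
		opts.Continue = pods.Continue // opaque token, passed back unmodified
	}
}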
resourceVersion string String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. 1.125. io.k8s.apimachinery.pkg.apis.meta.v1.MicroTime schema Description MicroTime is version of Time with microsecond level precision. Type string 1.126. io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta schema Description ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create. Type object Schema Property Type Description annotations object (string) Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations creationTimestamp Time CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata deletionGracePeriodSeconds integer Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. deletionTimestamp Time DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata finalizers array (string) Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. 
Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. generateName string GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency generation integer A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. labels object (string) Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels managedFields array (ManagedFieldsEntry) ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object. name string Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#names namespace string Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces ownerReferences array (OwnerReference) List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. resourceVersion string An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. 
May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency selfLink string Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. uid string UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names#uids 1.127. io.k8s.apimachinery.pkg.apis.meta.v1.Status schema Description Status is a return value for calls that don't return other objects. Type object Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources code integer Suggested HTTP return code for this status, 0 if not set. details StatusDetails Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds message string A human-readable description of the status of this operation. metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds reason string A machine-readable description of why this operation is in the "Failure" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it. status string Status of the operation. One of: "Success" or "Failure". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status 1.128. io.k8s.apimachinery.pkg.apis.meta.v1.Time schema Description Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers. Type string 1.129. io.k8s.apimachinery.pkg.apis.meta.v1.WatchEvent schema Description Event represents a single event to a watched resource. Type object Required type object Schema Property Type Description object RawExtension Object is: * If Type is Added or Modified: the new state of the object. * If Type is Deleted: the state of the object immediately before deletion. * If Type is Error: *Status is recommended; other types may make sense depending on context. type string
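The WatchEvent and Status types above describe the payloads delivered on a watch stream. As an illustration only, the following client-go sketch reacts to the Added, Modified, Deleted, and Error event types and reads a few ObjectMeta fields from the delivered objects; the namespace and resource type are arbitrary, and the client is assumed to be constructed as in the earlier sketch.

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/watch"
    "k8s.io/client-go/kubernetes"
)

func watchConfigMaps(ctx context.Context, client kubernetes.Interface) error {
    w, err := client.CoreV1().ConfigMaps("default").Watch(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    defer w.Stop()

    for ev := range w.ResultChan() {
        switch ev.Type {
        case watch.Added, watch.Modified:
            // The object carries the new state, including its ObjectMeta.
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                fmt.Println(ev.Type, cm.Namespace, cm.Name, cm.ResourceVersion)
            }
        case watch.Deleted:
            // The object is the state immediately before deletion.
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                fmt.Println("deleted", cm.Name)
            }
        case watch.Error:
            // For Error events a *Status payload is recommended by the schema.
            if st, ok := ev.Object.(*metav1.Status); ok {
                fmt.Println("watch error:", st.Reason, st.Message)
            }
        }
    }
    return nil
}

A production controller would normally wrap this loop in an informer, which re-lists and re-watches from the last observed resourceVersion; the sketch only shows how the event types map onto the payloads described above.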
1.130. io.k8s.apimachinery.pkg.runtime.RawExtension schema Description RawExtension is used to hold extensions in external versions. To use this, make a field which has RawExtension as its type in your external, versioned struct, and Object in your internal struct. You also need to register your various plugin types. So what happens? Decode first uses json or yaml to unmarshal the serialized data into your external MyAPIObject. That causes the raw JSON to be stored, but not unpacked. The next step is to copy (using pkg/conversion) into the internal struct. The runtime package's DefaultScheme has conversion functions installed which will unpack the JSON stored in RawExtension, turning it into the correct object type, and storing it in the Object. (TODO: In the case where the object is of an unknown type, a runtime.Unknown object will be created and stored.) Type object 1.131. io.k8s.apimachinery.pkg.util.intstr.IntOrString schema Description IntOrString is a type that can hold an int32 or a string. When used in JSON or YAML marshalling and unmarshalling, it produces or consumes the inner type. This allows you to have, for example, a JSON field that can accept a name or number. Type string
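The two-step decode described for RawExtension, and the dual form of IntOrString, can be illustrated with a short Go sketch. The MyAPIObject and PluginA types, their field names, and the sample JSON are hypothetical stand-ins, not part of any published API: the outer unmarshal leaves the nested JSON in Raw, and a second unmarshal (or a scheme conversion) turns it into the concrete type.

package main

import (
    "encoding/json"
    "fmt"

    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// External, versioned struct: the plugin payload stays as raw JSON.
type MyAPIObject struct {
    Kind   string               `json:"kind"`
    Plugin runtime.RawExtension `json:"plugin"`
}

// Concrete plugin type, decoded in a second step once its kind is known.
type PluginA struct {
    Kind   string `json:"kind"`
    Option string `json:"aOption"`
}

func main() {
    data := []byte(`{"kind":"MyAPIObject","plugin":{"kind":"PluginA","aOption":"foo"}}`)

    var obj MyAPIObject
    if err := json.Unmarshal(data, &obj); err != nil {
        panic(err)
    }
    // The nested JSON is stored, not unpacked: obj.Plugin.Raw holds the bytes.
    fmt.Println(string(obj.Plugin.Raw))

    var plugin PluginA
    if err := json.Unmarshal(obj.Plugin.Raw, &plugin); err != nil {
        panic(err)
    }
    fmt.Println(plugin.Option)

    // IntOrString marshals to whichever inner type it holds: a number or a string.
    port := intstr.FromInt(8080)
    name := intstr.FromString("https")
    a, _ := json.Marshal(port)
    b, _ := json.Marshal(name)
    fmt.Println(string(a), string(b)) // 8080 "https"
}

In a real API server the second step is performed by the scheme's conversion functions rather than by hand, as the RawExtension description above notes.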
1.132. io.k8s.kube-aggregator.pkg.apis.apiregistration.v1.APIServiceList schema Description APIServiceList is a list of APIService objects. Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIService) Items is the list of APIService kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 1.133. io.k8s.migration.v1alpha1.StorageStateList schema Description StorageStateList is a list of StorageState Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageState) List of storagestates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.134. io.k8s.migration.v1alpha1.StorageVersionMigrationList schema Description StorageVersionMigrationList is a list of StorageVersionMigration Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (StorageVersionMigration) List of storageversionmigrations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.135. io.k8s.storage.snapshot.v1.VolumeSnapshotClassList schema Description VolumeSnapshotClassList is a list of VolumeSnapshotClass Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshotClass) List of volumesnapshotclasses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.136. io.k8s.storage.snapshot.v1.VolumeSnapshotContentList schema Description VolumeSnapshotContentList is a list of VolumeSnapshotContent Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshotContent) List of volumesnapshotcontents. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.137. io.k8s.storage.snapshot.v1.VolumeSnapshotList schema Description VolumeSnapshotList is a list of VolumeSnapshot Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (VolumeSnapshot) List of volumesnapshots. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.138. io.metal3.v1alpha1.BareMetalHostList schema Description BareMetalHostList is a list of BareMetalHost Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BareMetalHost) List of baremetalhosts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.139. io.metal3.v1alpha1.BMCEventSubscriptionList schema Description BMCEventSubscriptionList is a list of BMCEventSubscription Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (BMCEventSubscription) List of bmceventsubscriptions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.140. io.metal3.v1alpha1.FirmwareSchemaList schema Description FirmwareSchemaList is a list of FirmwareSchema Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FirmwareSchema) List of firmwareschemas. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.141. io.metal3.v1alpha1.HardwareDataList schema Description HardwareDataList is a list of HardwareData Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HardwareData) List of hardwaredata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.142. io.metal3.v1alpha1.HostFirmwareSettingsList schema Description HostFirmwareSettingsList is a list of HostFirmwareSettings Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HostFirmwareSettings) List of hostfirmwaresettings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.143. io.metal3.v1alpha1.PreprovisioningImageList schema Description PreprovisioningImageList is a list of PreprovisioningImage Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PreprovisioningImage) List of preprovisioningimages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. 
Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.144. io.metal3.v1alpha1.ProvisioningList schema Description ProvisioningList is a list of Provisioning Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Provisioning) List of provisionings. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.145. io.openshift.apiserver.v1.APIRequestCountList schema Description APIRequestCountList is a list of APIRequestCount Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIRequestCount) List of apirequestcounts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.146. io.openshift.authorization.v1.RoleBindingRestrictionList schema Description RoleBindingRestrictionList is a list of RoleBindingRestriction Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (RoleBindingRestriction) List of rolebindingrestrictions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.147. io.openshift.autoscaling.v1.ClusterAutoscalerList schema Description ClusterAutoscalerList is a list of ClusterAutoscaler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterAutoscaler) List of clusterautoscalers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.148. io.openshift.autoscaling.v1beta1.MachineAutoscalerList schema Description MachineAutoscalerList is a list of MachineAutoscaler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineAutoscaler) List of machineautoscalers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.149. io.openshift.cloudcredential.v1.CredentialsRequestList schema Description CredentialsRequestList is a list of CredentialsRequest Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CredentialsRequest) List of credentialsrequests. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.150. 
io.openshift.config.v1.APIServerList schema Description APIServerList is a list of APIServer Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (APIServer) List of apiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.151. io.openshift.config.v1.AuthenticationList schema Description AuthenticationList is a list of Authentication Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Authentication) List of authentications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.152. io.openshift.config.v1.BuildList schema Description BuildList is a list of Build Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Build) List of builds. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.153. io.openshift.config.v1.ClusterOperatorList schema Description ClusterOperatorList is a list of ClusterOperator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterOperator) List of clusteroperators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.154. io.openshift.config.v1.ClusterVersionList schema Description ClusterVersionList is a list of ClusterVersion Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterVersion) List of clusterversions. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.155. io.openshift.config.v1.ConsoleList schema Description ConsoleList is a list of Console Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Console) List of consoles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.156. io.openshift.config.v1.DNSList schema Description DNSList is a list of DNS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNS) List of dnses. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.157. io.openshift.config.v1.FeatureGateList schema Description FeatureGateList is a list of FeatureGate Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (FeatureGate) List of featuregates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.158. io.openshift.config.v1.ImageContentPolicyList schema Description ImageContentPolicyList is a list of ImageContentPolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageContentPolicy) List of imagecontentpolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.159. io.openshift.config.v1.ImageDigestMirrorSetList schema Description ImageDigestMirrorSetList is a list of ImageDigestMirrorSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageDigestMirrorSet) List of imagedigestmirrorsets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. 
Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.160. io.openshift.config.v1.ImageList schema Description ImageList is a list of Image Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Image) List of images. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.161. io.openshift.config.v1.ImageTagMirrorSetList schema Description ImageTagMirrorSetList is a list of ImageTagMirrorSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageTagMirrorSet) List of imagetagmirrorsets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.162. io.openshift.config.v1.InfrastructureList schema Description InfrastructureList is a list of Infrastructure Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Infrastructure) List of infrastructures. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.163. 
io.openshift.config.v1.IngressList schema Description IngressList is a list of Ingress Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Ingress) List of ingresses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.164. io.openshift.config.v1.NetworkList schema Description NetworkList is a list of Network Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Network) List of networks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.165. io.openshift.config.v1.NodeList schema Description NodeList is a list of Node Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Node) List of nodes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.166. io.openshift.config.v1.OAuthList schema Description OAuthList is a list of OAuth Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OAuth) List of oauths. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.167. io.openshift.config.v1.OperatorHubList schema Description OperatorHubList is a list of OperatorHub Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorHub) List of operatorhubs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.168. io.openshift.config.v1.ProjectList schema Description ProjectList is a list of Project Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Project) List of projects. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.169. io.openshift.config.v1.ProxyList schema Description ProxyList is a list of Proxy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Proxy) List of proxies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.170. io.openshift.config.v1.SchedulerList schema Description SchedulerList is a list of Scheduler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Scheduler) List of schedulers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.171. io.openshift.console.v1.ConsoleCLIDownloadList schema Description ConsoleCLIDownloadList is a list of ConsoleCLIDownload Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleCLIDownload) List of consoleclidownloads. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.172. io.openshift.console.v1.ConsoleExternalLogLinkList schema Description ConsoleExternalLogLinkList is a list of ConsoleExternalLogLink Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleExternalLogLink) List of consoleexternalloglinks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.173. io.openshift.console.v1.ConsoleLinkList schema Description ConsoleLinkList is a list of ConsoleLink Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleLink) List of consolelinks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.174. io.openshift.console.v1.ConsoleNotificationList schema Description ConsoleNotificationList is a list of ConsoleNotification Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleNotification) List of consolenotifications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.175. io.openshift.console.v1.ConsolePluginList schema Description ConsolePluginList is a list of ConsolePlugin Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsolePlugin) List of consoleplugins. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.176. 
io.openshift.console.v1.ConsoleQuickStartList schema Description ConsoleQuickStartList is a list of ConsoleQuickStart Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleQuickStart) List of consolequickstarts. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.177. io.openshift.console.v1.ConsoleSampleList schema Description ConsoleSampleList is a list of ConsoleSample Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleSample) List of consolesamples. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.178. io.openshift.console.v1.ConsoleYAMLSampleList schema Description ConsoleYAMLSampleList is a list of ConsoleYAMLSample Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ConsoleYAMLSample) List of consoleyamlsamples. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.179. 
io.openshift.helm.v1beta1.HelmChartRepositoryList schema Description HelmChartRepositoryList is a list of HelmChartRepository Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (HelmChartRepository) List of helmchartrepositories. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.180. io.openshift.helm.v1beta1.ProjectHelmChartRepositoryList schema Description ProjectHelmChartRepositoryList is a list of ProjectHelmChartRepository Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ProjectHelmChartRepository) List of projecthelmchartrepositories. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.181. io.openshift.machine.v1.ControlPlaneMachineSetList schema Description ControlPlaneMachineSetList is a list of ControlPlaneMachineSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControlPlaneMachineSet) List of controlplanemachinesets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.182. 
io.openshift.machine.v1beta1.MachineHealthCheckList schema Description MachineHealthCheckList is a list of MachineHealthCheck Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineHealthCheck) List of machinehealthchecks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.183. io.openshift.machine.v1beta1.MachineList schema Description MachineList is a list of Machine Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Machine) List of machines. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.184. io.openshift.machine.v1beta1.MachineSetList schema Description MachineSetList is a list of MachineSet Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineSet) List of machinesets. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.185. io.openshift.machineconfiguration.v1.ContainerRuntimeConfigList schema Description ContainerRuntimeConfigList is a list of ContainerRuntimeConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ContainerRuntimeConfig) List of containerruntimeconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.186. io.openshift.machineconfiguration.v1.ControllerConfigList schema Description ControllerConfigList is a list of ControllerConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ControllerConfig) List of controllerconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.187. io.openshift.machineconfiguration.v1.KubeletConfigList schema Description KubeletConfigList is a list of KubeletConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeletConfig) List of kubeletconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.188. io.openshift.machineconfiguration.v1.MachineConfigList schema Description MachineConfigList is a list of MachineConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfig) List of machineconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.189. io.openshift.machineconfiguration.v1.MachineConfigPoolList schema Description MachineConfigPoolList is a list of MachineConfigPool Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfigPool) List of machineconfigpools. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.190. io.openshift.machineconfiguration.v1alpha1.MachineConfigNodeList schema Description MachineConfigNodeList is a list of MachineConfigNode Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfigNode) List of machineconfignodes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.191. io.openshift.monitoring.v1.AlertingRuleList schema Description AlertingRuleList is a list of AlertingRule Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertingRule) List of alertingrules. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.192. io.openshift.monitoring.v1.AlertRelabelConfigList schema Description AlertRelabelConfigList is a list of AlertRelabelConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AlertRelabelConfig) List of alertrelabelconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.193. io.openshift.network.cloud.v1.CloudPrivateIPConfigList schema Description CloudPrivateIPConfigList is a list of CloudPrivateIPConfig Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CloudPrivateIPConfig) List of cloudprivateipconfigs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.194. io.openshift.operator.controlplane.v1alpha1.PodNetworkConnectivityCheckList schema Description PodNetworkConnectivityCheckList is a list of PodNetworkConnectivityCheck Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PodNetworkConnectivityCheck) List of podnetworkconnectivitychecks. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.195. io.openshift.operator.imageregistry.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.196. io.openshift.operator.imageregistry.v1.ImagePrunerList schema Description ImagePrunerList is a list of ImagePruner Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImagePruner) List of imagepruners. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.197. io.openshift.operator.ingress.v1.DNSRecordList schema Description DNSRecordList is a list of DNSRecord Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNSRecord) List of dnsrecords. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.198. io.openshift.operator.network.v1.EgressRouterList schema Description EgressRouterList is a list of EgressRouter Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressRouter) List of egressrouters. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.199. io.openshift.operator.network.v1.OperatorPKIList schema Description OperatorPKIList is a list of OperatorPKI Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OperatorPKI) List of operatorpkis. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.200. io.openshift.operator.samples.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.201. 
io.openshift.operator.v1.AuthenticationList schema Description AuthenticationList is a list of Authentication Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Authentication) List of authentications. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.202. io.openshift.operator.v1.CloudCredentialList schema Description CloudCredentialList is a list of CloudCredential Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CloudCredential) List of cloudcredentials. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.203. io.openshift.operator.v1.ClusterCSIDriverList schema Description ClusterCSIDriverList is a list of ClusterCSIDriver Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterCSIDriver) List of clustercsidrivers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.204. io.openshift.operator.v1.ConfigList schema Description ConfigList is a list of Config Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Config) List of configs. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.205. io.openshift.operator.v1.ConsoleList schema Description ConsoleList is a list of Console Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Console) List of consoles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.206. io.openshift.operator.v1.CSISnapshotControllerList schema Description CSISnapshotControllerList is a list of CSISnapshotController Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (CSISnapshotController) List of csisnapshotcontrollers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.207. io.openshift.operator.v1.DNSList schema Description DNSList is a list of DNS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (DNS) List of dnses. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.208. io.openshift.operator.v1.EtcdList schema Description EtcdList is a list of Etcd Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Etcd) List of etcds. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.209. io.openshift.operator.v1.IngressControllerList schema Description IngressControllerList is a list of IngressController Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (IngressController) List of ingresscontrollers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.210. io.openshift.operator.v1.InsightsOperatorList schema Description InsightsOperatorList is a list of InsightsOperator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (InsightsOperator) List of insightsoperators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.211. io.openshift.operator.v1.KubeAPIServerList schema Description KubeAPIServerList is a list of KubeAPIServer Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeAPIServer) List of kubeapiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.212. io.openshift.operator.v1.KubeControllerManagerList schema Description KubeControllerManagerList is a list of KubeControllerManager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeControllerManager) List of kubecontrollermanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.213. io.openshift.operator.v1.KubeSchedulerList schema Description KubeSchedulerList is a list of KubeScheduler Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeScheduler) List of kubeschedulers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.214. 
io.openshift.operator.v1.KubeStorageVersionMigratorList schema Description KubeStorageVersionMigratorList is a list of KubeStorageVersionMigrator Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (KubeStorageVersionMigrator) List of kubestorageversionmigrators. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.215. io.openshift.operator.v1.MachineConfigurationList schema Description MachineConfigurationList is a list of MachineConfiguration Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (MachineConfiguration) List of machineconfigurations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.216. io.openshift.operator.v1.NetworkList schema Description NetworkList is a list of Network Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Network) List of networks. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.217. 
io.openshift.operator.v1.OpenShiftAPIServerList schema Description OpenShiftAPIServerList is a list of OpenShiftAPIServer Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OpenShiftAPIServer) List of openshiftapiservers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.218. io.openshift.operator.v1.OpenShiftControllerManagerList schema Description OpenShiftControllerManagerList is a list of OpenShiftControllerManager Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (OpenShiftControllerManager) List of openshiftcontrollermanagers. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.219. io.openshift.operator.v1.ServiceCAList schema Description ServiceCAList is a list of ServiceCA Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ServiceCA) List of servicecas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.220. 
io.openshift.operator.v1.StorageList schema Description StorageList is a list of Storage Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Storage) List of storages. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.221. io.openshift.operator.v1alpha1.ImageContentSourcePolicyList schema Description ImageContentSourcePolicyList is a list of ImageContentSourcePolicy Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ImageContentSourcePolicy) List of imagecontentsourcepolicies. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.222. io.openshift.performance.v2.PerformanceProfileList schema Description PerformanceProfileList is a list of PerformanceProfile Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (PerformanceProfile) List of performanceprofiles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.223. 
io.openshift.quota.v1.ClusterResourceQuotaList schema Description ClusterResourceQuotaList is a list of ClusterResourceQuota Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (ClusterResourceQuota) List of clusterresourcequotas. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.224. io.openshift.security.v1.SecurityContextConstraintsList schema Description SecurityContextConstraintsList is a list of SecurityContextConstraints Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (SecurityContextConstraints) List of securitycontextconstraints. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.225. io.openshift.tuned.v1.ProfileList schema Description ProfileList is a list of Profile Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Profile) List of profiles. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.226. io.openshift.tuned.v1.TunedList schema Description TunedList is a list of Tuned Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Tuned) List of tuneds. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.227. io.x-k8s.cluster.infrastructure.v1beta1.Metal3RemediationList schema Description Metal3RemediationList is a list of Metal3Remediation Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Metal3Remediation) List of metal3remediations. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.228. io.x-k8s.cluster.infrastructure.v1beta1.Metal3RemediationTemplateList schema Description Metal3RemediationTemplateList is a list of Metal3RemediationTemplate Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (Metal3RemediationTemplate) List of metal3remediationtemplates. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.229. org.ovn.k8s.v1.AdminPolicyBasedExternalRouteList schema Description AdminPolicyBasedExternalRouteList is a list of AdminPolicyBasedExternalRoute Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (AdminPolicyBasedExternalRoute) List of adminpolicybasedexternalroutes. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.230. org.ovn.k8s.v1.EgressFirewallList schema Description EgressFirewallList is a list of EgressFirewall Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressFirewall) List of egressfirewalls. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.231. org.ovn.k8s.v1.EgressIPList schema Description EgressIPList is a list of EgressIP Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressIP) List of egressips. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.232. org.ovn.k8s.v1.EgressQoSList schema Description EgressQoSList is a list of EgressQoS Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressQoS) List of egressqoses. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. 
Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds 1.233. org.ovn.k8s.v1.EgressServiceList schema Description EgressServiceList is a list of EgressService Type object Required items Schema Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources items array (EgressService) List of egressservices. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ListMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
[ "<quantity> ::= <signedNumber><suffix>", "(Note that <suffix> may be empty, from the \"\" case in <decimalSI>.)", "(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)", "(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)", "type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.Object `json:\"myPlugin\"` }", "type PluginA struct { AOption string `json:\"aOption\"` }", "type MyAPIObject struct { runtime.TypeMeta `json:\",inline\"` MyPlugin runtime.RawExtension `json:\"myPlugin\"` }", "type PluginA struct { AOption string `json:\"aOption\"` }", "{ \"kind\":\"MyAPIObject\", \"apiVersion\":\"v1\", \"myPlugin\": { \"kind\":\"PluginA\", \"aOption\":\"foo\", }, }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/common_object_reference/api-object-reference
Chapter 12. Measuring scheduling latency using rtla-osnoise in RHEL for Real Time
Chapter 12. Measuring scheduling latency using rtla-osnoise in RHEL for Real Time An ultra-low-latency environment is optimized to process high volumes of data packets with a low tolerance for delay. Providing exclusive resources to applications, including the CPU, is a prevalent practice in ultra-low-latency environments. For example, for high-performance network processing in network functions virtualization (NFV) applications, a single application is given dedicated CPU power to run its tasks continuously. The Linux kernel includes the real-time analysis ( rtla ) tool, which provides an interface for the operating system noise ( osnoise ) tracer. Operating system noise is the interference that occurs in an application as a result of activities inside the operating system. Linux systems can experience noise due to: Non maskable interrupts (NMIs) Interrupt requests (IRQs) Soft interrupt requests (SoftIRQs) Other system thread activity Hardware-related jobs, such as non maskable high priority system management interrupts (SMIs) 12.1. The rtla-osnoise tracer The Linux kernel includes the real-time analysis ( rtla ) tool, which provides an interface for the operating system noise ( osnoise ) tracer. The rtla-osnoise tracer creates a thread that runs periodically for a specified period. At the start of a period, the thread disables interrupts, starts sampling, and captures the time in a loop. The rtla-osnoise tracer provides the following capabilities: Measure how much operating system noise a CPU receives. Characterize the type of operating system noise occurring in the CPU. Print optimized trace reports that help to define the root cause of unexpected results. Save an interference counter for each interference source. The interference counters for non maskable interrupts (NMIs), interrupt requests (IRQs), software interrupt requests (SoftIRQs), and threads increase when the tool detects the entry events for these interferences. At the end of the period, the rtla-osnoise tracer prints a run report with the following information about the noise sources: The total amount of noise. The maximum amount of noise. The percentage of CPU that is allocated to the thread. The counters for the noise sources. 12.2. Configuring the rtla-osnoise tracer to measure scheduling latency You can configure the rtla-osnoise tracer by adding osnoise to the current_tracer file of the tracing system. The current_tracer file is generally mounted in the /sys/kernel/tracing/ directory. The rtla-osnoise tracer measures the interrupt requests (IRQs) and saves the trace output for analysis when a thread latency is more than 20 microseconds for a single noise occurrence. Procedure List the current tracer: The no operations ( nop ) tracer is the default. Add the osnoise tracer to the current_tracer file of the tracing system: Generate the tracing output: 12.3. The rtla-osnoise options for configuration The configuration options for the rtla-osnoise tracer are available in the /sys/kernel/tracing/ directory. Configuration options for rtla-osnoise osnoise/cpus Configures the CPUs for the osnoise thread to run on. osnoise/period_us Configures the period of an osnoise thread run. osnoise/runtime_us Configures the run duration of an osnoise thread. osnoise/stop_tracing_us Stops the system tracing if a single noise is more than the configured value. Setting 0 disables this option. osnoise/stop_tracing_total_us Stops the system tracing if the total noise is more than the configured value. Setting 0 disables this option. 
tracing_thresh Sets the minimum delta between two time() call reads to be considered noise, in microseconds. When set to 0 , tracing_thresh uses the default value, which is 5 microseconds. 12.4. The rtla-osnoise tracepoints The rtla-osnoise tracer includes a set of tracepoints to identify the source of the operating system noise ( osnoise ). Trace points for rtla-osnoise osnoise:sample_threshold Displays noise when the noise is more than the configured threshold ( tolerance_ns ). osnoise:nmi_noise Displays noise and the noise duration from non maskable interrupts (NMIs). osnoise:irq_noise Displays noise and the noise duration from interrupt requests (IRQs). osnoise:softirq_noise Displays noise and the noise duration from soft interrupt requests (SoftIRQs). osnoise:thread_noise Displays noise and the noise duration from a thread. 12.5. The rtla-osnoise tracer options The osnoise/options file includes a set of on and off configuration options for the rtla-osnoise tracer. Options for rtla-osnoise DEFAULTS Resets the options to the default value. OSNOISE_WORKLOAD Stops the osnoise workload dispatch. PANIC_ON_STOP Sets the panic() call if the tracer stops. This option captures a vmcore dump file. OSNOISE_PREEMPT_DISABLE Disables preemption for osnoise workloads, which allows only interrupt requests (IRQs) and hardware-related noise. OSNOISE_IRQ_DISABLE Disables interrupt requests (IRQs) for osnoise workloads, which allows only non maskable interrupts (NMIs) and hardware-related noise. 12.6. Measuring operating system noise with the rtla-osnoise-top tracer The rtla osnoise-top tracer measures and prints a periodic summary from the osnoise tracer along with the information about the occurrence counters of the interference sources. Procedure Measure the system noise: The command output displays a periodic summary with information about the real-time priority, the CPUs assigned to run the thread, and the period of the run in microseconds. 12.7. The rtla-osnoise-top tracer options By using the rtla osnoise top --help command, you can view usage help for the available options of the rtla-osnoise-top tracer. Options for rtla-osnoise-top -a, --auto us Sets the automatic trace mode. This mode sets some commonly used options while debugging the system. It is equivalent to using -s us -T 1 and -t . -p, --period us Sets the osnoise tracer duration period in microseconds. -r, --runtime us Sets the osnoise tracer runtime in microseconds. -s, --stop us Stops the trace if a single sample is more than the argument in microseconds. With -t , the command saves the trace to the output. -S, --stop-total us Stops the trace if the total sample is more than the argument in microseconds. With -t , the command saves a trace to the output. -T, --threshold us Specifies the minimum delta between two time reads to be considered noise. The default threshold is 5 us. -q, --quiet Prints only a summary at the end of a run. -c, --cpus cpu-list Sets the osnoise tracer to run the sample threads on the assigned cpu-list . -d, --duration time[s|m|h|d] Sets the duration of a run. -D, --debug Prints debug information. -t, --trace[=file] Saves the stopped trace to the [file|osnoise_trace.txt] file. -e, --event sys:event Enables an event in the trace ( -t ) session. The argument can be a specific event, for example -e sched:sched_switch , or all events of a system group, such as -e sched . --filter <filter> Filters the -e sys:event system event with a filter expression. 
--trigger <trigger> Enables a trace event trigger for the -e sys:event system event. -P, --priority o:prio|r:prio|f:prio|d:runtime:period Sets the scheduling parameters for the osnoise tracer threads. -h, --help Prints the help menu.
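The options above can be combined in a single capture run. The following is a minimal sketch that is not part of the original chapter; the CPU list, thresholds, and duration are example values chosen for illustration and should be adjusted for your own system:
# Run the osnoise sample threads on CPUs 1-3, treat deltas above 1 us as noise,
# stop and save a trace if a single sample exceeds 40 us or the total exceeds 200 us,
# and print only a final summary after at most 10 minutes.
rtla osnoise top -c 1-3 -T 1 -s 40 -S 200 -t -d 10m -q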
[ "cat /sys/kernel/tracing/current_tracer nop", "cd /sys/kernel/tracing/ echo osnoise > current_tracer", "cat trace tracer: osnoise", "rtla osnoise top -P F:1 -c 0-3 -r 900000 -d 1M -q" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/measuring-scheduling-latency-using-rtla-osnoise-in-rhel-for-real-time_optimizing-rhel9-for-real-time-for-low-latency-operation
Chapter 3. Tasks
Chapter 3. Tasks 3.1. Register Repository on New Server 3.1.1. Register Repository on a New Server You can register a repository on a new server through the ModeShape view. Open ModeShape View To open the ModeShape view, navigate to Window -> Show View -> Other. From the Show View dialog, select the ModeShape folder followed by the ModeShape view and click Open . Figure 3.1. Selecting the ModeShape view Add ModeShape Repository To add a ModeShape repository, click the Create a new server icon that appears in the ModeShape view. This view is located in the lower section of your interface, along with other views such as Servers and Console. Figure 3.2. Adding a new server for ModeShape repositories The New Server dialog will appear. Enter the URL of the server to connect to and your authentication information in the New Server dialog. You can test your connection to the server by clicking the Test button. Figure 3.3. The New Server dialog Click the Finish button to add the server to the ModeShape view. Note If a connection cannot be established when you test it, the ModeShape server can still be created. Use the Reconnect button on the ModeShape View's toolbar to try to connect again at a later time. Result Once the server with the ModeShape repository has been added, three new options become available within the ModeShape view. 3.1.2. ModeShape Repository Options After the repository is added to the new server as described in Section 3.1.1, "Register Repository on a New Server" , three new options appear within the ModeShape view. These options allow you to edit server properties, delete a server from the server registry, and reconnect to the selected server. To perform one of these actions, either right-click on a server and select from the presented menu of actions, or use the buttons beside the Create a new server icon. Note It is possible for a ModeShape server instance to have numerous ModeShape repositories stored on it. Once you have registered a connection to the server, you will have access to all ModeShape repositories on the server. You do not need to register a new connection for each repository on the same server.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/chap-tasks
1.4. A Look at the Token Management System (TMS)
1.4. A Look at the Token Management System (TMS) Certificate System creates, manages, renews, and revokes certificates, and it also archives and recovers keys. For organizations that use smart cards, the Certificate System has a token management system - a collection of subsystems with established relationships - that generates keys and requests and receives certificates to be used for smart cards. For information on this topic, see the following sections in the Red Hat Certificate System Planning, Installation, and Deployment Guide : Working with Smart Cards (TMS) Using Smart Cards
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/overview-tms
Chapter 25. Using control groups version 1 with systemd
Chapter 25. Using control groups version 1 with systemd You can manage cgroups with the systemd system and service manager and the utilities they provide. This is also the preferred way of managing cgroups. 25.1. Role of systemd in control groups version 1 RHEL 8 moves the resource management settings from the process level to the application level by binding the system of cgroup hierarchies with the systemd unit tree. Therefore, you can manage the system resources with the systemctl command, or by modifying the systemd unit files. By default, the systemd system and service manager uses the slice , scope , and service units to organize and structure processes in the control groups. The systemctl command can further modify this structure by creating custom slices . systemd also automatically mounts hierarchies for important kernel resource controllers in the /sys/fs/cgroup/ directory. Three systemd unit types are used for resource control: Service - A process or a group of processes that systemd started according to a unit configuration file. Services encapsulate the specified processes to be started and stopped as one set. Services are named in the following way: Scope - A group of externally created processes. Scopes encapsulate processes that are started and stopped by arbitrary processes through the fork() function and then registered by systemd at runtime. For example, user sessions, containers, and virtual machines are treated as scopes. Scopes are named as follows: Slice - A group of hierarchically organized units. Slices organize a hierarchy in which scopes and services are placed. The actual processes are included in scopes or in services. Every name of a slice unit corresponds to the path to a location in the hierarchy. The dash ("-") character acts as a separator of the path components of a slice from the -.slice root slice. In the following example, services and scopes that contain processes are placed in slices that do not have processes of their own: parent-name.slice is a sub-slice of parent.slice , which is a sub-slice of the -.slice root slice. parent-name.slice can have its own sub-slice named parent-name-name2.slice , and so on. The service , the scope , and the slice units directly map to objects in the control group hierarchy. When these units are activated, they map directly to control group paths built from the unit names. Example of a control group hierarchy The services and scopes containing processes are placed in slices that do not have processes of their own. Additional resources systemd.resource-control(5) , cgroups(7) , fork() , fork(2) manual pages 25.2. Creating transient control groups Transient cgroups set limits on resources consumed by a unit (service or scope) during its runtime. Procedure To create a transient control group, use the systemd-run command in the following format: This command creates and starts a transient service or a scope unit and runs a custom command in such a unit. The --unit=<name> option gives a name to the unit. If --unit is not specified, the name is generated automatically. The --slice=< name >.slice option makes your service or scope unit a member of a specified slice. Replace < name >.slice with the name of an existing slice (as shown in the output of systemctl -t slice ), or create a new slice by passing a unique name. By default, services and scopes are created as members of the system.slice . Replace < command > with the command you want to run in the service or the scope unit. 
The following message is displayed to confirm that you created and started the service or the scope successfully: Optional : Keep the unit running after its processes finish to collect runtime information: The command creates and starts a transient service unit and runs a custom command in the unit. The --remain-after-exit option ensures that the service keeps running after its processes have finished. Additional resources The systemd-run(1) manual page 25.3. Creating persistent control groups To assign a persistent control group to a service, you need to edit its unit configuration file. The configuration is preserved after a system reboot, so it can be used to manage services that are started automatically. Procedure To create a persistent control group, enter: This command automatically creates a unit configuration file in the /usr/lib/systemd/system/ directory and, by default, assigns < name >.service to the system.slice unit. Additional resources systemd-run(1) manual page 25.4. Configuring memory resource control settings on the command-line Executing commands in the command-line interface is one of the ways to set limits, prioritize, or control access to hardware resources for groups of processes. Procedure To limit the memory usage of a service, run the following: The command instantly assigns the memory limit of 1,500 KB to processes executed in the control group that the example.service service belongs to. The MemoryMax parameter, in this configuration variant, is defined in the /etc/systemd/system.control/example.service.d/50-MemoryMax.conf file and controls the value of the /sys/fs/cgroup/memory/system.slice/example.service/memory.limit_in_bytes file. Optionally, to temporarily limit the memory usage of a service, run: The command instantly assigns the memory limit to the example.service service. The MemoryMax parameter is defined until the reboot in the /run/systemd/system.control/example.service.d/50-MemoryMax.conf file. With a reboot, the whole /run/systemd/system.control/ directory and MemoryMax are removed. Note The 50-MemoryMax.conf file stores the memory limit as a multiple of 4096 bytes - one kernel page size specific for AMD64 and Intel 64. The actual number of bytes depends on the CPU architecture. Additional resources systemd.resource-control(5) and cgroups(7) manual pages Role of systemd in control groups 25.5. Configuring memory resource control settings with unit files Each persistent unit is supervised by the systemd system and service manager, and has a unit configuration file in the /usr/lib/systemd/system/ directory. To change the resource control settings of a persistent unit, modify its unit configuration file either manually in a text editor or from the command-line interface. Manually modifying unit files is one of the ways to set limits, prioritize, or control access to hardware resources for groups of processes. Procedure To limit the memory usage of a service, modify the /usr/lib/systemd/system/example.service file as follows: This configuration places a limit on the maximum memory consumption of processes executed in the control group that example.service is a part of. Note Use suffixes K, M, G, or T to identify Kilobyte, Megabyte, Gigabyte, or Terabyte as a unit of measurement. Reload all unit configuration files: Restart the service: Reboot the system. Verification Check that the changes took effect: The memory consumption was limited to approximately 1,500 KB. 
Note The memory.limit_in_bytes file stores the memory limit as a multiple of 4096 bytes - one kernel page size specific for AMD64 and Intel 64. The actual number of bytes depends on a CPU architecture. Additional resources systemd.resource-control(5) , cgroups(7) manual pages Managing system services with systemctl in RHEL 25.6. Removing transient control groups You can use the systemd system and service manager to remove transient control groups ( cgroups ) if you no longer need to limit, prioritize, or control access to hardware resources for groups of processes. Transient cgroups are automatically released when all the processes that a service or a scope unit contains finish. Procedure To stop the service unit with all its processes, enter: To terminate one or more of the unit processes, enter: The command uses the --kill-who option to select process(es) from the control group you want to terminate. To kill multiple processes at the same time, pass a comma-separated list of PIDs. The --signal option determines the type of POSIX signal to be sent to the specified processes. The default signal is SIGTERM . Additional resources What are control groups What are kernel resource controllers systemd.resource-control(5) and cgroups(7) man pages on your system Role of systemd in control groups version 1 Managing systemd in RHEL 25.7. Removing persistent control groups You can use the systemd system and service manager to remove persistent control groups ( cgroups ) if you no longer need to limit, prioritize, or control access to hardware resources for groups of processes. Persistent cgroups are released when a service or a scope unit is stopped or disabled and its configuration file is deleted. Procedure Stop the service unit: Disable the service unit: Remove the relevant unit configuration file: Reload all unit configuration files so that changes take effect: Additional resources systemd.resource-control(5) , cgroups(7) , and systemd.kill(5) manual pages 25.8. Listing systemd units Use the systemd system and service manager to list its units. Procedure List all active units on the system with the systemctl utility. The terminal returns an output similar to the following example: UNIT A name of a unit that also reflects the unit position in a control group hierarchy. The units relevant for resource control are a slice , a scope , and a service . LOAD Indicates whether the unit configuration file was properly loaded. If the unit file failed to load, the field provides the state error instead of loaded . Other unit load states are: stub , merged , and masked . ACTIVE The high-level unit activation state, which is a generalization of SUB . SUB The low-level unit activation state. The range of possible values depends on the unit type. DESCRIPTION The description of the unit content and functionality. List all active and inactive units: Limit the amount of information in the output: The --type option requires a comma-separated list of unit types such as a service and a slice , or unit load states such as loaded and masked . Additional resources Managing system services with systemctl in RHEL The systemd.resource-control(5) , systemd.exec(5) manual pages 25.9. Viewing systemd cgroups hierarchy Display control groups ( cgroups ) hierarchy and processes running in specific cgroups . Procedure Display the whole cgroups hierarchy on your system with the systemd-cgls command. The example output returns the entire cgroups hierarchy, where the highest level is formed by slices . 
Display the cgroups hierarchy filtered by a resource controller with the systemd-cgls < resource_controller > command. The example output lists the services that interact with the selected controller. Display detailed information about a certain unit and its part of the cgroups hierarchy with the systemctl status < system_unit > command. Additional resources systemd.resource-control(5) and cgroups(7) man pages on your system 25.10. Viewing resource controllers Identify the processes that use resource controllers. Procedure To view which resource controllers a process interacts with, enter the cat /proc/< PID >/cgroup command. The example output is of the process PID 11269 , which belongs to the example.service unit. You can verify the process was placed in the correct control group as defined by the systemd unit file specifications. Note By default, the items and their ordering in the list of resource controllers are the same for all units started by systemd , since it automatically mounts all the default resource controllers. Additional resources The cgroups(7) manual page Documentation in the /usr/share/doc/kernel-doc-<kernel_version>/Documentation/cgroups-v1/ directory 25.11. Monitoring resource consumption View a list of currently running control groups ( cgroups ) and their resource consumption in real time. Procedure Display a dynamic account of currently running cgroups with the systemd-cgtop command. The example output displays currently running cgroups ordered by their resource usage (CPU, memory, disk I/O load). The list refreshes every 1 second by default. Therefore, it offers a dynamic insight into the actual resource usage of each control group. Additional resources The systemd-cgtop(1) manual page
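As a quick way to tie these procedures together, the following is a minimal sketch that is not part of the original chapter; the toptest unit name, the top -b workload, and the 1G limit are arbitrary example values:
# Start a transient service, cap its memory at runtime, inspect it, and clean up.
systemd-run --unit=toptest top -b                                # transient service in the default system.slice
systemctl set-property --runtime toptest.service MemoryMax=1G   # temporary limit, removed on reboot
systemctl status toptest.service                                 # shows the CGroup path and current memory use
systemd-cgtop                                                    # live view of per-cgroup resource consumption
systemctl stop toptest.service                                   # stopping the unit releases the transient cgroup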
[ "< name >.service", "< name >.scope", "< parent-name >.slice", "Control group /: -.slice β”œβ”€user.slice β”‚ β”œβ”€user-42.slice β”‚ β”‚ β”œβ”€session-c1.scope β”‚ β”‚ β”‚ β”œβ”€ 967 gdm-session-worker [pam/gdm-launch-environment] β”‚ β”‚ β”‚ β”œβ”€1035 /usr/libexec/gdm-x-session gnome-session --autostart /usr/share/gdm/greeter/autostart β”‚ β”‚ β”‚ β”œβ”€1054 /usr/libexec/Xorg vt1 -displayfd 3 -auth /run/user/42/gdm/Xauthority -background none -noreset -keeptty -verbose 3 β”‚ β”‚ β”‚ β”œβ”€1212 /usr/libexec/gnome-session-binary --autostart /usr/share/gdm/greeter/autostart β”‚ β”‚ β”‚ β”œβ”€1369 /usr/bin/gnome-shell β”‚ β”‚ β”‚ β”œβ”€1732 ibus-daemon --xim --panel disable β”‚ β”‚ β”‚ β”œβ”€1752 /usr/libexec/ibus-dconf β”‚ β”‚ β”‚ β”œβ”€1762 /usr/libexec/ibus-x11 --kill-daemon β”‚ β”‚ β”‚ β”œβ”€1912 /usr/libexec/gsd-xsettings β”‚ β”‚ β”‚ β”œβ”€1917 /usr/libexec/gsd-a11y-settings β”‚ β”‚ β”‚ β”œβ”€1920 /usr/libexec/gsd-clipboard ... β”œβ”€init.scope β”‚ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 └─system.slice β”œβ”€rngd.service β”‚ └─800 /sbin/rngd -f β”œβ”€systemd-udevd.service β”‚ └─659 /usr/lib/systemd/systemd-udevd β”œβ”€chronyd.service β”‚ └─823 /usr/sbin/chronyd β”œβ”€auditd.service β”‚ β”œβ”€761 /sbin/auditd β”‚ └─763 /usr/sbin/sedispatch β”œβ”€accounts-daemon.service β”‚ └─876 /usr/libexec/accounts-daemon β”œβ”€example.service β”‚ β”œβ”€ 929 /bin/bash /home/jdoe/example.sh β”‚ └─4902 sleep 1 ...", "systemd-run --unit= <name> --slice= <name> .slice <command>", "Running as unit <name> .service", "systemd-run --unit= <name> --slice= <name> .slice --remain-after-exit <command>", "systemctl enable < name >.service", "systemctl set-property example.service MemoryMax=1500K", "systemctl set-property --runtime example.service MemoryMax=1500K", "... [Service] MemoryMax=1500K ...", "systemctl daemon-reload", "systemctl restart example.service", "cat /sys/fs/cgroup/memory/system.slice/example.service/memory.limit_in_bytes 1536000", "systemctl stop < name >.service", "systemctl kill < name >.service --kill-who= PID,... --signal=< signal >", "systemctl stop < name >.service", "systemctl disable < name >.service", "rm /usr/lib/systemd/system/< name >.service", "systemctl daemon-reload", "systemctl UNIT LOAD ACTIVE SUB DESCRIPTION ... init.scope loaded active running System and Service Manager session-2.scope loaded active running Session 2 of user jdoe abrt-ccpp.service loaded active exited Install ABRT coredump hook abrt-oops.service loaded active running ABRT kernel log watcher abrt-vmcore.service loaded active exited Harvest vmcores for ABRT abrt-xorg.service loaded active running ABRT Xorg log watcher ... 
-.slice loaded active active Root Slice machine.slice loaded active active Virtual Machine and Container Slice system-getty.slice loaded active active system-getty.slice system-lvm2\\x2dpvscan.slice loaded active active system-lvm2\\x2dpvscan.slice system-sshd\\x2dkeygen.slice loaded active active system-sshd\\x2dkeygen.slice system-systemd\\x2dhibernate\\x2dresume.slice loaded active active system-systemd\\x2dhibernate\\x2dresume> system-user\\x2druntime\\x2ddir.slice loaded active active system-user\\x2druntime\\x2ddir.slice system.slice loaded active active System Slice user-1000.slice loaded active active User Slice of UID 1000 user-42.slice loaded active active User Slice of UID 42 user.slice loaded active active User and Session Slice ...", "systemctl --all", "systemctl --type service,masked", "systemd-cgls Control group /: -.slice β”œβ”€user.slice β”‚ β”œβ”€user-42.slice β”‚ β”‚ β”œβ”€session-c1.scope β”‚ β”‚ β”‚ β”œβ”€ 965 gdm-session-worker [pam/gdm-launch-environment] β”‚ β”‚ β”‚ β”œβ”€1040 /usr/libexec/gdm-x-session gnome-session --autostart /usr/share/gdm/greeter/autostart ... β”œβ”€init.scope β”‚ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 └─system.slice ... β”œβ”€example.service β”‚ β”œβ”€6882 /bin/bash /home/jdoe/example.sh β”‚ └─6902 sleep 1 β”œβ”€systemd-journald.service └─629 /usr/lib/systemd/systemd-journald ...", "systemd-cgls memory Controller memory; Control group /: β”œβ”€1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 β”œβ”€user.slice β”‚ β”œβ”€user-42.slice β”‚ β”‚ β”œβ”€session-c1.scope β”‚ β”‚ β”‚ β”œβ”€ 965 gdm-session-worker [pam/gdm-launch-environment] ... └─system.slice | ... β”œβ”€chronyd.service β”‚ └─844 /usr/sbin/chronyd β”œβ”€example.service β”‚ β”œβ”€8914 /bin/bash /home/jdoe/example.sh β”‚ └─8916 sleep 1 ...", "systemctl status example.service ● example.service - My example service Loaded: loaded (/usr/lib/systemd/system/example.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2019-04-16 12:12:39 CEST; 3s ago Main PID: 17737 (bash) Tasks: 2 (limit: 11522) Memory: 496.0K (limit: 1.5M) CGroup: /system.slice/example.service β”œβ”€17737 /bin/bash /home/jdoe/example.sh └─17743 sleep 1 Apr 16 12:12:39 redhat systemd[1]: Started My example service. Apr 16 12:12:39 redhat bash[17737]: The current time is Tue Apr 16 12:12:39 CEST 2019 Apr 16 12:12:40 redhat bash[17737]: The current time is Tue Apr 16 12:12:40 CEST 2019", "cat /proc/11269/cgroup 12:freezer:/ 11:cpuset:/ 10:devices:/system.slice 9:memory:/system.slice/example.service 8:pids:/system.slice/example.service 7:hugetlb:/ 6:rdma:/ 5:perf_event:/ 4:cpu,cpuacct:/ 3:net_cls,net_prio:/ 2:blkio:/ 1:name=systemd:/system.slice/example.service", "systemd-cgtop Control Group Tasks %CPU Memory Input/s Output/s / 607 29.8 1.5G - - /system.slice 125 - 428.7M - - /system.slice/ModemManager.service 3 - 8.6M - - /system.slice/NetworkManager.service 3 - 12.8M - - /system.slice/accounts-daemon.service 3 - 1.8M - - /system.slice/boot.mount - - 48.0K - - /system.slice/chronyd.service 1 - 2.0M - - /system.slice/cockpit.socket - - 1.3M - - /system.slice/colord.service 3 - 3.5M - - /system.slice/crond.service 1 - 1.8M - - /system.slice/cups.service 1 - 3.1M - - /system.slice/dev-hugepages.mount - - 244.0K - - /system.slice/dev-mapper-rhel\\x2dswap.swap - - 912.0K - - /system.slice/dev-mqueue.mount - - 48.0K - - /system.slice/example.service 2 - 2.0M - - /system.slice/firewalld.service 2 - 28.8M - -" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/using-control-groups-version-1-with-systemd_managing-monitoring-and-updating-the-kernel
Developing C and C++ applications in RHEL 8
Developing C and C++ applications in RHEL 8 Red Hat Enterprise Linux 8 Setting up a developer workstation, and developing and debugging C and C++ applications in Red Hat Enterprise Linux 8 Red Hat Customer Content Services
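As a brief illustration of the workflow the document describes, the following is a minimal sketch that is not taken from the guide itself; the package selection and the hello.c source file are assumptions made for the example:
# Install a compiler and debugger, then build a program with debugging information.
yum install gcc gdb              # assumed package names on RHEL 8
gcc -g -Wall -o hello hello.c    # -g embeds debug information for use with gdb
./hello                          # run the program
gdb ./hello                      # step through it in the debugger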
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/developing_c_and_cpp_applications_in_rhel_8/index
Chapter 1. Hardware Support
Chapter 1. Hardware Support biosdevname The biosdevname package has been upgraded to version 0.3.8, providing the --smbios and --nopirq command line parameters. These parameters allow users to specify a minimum SMBIOS version and to turn off use of the PCI IRQ Routing Table (PIRQ), respectively.
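The following is a minimal sketch of how these parameters might be used together; the eth0 interface name and the 2.6 SMBIOS version are example values, not taken from the release note:
# Ask biosdevname for the suggested name of an interface, requiring at least
# SMBIOS version 2.6 and ignoring the PCI IRQ Routing Table.
biosdevname -i eth0 --smbios 2.6 --nopirq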
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/hardware
Monitoring
Monitoring OpenShift Container Platform 4.10 Configuring and using the monitoring stack in OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc -n openshift-monitoring get configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: |", "oc apply -f cluster-monitoring-config.yaml", "oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |", "oc apply -f user-workload-monitoring-config.yaml", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: <configuration_for_the_component>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: 1 volumeClaimTemplate: spec: storageClassName: fast volumeMode: Filesystem resources: requests: storage: 40Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: <configuration_for_the_component>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: 1 retention: 24h 2 resources: requests: cpu: 200m 3 memory: 2Gi 4", "oc label nodes <node-name> <node-label>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...>", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 nodeSelector: <node-label-1> 2 <node-label-2> 3 <...>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: tolerations: <toleration_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\"", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: 
dedicatedServiceMonitors: enabled: true 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: volumeClaimTemplate: spec: storageClassName: <storage_class> resources: requests: storage: <amount_of_storage>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: volumeClaimTemplate: spec: storageClassName: <storage_class> resources: requests: storage: <amount_of_storage>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler : volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 10Gi", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 100Gi", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | alertmanagerMain: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 40Gi", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 volumeClaimTemplate: spec: storageClassName: <storage_class> 2 resources: requests: storage: <amount_of_storage> 3", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 100Gi", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: volumeClaimTemplate: spec: storageClassName: local-storage resources: requests: storage: 20Gi", "for p in USD(oc -n openshift-monitoring get pvc -l app.kubernetes.io/name=prometheus -o jsonpath='{range .items[*]}{.metadata.name} 
{end}'); do oc -n openshift-monitoring patch pvc/USD{p} --patch '{\"spec\": {\"resources\": {\"requests\": {\"storage\":\"100Gi\"}}}}'; done", "oc delete statefulset -l app.kubernetes.io/name=prometheus --cascade=orphan", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: 24h", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: <time_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: retention: 24h", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: <time_specification> 1", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: retention: 10d", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials>", "basicAuth: username: <usernameSecret> password: <passwordSecret>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" basicAuth: username: name: remoteWriteAuth key: user password: name: remoteWriteAuth key: password", "tlsConfig: ca: <caSecret> cert: <certSecret> keySecret: <keySecret>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" tlsConfig: ca: secret: name: selfsigned-mtls-bundle key: ca.crt cert: secret: name: selfsigned-mtls-bundle key: client.crt keySecret: name: selfsigned-mtls-bundle key: client.key", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials> <write_relabel_configs>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: remoteWrite: - url: \"https://remote-write.endpoint\" writeRelabelConfigs: - sourceLabels: [__name__] regex: 'my_metric' action: keep", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: remoteWrite: - url: \"https://remote-write.endpoint\" <endpoint_authentication_credentials> <write_relabel_configs>", "oc -n openshift-user-workload-monitoring edit configmap 
user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: enforcedSampleLimit: 50000 1", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: labels: prometheus: k8s role: alert-rules name: monitoring-stack-alerts 1 namespace: ns1 2 spec: groups: - name: general.rules rules: - alert: TargetDown 3 annotations: message: '{{ printf \"%.4g\" USDvalue }}% of the {{ USDlabels.job }}/{{ USDlabels.service }} targets in {{ USDlabels.namespace }} namespace are down.' 4 expr: 100 * (count(up == 0) BY (job, namespace, service) / count(up) BY (job, namespace, service)) > 10 for: 10m 5 labels: severity: warning 6 - alert: ApproachingEnforcedSamplesLimit 7 annotations: message: '{{ USDlabels.container }} container of the {{ USDlabels.pod }} pod in the {{ USDlabels.namespace }} namespace consumes {{ USDvalue | humanizePercentage }} of the samples limit budget.' 8 expr: scrape_samples_scraped/50000 > 0.8 9 for: 10m 10 labels: severity: warning 11", "oc apply -f monitoring-stack-alerts.yaml", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: additionalAlertmanagerConfigs: - <alertmanager_specification>", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1", "apiVersion: v1 kind: ConfigMap metadata: name: 
user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: 1 logLevel: <log_level> 2", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: queryLogFile: <path> 1", "oc -n openshift-monitoring get pods", "oc -n openshift-monitoring exec prometheus-k8s-0 -- cat <path>", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: queryLogFile: <path> 1", "oc -n openshift-user-workload-monitoring get pods", "oc -n openshift-user-workload-monitoring exec prometheus-user-workload-0 -- cat <path>", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | thanosQuerier: enableRequestLogging: <value> 1 logLevel: <value> 2", "oc -n openshift-monitoring get pods", "token=`oc sa get-token prometheus-k8s -n openshift-monitoring` oc -n openshift-monitoring exec -c prometheus prometheus-k8s-0 -- curl -k -H \"Authorization: Bearer USDtoken\" 'https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=cluster_version'", "oc -n openshift-monitoring logs <thanos_querier_pod_name> -c thanos-query", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | k8sPrometheusAdapter: audit: profile: <audit_log_level> 1", "oc -n openshift-monitoring get pods", "oc -n openshift-monitoring get deploy prometheus-adapter -o yaml", "- --audit-policy-file=/etc/audit/request-profile.yaml - --audit-log-path=/var/log/adapter/audit.log", "oc -n openshift-monitoring exec deploy/prometheus-adapter -c prometheus-adapter -- cat /etc/audit/request-profile.yaml", "\"apiVersion\": \"audit.k8s.io/v1\" \"kind\": \"Policy\" \"metadata\": \"name\": \"Request\" \"omitStages\": - \"RequestReceived\" \"rules\": - \"level\": \"Request\"", "oc -n openshift-monitoring exec -c <prometheus_adapter_pod_name> -- cat /var/log/adapter/audit.log", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | grafana: enabled: false", "oc -n openshift-monitoring get pods", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: 
openshift-monitoring data: config.yaml: | alertmanagerMain: enabled: false", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1", "oc -n openshift-user-workload-monitoring get pod", "NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h", "oc policy add-role-to-user <role> <user> -n <namespace> 1", "oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring", "SECRET=`oc get secret -n openshift-user-workload-monitoring | grep prometheus-user-workload-token | head -n 1 | awk '{print USD1 }'`", "TOKEN=`echo USD(oc get secret USDSECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token') | base64 -d`", "THANOS_QUERIER_HOST=`oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host'`", "NAMESPACE=ns1", "curl -X GET -kG \"https://USDTHANOS_QUERIER_HOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\" -H \"Authorization: Bearer USDTOKEN\"", "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[{\"metric\":{\"__name__\":\"up\",\"endpoint\":\"web\",\"instance\":\"10.129.0.46:8080\",\"job\":\"prometheus-example-app\",\"namespace\":\"ns1\",\"pod\":\"prometheus-example-app-68d47c4fb6-jztp2\",\"service\":\"prometheus-example-app\"},\"value\":[1591881154.748,\"1\"]}]}}", "oc label namespace my-project 'openshift.io/user-monitoring=false'", "oc label namespace my-project 'openshift.io/user-monitoring-'", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false", "oc -n openshift-user-workload-monitoring get pod", "No resources found in openshift-user-workload-monitoring project.", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true alertmanagerMain: enableUserAlertmanagerConfig: true 1", "oc -n <namespace> adm policy add-role-to-user alert-routing-edit <user> 1", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true alertmanagerMain: enableUserAlertmanagerConfig: false 1", "curl http://<example_app_endpoint>/metrics", "HELP http_requests_total Count of all HTTP requests TYPE http_requests_total counter http_requests_total{code=\"200\",method=\"get\"} 4 http_requests_total{code=\"404\",method=\"get\"} 2 HELP version Version information about this binary TYPE version gauge version{version=\"v0.1.0\"} 1", "apiVersion: v1 kind: Namespace metadata: name: ns1 --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: replicas: 1 selector: matchLabels: app: prometheus-example-app template: metadata: labels: app: prometheus-example-app spec: containers: - image: 
ghcr.io/rhobs/prometheus-example-app:0.4.1 imagePullPolicy: IfNotPresent name: prometheus-example-app --- apiVersion: v1 kind: Service metadata: labels: app: prometheus-example-app name: prometheus-example-app namespace: ns1 spec: ports: - port: 8080 protocol: TCP targetPort: 8080 name: web selector: app: prometheus-example-app type: ClusterIP", "oc apply -f prometheus-example-app.yaml", "oc -n ns1 get pod", "NAME READY STATUS RESTARTS AGE prometheus-example-app-7857545cb7-sbgwq 1/1 Running 0 81m", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: k8s-app: prometheus-example-monitor name: prometheus-example-monitor namespace: ns1 spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc apply -f example-app-service-monitor.yaml", "oc -n ns1 get servicemonitor", "NAME AGE prometheus-example-monitor 81m", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 spec: groups: - name: example rules: - alert: VersionAlert expr: version{job=\"prometheus-example-app\"} == 0", "oc apply -f example-app-alerting-rule.yaml", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: example-alert namespace: ns1 labels: openshift.io/prometheus-rule-evaluation-scope: leaf-prometheus spec: groups: - name: example rules: - alert: VersionAlert expr: version{job=\"prometheus-example-app\"} == 0", "oc apply -f example-app-alerting-rule.yaml", "oc -n <project> get prometheusrule", "oc -n <project> get prometheusrule <rule> -o yaml", "oc -n <namespace> delete prometheusrule <foo>", "apiVersion: monitoring.coreos.com/v1alpha1 kind: AlertmanagerConfig metadata: name: example-routing namespace: ns1 spec: route: receiver: default groupBy: [job] receivers: - name: default webhookConfigs: - url: https://example.org/post", "oc apply -f example-app-alert-routing.yaml", "oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data \"alertmanager.yaml\" }}' | base64 --decode > alertmanager.yaml", "global: resolve_timeout: 5m route: group_wait: 30s 1 group_interval: 5m 2 repeat_interval: 12h 3 receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=<your_service>\" 4 routes: - matchers: - <your_matching_rules> 5 receiver: <receiver> 6 receivers: - name: default - name: watchdog - name: <receiver> <receiver_configuration>", "global: resolve_timeout: 5m route: group_wait: 30s group_interval: 5m repeat_interval: 12h receiver: default routes: - matchers: - \"alertname=Watchdog\" repeat_interval: 2m receiver: watchdog - matchers: - \"service=example-app\" routes: - matchers: - \"severity=critical\" receiver: team-frontend-page* receivers: - name: default - name: watchdog - name: team-frontend-page pagerduty_configs: - service_key: \"_your-key_\"", "oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-", "apiVersion: v1 kind: Namespace metadata: name: openshift-bare-metal-events labels: name: openshift-bare-metal-events openshift.io/cluster-monitoring: \"true\"", "oc create -f bare-metal-events-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: bare-metal-event-relay-group namespace: openshift-bare-metal-events spec: targetNamespaces: - openshift-bare-metal-events", "oc create -f bare-metal-events-operatorgroup.yaml", "apiVersion: 
operators.coreos.com/v1alpha1 kind: Subscription metadata: name: bare-metal-event-relay-subscription namespace: openshift-bare-metal-events spec: channel: \"stable\" name: bare-metal-event-relay source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f bare-metal-events-sub.yaml", "oc get csv -n openshift-bare-metal-events -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase bare-metal-event-relay.4.10.0-202206301927 Succeeded", "oc get pods -n amq-interconnect", "NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h", "oc get pods -n openshift-bare-metal-events", "NAME READY STATUS RESTARTS AGE hw-event-proxy-operator-controller-manager-74d5649b7c-dzgtl 2/2 Running 0 25s", "curl https://<bmc_ip_address>/redfish/v1/EventService --insecure -H 'Content-Type: application/json' -u \"<bmc_username>:<password>\"", "{ \"@odata.context\": \"/redfish/v1/USDmetadata#EventService.EventService\", \"@odata.id\": \"/redfish/v1/EventService\", \"@odata.type\": \"#EventService.v1_0_2.EventService\", \"Actions\": { \"#EventService.SubmitTestEvent\": { \"[email protected]\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"target\": \"/redfish/v1/EventService/Actions/EventService.SubmitTestEvent\" } }, \"DeliveryRetryAttempts\": 3, \"DeliveryRetryIntervalSeconds\": 30, \"Description\": \"Event Service represents the properties for the service\", \"EventTypesForSubscription\": [\"StatusChange\", \"ResourceUpdated\", \"ResourceAdded\", \"ResourceRemoved\", \"Alert\"], \"[email protected]\": 5, \"Id\": \"EventService\", \"Name\": \"Event Service\", \"ServiceEnabled\": true, \"Status\": { \"Health\": \"OK\", \"HealthRollup\": \"OK\", \"State\": \"Enabled\" }, \"Subscriptions\": { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\" } }", "oc get route -n openshift-bare-metal-events", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hw-event-proxy hw-event-proxy-openshift-bare-metal-events.apps.compute-1.example.com hw-event-proxy-service 9087 edge None", "apiVersion: metal3.io/v1alpha1 kind: BMCEventSubscription metadata: name: sub-01 namespace: openshift-machine-api spec: hostName: <hostname> 1 destination: <proxy_service_url> 2 context: ''", "oc create -f bmc_sub.yaml", "oc delete -f bmc_sub.yaml", "curl -i -k -X POST -H \"Content-Type: application/json\" -d '{\"Destination\": \"https://<proxy_service_url>\", \"Protocol\" : \"Redfish\", \"EventTypes\": [\"Alert\"], \"Context\": \"root\"}' -u <bmc_username>:<password> 'https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions' -v", "HTTP/1.1 201 Created Server: AMI MegaRAC Redfish Service Location: /redfish/v1/EventService/Subscriptions/1 Allow: GET, POST Access-Control-Allow-Origin: * Access-Control-Expose-Headers: X-Auth-Token Access-Control-Allow-Headers: X-Auth-Token Access-Control-Allow-Credentials: true Cache-Control: no-cache, must-revalidate Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json>; rel=describedby Link: <http://redfish.dmtf.org/schemas/v1/EventDestination.v1_6_0.json> Link: </redfish/v1/EventService/Subscriptions>; path= ETag: \"1651135676\" Content-Type: application/json; charset=UTF-8 OData-Version: 4.0 Content-Length: 614 Date: Thu, 28 Apr 2022 08:47:57 GMT", "curl --globoff -H \"Content-Type: application/json\" -k -X GET --user <bmc_username>:<password> https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions", "% Total % 
Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 435 100 435 0 0 399 0 0:00:01 0:00:01 --:--:-- 399 { \"@odata.context\": \"/redfish/v1/USDmetadata#EventDestinationCollection.EventDestinationCollection\", \"@odata.etag\": \"\" 1651137375 \"\", \"@odata.id\": \"/redfish/v1/EventService/Subscriptions\", \"@odata.type\": \"#EventDestinationCollection.EventDestinationCollection\", \"Description\": \"Collection for Event Subscriptions\", \"Members\": [ { \"@odata.id\": \"/redfish/v1/EventService/Subscriptions/1\" }], \"[email protected]\": 1, \"Name\": \"Event Subscriptions Collection\" }", "curl --globoff -L -w \"%{http_code} %{url_effective}\\n\" -k -u <bmc_username>:<password >-H \"Content-Type: application/json\" -d '{}' -X DELETE https://<bmc_ip_address>/redfish/v1/EventService/Subscriptions/1", "apiVersion: \"event.redhat-cne.org/v1alpha1\" kind: \"HardwareEvent\" metadata: name: \"hardware-event\" spec: nodeSelector: node-role.kubernetes.io/hw-event: \"\" 1 transportHost: \"amqp://amq-router-service-name.amq-namespace.svc.cluster.local\" 2 logLevel: \"debug\" 3 msgParserTimeout: \"10\" 4", "oc create -f hardware-event.yaml", "apiVersion: v1 kind: Secret metadata: name: redfish-basic-auth type: Opaque stringData: 1 username: <bmc_username> password: <bmc_password> # BMC host DNS or IP address hostaddr: <bmc_host_ip_address>", "oc create -f hw-event-bmc-secret.yaml", "[ { \"id\": \"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\": \"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" } ]", "{ \"uriLocation\": \"http://localhost:8089/api/ocloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }", "{ \"id\":\"ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"endpointUri\":\"http://localhost:9089/api/ocloudNotifications/v1/dummy\", \"uriLocation\":\"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/ca11ab76-86f9-428c-8d3a-666c24e34d32\", \"resource\":\"/cluster/node/openshift-worker-0.openshift.example.com/redfish/event\" }", "{\"status\":\"ping sent\"}", "OK", "host=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath={.spec.host}) token=USD(oc whoami -t) curl -H \"Authorization: Bearer USDtoken\" -k \"https://USDhost/api/v2/receivers\"", "token=`oc whoami -t`", "curl -G -s -k -H \"Authorization: Bearer USDtoken\" 'https:/<federation_host>/federate' \\ 1 --data-urlencode 'match[]=up'", "TYPE up untyped up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.143.148:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035322214 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.148.166:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035338597 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.173.16:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035343834", "oc -n ns1 get service prometheus-example-app -o yaml", "labels: app: prometheus-example-app", "oc -n ns1 get servicemonitor 
prometheus-example-monitor -o yaml", "spec: endpoints: - interval: 30s port: web scheme: http selector: matchLabels: app: prometheus-example-app", "oc -n openshift-user-workload-monitoring get pods", "NAME READY STATUS RESTARTS AGE prometheus-operator-776fcbbd56-2nbfm 2/2 Running 0 132m prometheus-user-workload-0 5/5 Running 1 132m prometheus-user-workload-1 5/5 Running 1 132m thanos-ruler-user-workload-0 3/3 Running 0 132m thanos-ruler-user-workload-1 3/3 Running 0 132m", "oc -n openshift-user-workload-monitoring logs prometheus-operator-776fcbbd56-2nbfm -c prometheus-operator", "level=warn ts=2020-08-10T11:48:20.906739623Z caller=operator.go:1829 component=prometheusoperator msg=\"skipping servicemonitor\" error=\"it accesses file system via bearer token file which Prometheus specification prohibits\" servicemonitor=eagle/eagle namespace=openshift-user-workload-monitoring prometheus=user-workload", "oc port-forward -n openshift-user-workload-monitoring pod/prometheus-user-workload-0 9090", "oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheusOperator: logLevel: debug", "oc -n openshift-user-workload-monitoring get deploy prometheus-operator -o yaml | grep \"log-level\"", "- --log-level=debug", "oc -n openshift-user-workload-monitoring get pods", "topk(10,count by (job)({__name__=~\".+\"}))" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/monitoring/index
Configuring and using database servers
Configuring and using database servers Red Hat Enterprise Linux 9 Installing, configuring, backing up and migrating data on database servers Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_database_servers/index
Chapter 2. Installing the Ansible plug-ins with a Helm chart on OpenShift Container Platform
Chapter 2. Installing the Ansible plug-ins with a Helm chart on OpenShift Container Platform The following procedures describe how to install Ansible plug-ins in Red Hat Developer Hub instances on Red Hat OpenShift Container Platform using a Helm chart. The workflow is as follows: Download the Ansible plug-ins files. Create a plug-in registry in your OpenShift cluster to host the Ansible plug-ins. Add the plug-ins to the Helm chart. Create a custom ConfigMap. Add your custom ConfigMap to your Helm chart. Edit your custom ConfigMap and Helm chart according to the required and optional configuration procedures. Note You can save changes to your Helm and ConfigMap after each update to your configuration. You do not have to make all the changes to these files in a single session. 2.1. Prerequisites Red Hat Developer Hub installed on Red Hat OpenShift Container Platform. For Helm installation, follow the steps in the Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart section of Installing Red Hat Developer Hub on OpenShift Container Platform . For Operator installation, follow the steps in the Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator section of Installing Red Hat Developer Hub on OpenShift Container Platform . A valid subscription to Red Hat Ansible Automation Platform. An OpenShift Container Platform instance with the appropriate permissions within your project to create an application. The Red Hat Developer Hub instance can query the automation controller API. Optional: To use the integrated learning paths, you must have outbound access to developers.redhat.com. 2.2. Recommended RHDH preconfiguration Red Hat recommends performing the following initial configuration tasks in RHDH. However, you can install the Ansible plug-ins for Red Hat Developer Hub before completing these tasks. Setting up authentication in RHDH Installing and configuring RBAC in RHDH Note Red Hat provides a repository of software templates for RHDH that uses the publish:github action. To use these software templates, you must install the required GitHub dynamic plugins. 2.3. Downloading the Ansible plug-ins files Download the latest .tar file for the plug-ins from the Red Hat Ansible Automation Platform Product Software downloads page . The format of the filename is ansible-backstage-rhaap-bundle-x.y.z.tar.gz . Substitute the Ansible plug-ins release version, for example 1.0.0 , for x.y.z . Create a directory on your local machine to store the .tar files. USD mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme> Set an environment variable ( USDDYNAMIC_PLUGIN_ROOT_DIR ) to represent the directory path. USD export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme> Extract the ansible-backstage-rhaap-bundle-<version-number>.tar.gz contents to USDDYNAMIC_PLUGIN_ROOT_DIR . USD tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C USDDYNAMIC_PLUGIN_ROOT_DIR Substitute the Ansible plug-ins release version, for example 1.0.0 , for x.y.z . 
Verification Run ls to verify that the extracted files are in the USDDYNAMIC_PLUGIN_ROOT_DIR directory: USD ls USDDYNAMIC_PLUGIN_ROOT_DIR ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity The files with the .integrity file type contain the plugin SHA value. The SHA value is used during the plug-in configuration. 2.4. Creating a registry for the Ansible plug-ins Set up a registry in your OpenShift cluster to host the Ansible plug-ins and make them available for installation in Red Hat Developer Hub (RHDH). Procedure Log in to your OpenShift Container Platform instance with credentials to create a new application. Open your Red Hat Developer Hub OpenShift project. USD oc project <YOUR_DEVELOPER_HUB_PROJECT> Run the following commands to create a plug-in registry build in the OpenShift cluster. USD oc new-build httpd --name=plugin-registry --binary USD oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait USD oc new-app --image-stream=plugin-registry Verification To verify that the plugin-registry was deployed successfully, open the Topology view in the Developer perspective on the Red Hat Developer Hub application in the OpenShift Web console. Click the plug-in registry to view the log. (1) Developer hub instance (2) Plug-in registry Click the terminal tab and login to the container. In the terminal, run ls to confirm that the .tar files are in the plugin registry. ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz The version numbers and file names can differ. 2.5. Required configuration 2.5.1. Adding the Ansible plug-ins configuration In the OpenShift Developer UI, navigate to Helm developer-hub Actions Upgrade Yaml view . Update the Helm chart configuration to add the dynamic plug-ins in the Red Hat Developer Hub instance. Under the plugins section in the YAML file, add the dynamic plug-ins that you want to enable. global: ... plugins: - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-backstage-rhaap plugin> package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-scaffolder-backend-module-backstage-rhaap plugin> package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-backstage-rhaap-backend plugin> package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null In the package sections, replace x.y.z in the plug-in filenames with the correct version numbers for the Ansible plug-ins. For each Ansible plug-in, update the integrity values using the corresponding .integrity file content. Click Upgrade . 
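If you prefer to collect the integrity values from a terminal before clicking Upgrade, the following is a minimal sketch of one way to print them. It assumes the DYNAMIC_PLUGIN_ROOT_DIR variable from the download procedure is still exported and that each .integrity file contains the ready-to-paste SHA-512 string, as described in the download section; the loop itself is illustrative and not part of the product tooling.

# Print the SHA-512 integrity value for each extracted plug-in archive.
# Assumes DYNAMIC_PLUGIN_ROOT_DIR is still exported from the download step.
for plugin in "$DYNAMIC_PLUGIN_ROOT_DIR"/*.tgz; do
  echo "== ${plugin##*/} =="
  # Each .tgz ships with a matching <name>.tgz.integrity file.
  cat "${plugin}.integrity"
done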
The developer hub pods restart and the plug-ins are installed. Verification To verify that the plug-ins have been installed, open the install-dynamic-plugin container logs and check that the Ansible plug-ins are visible in Red Hat Developer Hub: Open the Developer perspective for the Red Hat Developer Hub application in the OpenShift Web console. Select the Topology view. Select the Red Hat Developer Hub deployment pod to open an information pane. Select the Resources tab of the information pane. In the Pods section, click View logs to open the Pod details page. In the Pod details page, select the Logs tab. Select install-dynamic-plugins from the drop-down list of containers to view the container log. In the install-dynamic-plugin container logs, search for the Ansible plug-ins. The following example from the log indicates a successful installation for one of the plug-ins: => Successfully installed dynamic plugin http://plugin-registry-1:8080/ansible-plugin-backstage-rhaap-1.1.0.tgz The following image shows the container log in the Pod details page. The version numbers and file names can differ. 2.5.2. Adding the Ansible Development Tools sidecar container After the plug-ins are loaded, add the Ansible Development Container ( ansible-devtools-server ) in the Red Hat Developer Hub pod as a sidecar container. 2.5.2.1. Adding a pull secret to the Red Hat Developer Hub Helm configuration Prerequisite The Ansible Development Container download requires a Red Hat Customer Portal account and Red Hat Service Registry account. Procedure Create a new Red Hat Registry Service account , if required. Click the token name under the Account name column. Select the OpenShift Secret tab and follow the instructions to add the pull secret to your Red Hat Developer Hub OpenShift project. Add the new secret to the Red Hat Developer Hub Helm configuration, replacing <your-redhat-registry-pull-secret> with the name of the secret you generated on the Red Hat Registry Service Account website: upstream: backstage: ... image: ... pullSecrets: - <your-redhat-registry-pull-secret> ... For more information, refer to the Red Hat Container Registry documentation . 2.5.2.2. Adding the Ansible Developer Tools container You must update the Helm chart configuration to add an extra container. Procedure Log in to the OpenShift UI. Navigate to Helm developer-hub Actions upgrade Yaml view to open the Helm chart. Update the extraContainers section in the YAML file. Add the following code: upstream: backstage: ... extraContainers: - command: - adt - server image: >- registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest imagePullPolicy: IfNotPresent name: ansible-devtools-server ports: - containerPort: 8000 ... Note The image pull policy is imagePullPolicy: IfNotPresent . The image is pulled only if it does not already exist on the node. Update it to imagePullPolicy: Always if you always want to use the latest image. Click Upgrade . Verification To verify that the container is running, check the container log: 2.5.3. Adding a custom ConfigMap Create a Red Hat Developer Hub ConfigMap following the procedure in Adding a custom application configuration file to Red Hat OpenShift Container Platform in the Administration guide for Red Hat Developer Hub . The examples below use a custom ConfigMap named app-config-rhdh To edit your custom ConfigMap, log in to the OpenShift UI and navigate to Select Project ( developerHubProj ) ConfigMaps {developer-hub}-app-config EditConfigMaps app-config-rhdh . 2.5.4. 
Configuring the Ansible Dev Tools Server The creatorService URL is required for the Ansible plug-ins to provision new projects using the provided software templates. Procedure Edit your custom Red Hat Developer Hub config map, app-config-rhdh , that you created in Adding a custom ConfigMap . Add the following code to your Red Hat Developer Hub app-config-rhdh.yaml file. kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh ... data: app-config-rhdh.yaml: |- ansible: creatorService: baseUrl: 127.0.0.1 port: '8000' ... 2.5.5. Configuring Ansible Automation Platform details The Ansible plug-ins query your Ansible Automation Platform subscription status with the controller API using a token. Note The Ansible plug-ins continue to function regardless of the Ansible Automation Platform subscription status. Procedure Create a Personal Access Token (PAT) with "Read" scope in automation controller, following the Adding tokens section of the Automation controller user guide . Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh . Add your Ansible Automation Platform details to app-config-rhdh.yaml . Set the baseURL key with your automation controller URL. Set the token key with the generated token value that you created in Step 1. Set the checkSSL key to true or false . If checkSSL is set to true , the Ansible plug-ins verify whether the SSL certificate is valid. data: app-config-rhdh.yaml: | ... ansible: ... rhaap: baseUrl: '<https://MyControllerUrl>' token: '<AAP Personal Access Token>' checkSSL: true Note You are responsible for protecting your Red Hat Developer Hub installation from external and unauthorized access. Manage the backend authentication key like any other secret. Meet strong password requirements, do not expose it in any configuration files, and only inject it into configuration files as an environment variable. 2.5.6. Adding Ansible plug-ins software templates Red Hat Ansible provides software templates for Red Hat Developer Hub to provision new playbooks and collection projects based on Ansible best practices. Procedure Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh . Add the following code to your Red Hat Developer Hub app-config-rhdh.yaml file. data: app-config-rhdh.yaml: | catalog: ... locations: ... - type: url target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml rules: - allow: [Template] For more information, refer to the Managing templates section of the Administration guide for Red Hat Developer Hub . 2.5.7. Configuring Role Based Access Control Red Hat Developer Hub offers Role-based Access Control (RBAC) functionality. RBAC can then be applied to the Ansible plug-ins content. Assign the following roles: Members of the admin:superUsers group can select templates in the Create tab of the Ansible plug-ins to create playbook and collection projects. Members of the admin:users group can view templates in the Create tab of the Ansible plug-ins. The following example adds RBAC to Red Hat Developer Hub. data: app-config-rhdh.yaml: | plugins: ... permission: enabled: true rbac: admin: users: - name: user:default/<user-scm-ida> superUsers: - name: user:default/<user-admin-idb> For more information about permission policies and managing RBAC, refer to the Authorization guide for Red Hat Developer Hub. 2.6. Optional configuration for Ansible plug-ins 2.6.1. 
Enabling Red Hat Developer Hub authentication Red Hat Developer Hub (RHDH) provides integrations for multiple Source Control Management (SCM) systems. This is required by the plug-ins to create repositories. Refer to the Enabling authentication in Red Hat Developer Hub chapter of the Administration guide for Red Hat Developer Hub . 2.6.2. Configuring Ansible plug-ins optional integrations The Ansible plug-ins provide integrations with Ansible Automation Platform and other optional Red Hat products. To edit your custom ConfigMap, log in to the OpenShift UI and navigate to Select Project ( developerHubProj ) ConfigMaps {developer-hub}-app-config-rhdh app-config-rhdh . 2.6.2.1. Configuring OpenShift Dev Spaces When OpenShift Dev Spaces is configured for the Ansible plug-ins, users can click a link from the catalog item view in Red Hat Developer Hub and edit their provisioned Ansible Git projects using Dev Spaces. Note OpenShift Dev Spaces is a separate, optional Red Hat product and is not included in the Ansible Automation Platform or Red Hat Developer Hub subscription. The plug-ins will function without it. If the OpenShift Dev Spaces link is not configured in the Ansible plug-ins, the Go to OpenShift Dev Spaces dashboard link in the DEVELOP section of the Ansible plug-ins landing page redirects users to the Ansible development tools home page . Prerequisites A Dev Spaces installation. Refer to the Installing Dev Spaces section of the Red Hat OpenShift Dev Spaces Administration guide . Procedure Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh . Add the following code to your Red Hat Developer Hub app-config-rhdh.yaml file. data: app-config-rhdh.yaml: |- ansible: devSpaces: baseUrl: >- https://<Your OpenShift Dev Spaces URL> Replace <Your OpenShift Dev Spaces URL> with your OpenShift Dev Spaces URL. In the OpenShift Developer UI, select the Red Hat Developer Hub pod. Open Actions . Click Restart rollout . 2.6.2.2. Configuring the private automation hub URL Private automation hub provides a centralized, on-premise repository for certified Ansible collections, execution environments, and any additional, vetted content provided by your organization. If the private automation hub URL is not configured in the Ansible plug-ins, users are redirected to the Red Hat Hybrid Cloud Console automation hub . Note The private automation hub configuration is optional but recommended. The Ansible plug-ins will function without it. Prerequisites A private automation hub instance. For more information on installing private automation hub, refer to the Installation and Upgrade guides in the Ansible Automation Platform documentation. Procedure Edit your custom Red Hat Developer Hub config map, for example app-config-rhdh . Add the following code to your Red Hat Developer Hub app-config-rhdh.yaml file. data: app-config-rhdh.yaml: |- ansible: ... automationHub: baseUrl: '<https://MyOwnPAHUrl>' ... Replace <https://MyOwnPAHUrl> with your private automation hub URL. In the OpenShift Developer UI, select the Red Hat Developer Hub pod. Open Actions . Click Restart rollout . 2.7. Full examples 2.7.1. Full app-config-rhdh ConfigMap example for Ansible plug-ins entries kind: ConfigMap ... metadata: name: app-config-rhdh ...
data: app-config-rhdh.yaml: |- ansible: creatorService: baseUrl: 127.0.0.1 port: '8000' rhaap: baseUrl: '<https://MyControllerUrl>' token: '<AAP Personal Access Token>' checkSSL: <true or false> # Optional integrations devSpaces: baseUrl: '<https://MyDevSpacesURL>' automationHub: baseUrl: '<https://MyPrivateAutomationHubURL>' ... catalog: locations: - type: url target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml rules: - allow: [Template] ... 2.7.2. Full Helm chart config example for Ansible plug-ins global: ... dynamic: ... plugins: - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-backstage-rhaap plugin> package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-scaffolder-backend-module-backstage-rhaap plugin> package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-backstage-rhaap-backend plugin> package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null ... upstream: backstage: ... extraAppConfig: - configMapRef: app-config-rhdh filename: app-config-rhdh.yaml extraContainers: - command: - adt - server image: >- registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest imagePullPolicy: IfNotPresent name: ansible-devtools-server ports: - containerPort: 8000 ...
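As a convenience, here is a minimal CLI sketch of one way to load a completed copy of the ConfigMap example above and restart Developer Hub so the change is picked up. The file name, project, and deployment name are placeholders for your environment, not values defined by this guide.

# Apply the custom ConfigMap (saved locally as app-config-rhdh.yaml,
# containing a completed version of the example above).
oc project <YOUR_DEVELOPER_HUB_PROJECT>
oc apply -f app-config-rhdh.yaml

# Roll the Developer Hub deployment so the new configuration is loaded.
# Check the actual deployment name first with "oc get deployment".
oc rollout restart deployment/<developer-hub-deployment>
oc rollout status deployment/<developer-hub-deployment>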
[ "mkdir /path/to/<ansible-backstage-plugins-local-dir-changeme>", "export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-changeme>", "tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C USDDYNAMIC_PLUGIN_ROOT_DIR", "ls USDDYNAMIC_PLUGIN_ROOT_DIR ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-x.y.z.tgz.integrity ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz.integrity ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz.integrity", "oc project <YOUR_DEVELOPER_HUB_PROJECT>", "oc new-build httpd --name=plugin-registry --binary oc start-build plugin-registry --from-dir=USDDYNAMIC_PLUGIN_ROOT_DIR --wait oc new-app --image-stream=plugin-registry", "ansible-plugin-backstage-rhaap-x.y.z.tgz ansible-plugin-backstage-rhaap-backend-x.y.z.tgz ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz", "global: plugins: - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-backstage-rhaap plugin> package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-scaffolder-backend-module-backstage-rhaap plugin> package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-backstage-rhaap-backend plugin> package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null", "=> Successfully installed dynamic plugin http://plugin-registry-1:8080/ansible-plugin-backstage-rhaap-1.1.0.tgz", "upstream: backstage: image: pullSecrets: - <your-redhat-registry-pull-secret>", "upstream: backstage: extraContainers: - command: - adt - server image: >- registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest imagePullPolicy: IfNotPresent name: ansible-devtools-server ports: - containerPort: 8000", "kind: ConfigMap apiVersion: v1 metadata: name: app-config-rhdh data: app-config-rhdh.yaml: |- ansible: creatorService: baseUrl: 127.0.0.1 port: '8000'", "data: app-config-rhdh.yaml: | ansible: rhaap: baseUrl: '<https://MyControllerUrl>' token: '<AAP Personal Access Token>' checkSSL: true", "data: app-config-rhdh.yaml: | catalog: locations: - type: url target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml rules: - allow: [Template]", "data: app-config-rhdh.yaml: | plugins: permission: enabled: true rbac: admin: users: - name: user:default/<user-scm-ida> superUsers: - name: user:default/<user-admin-idb>", "data: app-config-rhdh.yaml: |- ansible: devSpaces: baseUrl: >- https://<Your OpenShift Dev Spaces URL>", "data: app-config-rhdh.yaml: |- ansible: automationHub: baseUrl: '<https://MyOwnPAHUrl>'", "kind: ConfigMap metadata: name: app-config-rhdh data: app-config-rhdh.yaml: |- ansible: creatorService: baseUrl: 127.0.0.1 port: '8000' rhaap: baseUrl: '<https://MyControllerUrl>' token: '<AAP Personal Access Token>' checkSSL: <true or false> 
# Optional integrations devSpaces: baseUrl: '<https://MyDevSpacesURL>' automationHub: baseUrl: '<https://MyPrivateAutomationHubURL>' catalog: locations: - type: url target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml rules: - allow: [Template]", "global: dynamic: plugins: - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-backstage-rhaap plugin> package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-x.y.z.tgz' pluginConfig: dynamicPlugins: frontend: ansible.plugin-backstage-rhaap: appIcons: - importName: AnsibleLogo name: AnsibleLogo dynamicRoutes: - importName: AnsiblePage menuItem: icon: AnsibleLogo text: Ansible path: /ansible - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-scaffolder-backend-module-backstage-rhaap plugin> package: >- http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-scaffolder-backend-module-backstage-rhaap: null - disabled: false integrity: <SHA512 Integrity key for ansible-plugin-backstage-rhaap-backend plugin> package: >- http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-x.y.z.tgz pluginConfig: dynamicPlugins: backend: ansible.plugin-backstage-rhaap-backend: null upstream: backstage: extraAppConfig: - configMapRef: app-config-rhdh filename: app-config-rhdh.yaml extraContainers: - command: - adt - server image: >- registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest imagePullPolicy: IfNotPresent name: ansible-devtools-server ports: - containerPort: 8000" ]
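For the pull secret described in the sidecar-container section, an alternative to importing the generated OpenShift Secret YAML is to create the secret directly with oc. This is a sketch only; the secret name, service account ID, and token are placeholders, and the secret name must match the entry you add under pullSecrets in the Helm chart.

# Create a registry.redhat.io pull secret from Registry Service Account
# credentials (all values below are placeholders).
oc create secret docker-registry rhdh-pull-secret \
  --docker-server=registry.redhat.io \
  --docker-username='12345678|my-service-account' \
  --docker-password='<service_account_token>' \
  -n <YOUR_DEVELOPER_HUB_PROJECT>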
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_ansible_plug-ins_for_red_hat_developer_hub/rhdh-install-ocp-helm_aap-plugin-rhdh-installing
Chapter 6. Network Policy
Chapter 6. Network Policy As a user with the admin role, you can create a network policy for the netobserv namespace. 6.1. Creating a network policy for Network Observability You might need to create a network policy to secure ingress traffic to the netobserv namespace. In the web console, you can create a network policy using the form view. Procedure Navigate to Networking NetworkPolicies . Select the netobserv project from the Project dropdown menu. Name the policy. For this example, the policy name is allow-ingress . Click Add ingress rule three times to create three ingress rules. Specify the following in the form: Make the following specifications for the first Ingress rule : From the Add allowed source dropdown menu, select Allow pods from the same namespace . Make the following specifications for the second Ingress rule : From the Add allowed source dropdown menu, select Allow pods from inside the cluster . Click + Add namespace selector . Add the label, kubernetes.io/metadata.name , and the selector, openshift-console . Make the following specifications for the third Ingress rule : From the Add allowed source dropdown menu, select Allow pods from inside the cluster . Click + Add namespace selector . Add the label, kubernetes.io/metadata.name , and the selector, openshift-monitoring . Verification Navigate to Observe Network Traffic . View the Traffic Flows tab, or any tab, to verify that the data is displayed. Navigate to Observe Dashboards . In the NetObserv/Health selection, verify that the flows are being ingested and sent to Loki, which is represented in the first graph. 6.2. Example network policy The following annotates an example NetworkPolicy object for the netobserv namespace: Sample network policy kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-ingress namespace: netobserv spec: podSelector: {} 1 ingress: - from: - podSelector: {} 2 namespaceSelector: 3 matchLabels: kubernetes.io/metadata.name: openshift-console - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring policyTypes: - Ingress status: {} 1 A selector that describes the pods to which the policy applies. The policy object can only select pods in the project that defines the NetworkPolicy object. In this documentation, it would be the project in which the Network Observability Operator is installed, which is the netobserv project. 2 A selector that matches the pods from which the policy object allows ingress traffic. The default is that the selector matches pods in the same namespace as the NetworkPolicy . 3 When the namespaceSelector is specified, the selector matches pods in the specified namespace. Additional resources Creating a network policy using the CLI
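If you prefer the CLI to the web console form, a minimal equivalent flow looks like the following, assuming the sample policy above is saved locally as allow-ingress.yaml.

# Apply the sample NetworkPolicy to the netobserv namespace.
oc apply -f allow-ingress.yaml -n netobserv

# Confirm the policy exists and review the resulting ingress rules.
oc get networkpolicy -n netobserv
oc describe networkpolicy allow-ingress -n netobserv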
[ "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-ingress namespace: netobserv spec: podSelector: {} 1 ingress: - from: - podSelector: {} 2 namespaceSelector: 3 matchLabels: kubernetes.io/metadata.name: openshift-console - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-monitoring policyTypes: - Ingress status: {}" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/network_observability/network-observability-network-policy
Chapter 1. New features and enhancements
Chapter 1. New features and enhancements Red Hat JBoss Core Services (JBCS) 2.4.57 Service Pack 2 includes the following new features and enhancements. 1.1. JBCS support for Apache HTTP Server 2.4.57 on RHEL 9 From the 2.4.57 Service Pack 2 release onward, JBCS also provides an archive file distribution of the Apache HTTP Server 2.4.57 for Red Hat Enterprise Linux (RHEL) 9 systems. Important Support is available for installing JBCS on RHEL 9 from an archive file only. JBCS does not provide an RPM distribution of the Apache HTTP Server 2.4.57 for RHEL 9 systems. If you want to install the Apache HTTP Server from RPM packages on RHEL 9, you can use the Application Streams feature of RHEL. For more information about the different installation options, see the Red Hat JBoss Core Services Apache HTTP Server Installation Guide . Note The base archive file for installing the JBCS Apache HTTP Server 2.4.57 on RHEL 9 is named Red Hat JBoss Core Services Apache HTTP Server 2.4.57 Patch 02 for RHEL 9 x86_64 . 1.2. JBCS support for MDExternalAccountBinding JBCS 2.4.57 Service Pack 2 introduces support for the MDExternalAccountBinding directive. This directive enables you to configure values for Automated Certificate Management Environment (ACME) external account binding, which allows clients to bind registrations to an existing customer account on ACME servers. For more information, see MDExternalAccountBinding Directive .
null
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_2_release_notes/new_features_and_enhancements
Chapter 4. RHEL 8.2.1 release
Chapter 4. RHEL 8.2.1 release Red Hat makes Red Hat Enterprise Linux 8 content available quarterly, in between minor releases (8.Y). The quarterly releases are numbered using the third digit (8.Y.1). The new features in the RHEL 8.2.1 release are described below. 4.1. New features JDK Mission Control rebased to version 7.1.1 The JDK Mission Control (JMC) profiler for HotSpot JVMs, provided by the jmc:rhel8 module stream, has been upgraded to version 7.1.1 with the RHEL 8.2.1 release. This update includes numerous bug fixes and enhancements, including: Multiple rule optimizations A new JOverflow view based on Standard Widget Toolkit (SWT) A new flame graph view A new way of latency visualization using the High Dynamic Range (HDR) Histogram The jmc:rhel8 module stream has two profiles: The common profile, which installs the entire JMC application The core profile, which installs only the core Java libraries ( jmc-core ) To install the common profile of the jmc:rhel8 module stream, use: Change the profile name to core to install only the jmc-core package. (BZ#1792519) Rust Toolset rebased to version 1.43 Rust Toolset has been updated to version 1.43. Notable changes include: Useful line numbers are now included in Option and Result panic messages where they were invoked. Expanded support for matching on subslice patterns. The matches! macro provides pattern matching that returns a boolean value. item fragments can be interpolated into traits, impls, and extern blocks. Improved type inference around primitives. Associated constants for floats and integers. To install the Rust Toolset module, run the following command as root : For usage information, see the Using Rust Toolset documentation. (BZ#1811997) Containers registries now support the skopeo sync command With this enhancement, users can use skopeo sync command to synchronize container registries and local registries. The skopeo sync command is useful to synchronize a local container registry mirror, and to populate registries running inside of air-gapped environments. The skopeo sync command requires both source ( --src ) and destination ( --dst ) transports to be specified separately. Available source and destination transports are docker (repository hosted on a container registry) and dir ( directory in a local directory path). The source transports also include yaml (local YAML file path). For information on the usage of skopeo sync , see the skopeo-sync man page. (BZ#1811779) Configuration file container.conf is now available With this enhancement, users and administrators can specify default configuration options and command-line flags for container engines. Container engines read the /usr/share/containers/containers.conf and /etc/containers/containers.conf files if they exist. In the rootless mode, container engines read the USDHOME/.config/containers/containers.conf files. Fields specified in the containers.conf file override the default options, as well as options in previously read containers.conf files. The container.conf file is shared between Podman and Buildah and replaces the libpod.conf file. (BZ#11826486) You can now log into and out from a registry server With this enhancement, you can log into and logout from a specified registry server using the skopeo login and skopeo logout commands. The skopeo login command reads in the username and password from standard input. The username and password can also be set using the --username (or -u ) and --password (or -p ) options. 
You can specify the path of the authentication file by setting the --authfile flag. The default path is ${XDG_RUNTIME_DIR}/containers/auth.json . For information on the usage of skopeo login and skopeo logout , see the skopeo-login and skopeo-logout man pages, respectively. (JIRA:RHELPLAN-47311) You can now reset the podman storage With this enhancement, users can use the podman system reset command to reset podman storage back to its initial state. The podman system reset command removes all pods, containers, images, and volumes. For more information, see the podman-system-reset man page. (JIRA:RHELPLAN-48941)
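The container-tools changes described above are easiest to see with a few commands. The registry names, paths, and image references below are placeholders chosen for illustration, not values mandated by the release.

# Mirror a repository from a registry into a local directory with skopeo sync.
skopeo sync --src docker --dst dir registry.access.redhat.com/ubi8/ubi /var/lib/registry-mirror

# Log in to a registry (prompts for the password), then log out again.
skopeo login --username myuser registry.example.com
skopeo logout registry.example.com

# Reset local Podman storage back to its initial state; this removes
# all pods, containers, images, and volumes.
podman system reset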
[ "yum module install jmc:rhel8/common", "yum module install rust-toolset" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.2_release_notes/rhel-8_2_1_release
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/service_binding/making-open-source-more-inclusive
Chapter 46. Netty
Chapter 46. Netty Both producer and consumer are supported The Netty component in Camel is a socket communication component, based on the Netty project version 4. Netty is a NIO client server framework which enables quick and easy development of networkServerInitializerFactory applications such as protocol servers and clients. Netty greatly simplifies and streamlines network programming such as TCP and UDP socket server. This camel component supports both producer and consumer endpoints. The Netty component has several options and allows fine-grained control of a number of TCP/UDP communication parameters (buffer sizes, keepAlives, tcpNoDelay, etc) and facilitates both In-Only and In-Out communication on a Camel route. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> 46.1. URI format The URI scheme for a netty component is as follows This component supports producer and consumer endpoints for both TCP and UDP. 46.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 46.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file ( application.properties|yaml ), or directly with Java code. 46.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 46.3. Component Options The Netty component supports 73 options, which are listed below. Name Description Default Type configuration (common) To use the NettyConfiguration as configuration when creating endpoints. NettyConfiguration disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. 
This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP. false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 10000 int backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean executorService (consumer (advanced)) To use the given EventExecutorGroup. EventExecutorGroup maximumPoolSize (consumer (advanced)) Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. int nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory networkInterface (consumer (advanced)) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. 
String noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory correlationManager (producer (advanced)) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelStateCorrelationManager lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. 
That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer (advanced)) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false boolean useByteBuf (producer (advanced)) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. 
The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: LINE NULL LINE TextLineDelimiter encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false boolean enabledProtocols (security) Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. 
false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 46.4. Endpoint Options The Netty endpoint is configured using URI syntax: with the following path and query parameters: 46.4.1. Path Parameters (3 parameters) Name Description Default Type protocol (common) Required The protocol to use which can be tcp or udp. Enum values: tcp udp String host (common) Required The hostname. For the consumer the hostname is localhost or 0.0.0.0. For the producer the hostname is the remote host to connect to. String port (common) Required The host port number. int 46.4.2. Query Parameters (71 parameters) Name Description Default Type disconnect (common) Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false boolean keepAlive (common) Setting to ensure socket is not closed due to inactivity. true boolean reuseAddress (common) Setting to facilitate socket multiplexing. true boolean reuseChannel (common) This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false boolean sync (common) Setting to set endpoint as one-way or request-response. true boolean tcpNoDelay (common) Setting to improve TCP protocol performance. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean broadcast (consumer) Setting to choose Multicast over UDP. false boolean clientMode (consumer) If the clientMode is true, netty consumer will connect the address as a TCP client. false boolean reconnect (consumer) Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true boolean reconnectInterval (consumer) Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 
10000 int backlog (consumer (advanced)) Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. int bossCount (consumer (advanced)) When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 int bossGroup (consumer (advanced)) Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. EventLoopGroup disconnectOnNoReply (consumer (advanced)) If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern nettyServerBootstrapFactory (consumer (advanced)) To use a custom NettyServerBootstrapFactory. NettyServerBootstrapFactory networkInterface (consumer (advanced)) When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String noReplyLogLevel (consumer (advanced)) If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverClosedChannelExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. Enum values: TRACE DEBUG INFO WARN ERROR OFF DEBUG LoggingLevel serverExceptionCaughtLogLevel (consumer (advanced)) If the server (NettyConsumer) catches an exception then its logged using this logging level. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel serverInitializerFactory (consumer (advanced)) To use a custom ServerInitializerFactory. ServerInitializerFactory usingExecutorService (consumer (advanced)) Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true boolean connectTimeout (producer) Time to wait for a socket connection to be available. Value is in milliseconds. 10000 int lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean requestTimeout (producer) Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. long clientInitializerFactory (producer (advanced)) To use a custom ClientInitializerFactory. ClientInitializerFactory correlationManager (producer (advanced)) To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. NettyCamelStateCorrelationManager lazyChannelCreation (producer (advanced)) Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true boolean producerPoolEnabled (producer (advanced)) Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true boolean producerPoolMaxIdle (producer (advanced)) Sets the cap on the number of idle instances in the pool. 100 int producerPoolMaxTotal (producer (advanced)) Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 int producerPoolMinEvictableIdle (producer (advanced)) Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 300000 long producerPoolMinIdle (producer (advanced)) Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. int udpConnectionlessSending (producer (advanced)) This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false boolean useByteBuf (producer (advanced)) If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false boolean hostnameVerification ( security) To enable/disable hostname verification on SSLEngine. false boolean allowSerializedHeaders (advanced) Only used for TCP when transferExchange is true. 
When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false boolean channelGroup (advanced) To use a explicit ChannelGroup. ChannelGroup nativeTransport (advanced) Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false boolean options (advanced) Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map receiveBufferSize (advanced) The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 int receiveBufferSizePredictor (advanced) Configures the buffer size predictor. See details at Jetty documentation and this mail thread. int sendBufferSize (advanced) The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 int synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferExchange (advanced) Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean udpByteArrayCodec (advanced) For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false boolean workerCount (advanced) When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. int workerGroup (advanced) To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. EventLoopGroup allowDefaultCodec (codec) The netty component installs a default codec if both, encoder/decoder is null and textline is false. Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true boolean autoAppendDelimiter (codec) Whether or not to auto append missing end delimiter when sending using the textline codec. true boolean decoderMaxLineLength (codec) The max line length to use for the textline codec. 1024 int decoders (codec) A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List delimiter (codec) The delimiter to use for the textline codec. Possible values are LINE and NULL. Enum values: LINE NULL LINE TextLineDelimiter encoders (codec) A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. List encoding (codec) The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. 
String textline (codec) Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false boolean enabledProtocols (security) Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String keyStoreFile (security) Client side certificate keystore to be used for encryption. File keyStoreFormat (security) Keystore format to be used for payload encryption. Defaults to JKS if not set. String keyStoreResource (security) Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String needClientAuth (security) Configures whether the server needs client authentication when using SSL. false boolean passphrase (security) Password setting to use in order to encrypt/decrypt payloads sent using SSH. String securityProvider (security) Security provider to be used for payload encryption. Defaults to SunX509 if not set. String ssl (security) Setting to specify whether SSL encryption is applied to this endpoint. false boolean sslClientCertHeaders (security) When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false boolean sslContextParameters (security) To configure security using SSLContextParameters. SSLContextParameters sslHandler (security) Reference to a class that could be used to return an SSL Handler. SslHandler trustStoreFile (security) Server side certificate keystore to be used for encryption. File trustStoreResource (security) Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String 46.5. Registry based Options Codec Handlers and SSL Keystores can be enlisted in the Registry, such as in the Spring XML file. The values that could be passed in, are the following: Name Description passphrase password setting to use in order to encrypt/decrypt payloads sent using SSH keyStoreFormat keystore format to be used for payload encryption. Defaults to "JKS" if not set securityProvider Security provider to be used for payload encryption. Defaults to "SunX509" if not set. keyStoreFile deprecated: Client side certificate keystore to be used for encryption trustStoreFile deprecated: Server side certificate keystore to be used for encryption keyStoreResource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with "classpath:" , "file:" , or "http:" to load the resource from different systems. trustStoreResource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with "classpath:" , "file:" , or "http:" to load the resource from different systems. sslHandler Reference to a class that could be used to return an SSL Handler encoder A custom ChannelHandler class that can be used to perform special marshalling of outbound payloads. Must override io.netty.channel.ChannelInboundHandlerAdapter. encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. 
Just remember to prefix the value with # so Camel knows it should lookup. decoder A custom ChannelHandler class that can be used to perform special marshalling of inbound payloads. Must override io.netty.channel.ChannelOutboundHandlerAdapter. decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. Note Read below about using non shareable encoders/decoders. 46.5.1. Using non shareable encoders or decoders If your encoders or decoders are not shareable (e.g. they don't have the @Sharable class annotation), then your encoder/decoder must implement the org.apache.camel.component.netty.ChannelHandlerFactory interface, and return a new instance in the newChannelHandler method. This is to ensure the encoder/decoder can safely be used. If this is not the case, then the Netty component will log a WARN when an endpoint is created. The Netty component offers an org.apache.camel.component.netty.ChannelHandlerFactories factory class that has a number of commonly used factory methods. 46.6. Sending Messages to/from a Netty endpoint 46.6.1. Netty Producer In Producer mode, the component provides the ability to send payloads to a socket endpoint using either TCP or UDP protocols (with optional SSL support). The producer mode supports both one-way and request-response based operations. 46.6.2. Netty Consumer In Consumer mode, the component provides the ability to listen on a specified socket using either TCP or UDP protocols (with optional SSL support), receive requests on the socket as text/XML, binary, or serialized object payloads, and send them along on a route as message exchanges. The consumer mode supports both one-way and request-response based operations. 46.7. Examples 46.7.1. A UDP Netty endpoint using Request-Reply and serialized object payload Note that Object serialization is not allowed by default, and so a decoder must be configured. @BindToRegistry("decoder") public ChannelHandler getDecoder() throws Exception { return new DefaultChannelHandlerFactory() { @Override public ChannelHandler newChannelHandler() { return new DatagramPacketObjectDecoder(ClassResolvers.weakCachingResolver(null)); } }; } RouteBuilder builder = new RouteBuilder() { public void configure() { from("netty:udp://0.0.0.0:5155?sync=true&decoders=#decoder") .process(new Processor() { public void process(Exchange exchange) throws Exception { Poetry poetry = (Poetry) exchange.getIn().getBody(); // Process poetry in some way exchange.getOut().setBody("Message received"); } }); } }; 46.7.2. A TCP based Netty consumer endpoint using One-way communication RouteBuilder builder = new RouteBuilder() { public void configure() { from("netty:tcp://0.0.0.0:5150") .to("mock:result"); } }; 46.7.3. An SSL/TCP based Netty consumer endpoint using Request-Reply communication Using the JSSE Configuration Utility The Netty component supports SSL/TLS configuration through the Camel JSSE Configuration Utility. This utility greatly decreases the amount of component-specific code you need to write and is configurable at the endpoint and component levels. The following examples demonstrate how to use the utility with the Netty component.
Programmatic configuration of the component KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource("/users/home/server/keystore.jks"); ksp.setPassword("keystorePassword"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword("keyPassword"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); NettyComponent nettyComponent = getContext().getComponent("netty", NettyComponent.class); nettyComponent.setSslContextParameters(scp); Spring DSL based configuration of endpoint ... <camel:sslContextParameters id="sslContextParameters"> <camel:keyManagers keyPassword="keyPassword"> <camel:keyStore resource="/users/home/server/keystore.jks" password="keystorePassword"/> </camel:keyManagers> </camel:sslContextParameters>... ... <to uri="netty:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters"/> ... Using Basic SSL/TLS configuration on the Netty Component Registry registry = context.getRegistry(); registry.bind("password", "changeit"); registry.bind("ksf", new File("src/test/resources/keystore.jks")); registry.bind("tsf", new File("src/test/resources/keystore.jks")); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = "netty:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password" + "&keyStoreFile=#ksf&trustStoreFile=#tsf"; String return_string = "When You Go Home, Tell Them Of Us And Say," + "For Your Tomorrow, We Gave Our Today."; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } }); } }); Getting access to SSLSession and the client certificate You can get access to the javax.net.ssl.SSLSession if you, for example, need to get details about the client certificate. When ssl=true then the Netty component will store the SSLSession as a header on the Camel Message as shown below: SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class); // get the first certificate which is the client certificate javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0]; Principal principal = cert.getSubjectDN(); Remember to set needClientAuth=true to authenticate the client, otherwise SSLSession cannot access information about the client certificate, and you may get an exception javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated. You may also get this exception if the client certificate is expired or otherwise not valid. Note The option sslClientCertHeaders can be set to true which then enriches the Camel Message with headers having details about the client certificate. For example the subject name is readily available in the header CamelNettySSLClientCertSubjectName. 46.7.4. Using Multiple Codecs In certain cases it may be necessary to add chains of encoders and decoders to the netty pipeline. To add multiple codecs to a Camel Netty endpoint, the 'encoders' and 'decoders' URI parameters should be used. Like the 'encoder' and 'decoder' parameters, they are used to supply references (lists of ChannelUpstreamHandlers and ChannelDownstreamHandlers) that should be added to the pipeline. Note that if encoders is specified then the encoder param will be ignored; similarly for decoders and the decoder param. Note Read further above about using non shareable encoders/decoders. The lists of codecs need to be added to the Camel registry so they can be resolved when the endpoint is created.
ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4); StringDecoder stringDecoder = new StringDecoder(); registry.bind("length-decoder", lengthDecoder); registry.bind("string-decoder", stringDecoder); LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4); StringEncoder stringEncoder = new StringEncoder(); registry.bind("length-encoder", lengthEncoder); registry.bind("string-encoder", stringEncoder); List<ChannelHandler> decoders = new ArrayList<ChannelHandler>(); decoders.add(lengthDecoder); decoders.add(stringDecoder); List<ChannelHandler> encoders = new ArrayList<ChannelHandler>(); encoders.add(lengthEncoder); encoders.add(stringEncoder); registry.bind("encoders", encoders); registry.bind("decoders", decoders); Spring's native collections support can be used to specify the codec lists in an application context <util:list id="decoders" list-class="java.util.LinkedList"> <bean class="org.apache.camel.component.netty.ChannelHandlerFactories" factory-method="newLengthFieldBasedFrameDecoder"> <constructor-arg value="1048576"/> <constructor-arg value="0"/> <constructor-arg value="4"/> <constructor-arg value="0"/> <constructor-arg value="4"/> </bean> <bean class="io.netty.handler.codec.string.StringDecoder"/> </util:list> <util:list id="encoders" list-class="java.util.LinkedList"> <bean class="io.netty.handler.codec.LengthFieldPrepender"> <constructor-arg value="4"/> </bean> <bean class="io.netty.handler.codec.string.StringEncoder"/> </util:list> <bean id="length-encoder" class="io.netty.handler.codec.LengthFieldPrepender"> <constructor-arg value="4"/> </bean> <bean id="string-encoder" class="io.netty.handler.codec.string.StringEncoder"/> <bean id="length-decoder" class="org.apache.camel.component.netty.ChannelHandlerFactories" factory-method="newLengthFieldBasedFrameDecoder"> <constructor-arg value="1048576"/> <constructor-arg value="0"/> <constructor-arg value="4"/> <constructor-arg value="0"/> <constructor-arg value="4"/> </bean> <bean id="string-decoder" class="io.netty.handler.codec.string.StringDecoder"/> The bean names can then be used in netty endpoint definitions either as a comma separated list or contained in a List e.g. from("direct:multiple-codec").to("netty:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false"); from("netty:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false").to("mock:multiple-codec"); or via XML. <camelContext id="multiple-netty-codecs-context" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:multiple-codec"/> <to uri="netty:tcp://0.0.0.0:5150?encoders=#encoders&amp;sync=false"/> </route> <route> <from uri="netty:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&amp;sync=false"/> <to uri="mock:multiple-codec"/> </route> </camelContext> 46.8. Closing Channel When Complete When acting as a server you sometimes want to close the channel when, for example, a client conversion is finished. You can do this by simply setting the endpoint option disconnect=true . However you can also instruct Camel on a per message basis as follows. To instruct Camel to close the channel, you should add a header with the key CamelNettyCloseChannelWhenComplete set to a boolean true value. 
For instance, the example below will close the channel after it has written the bye message back to the client: from("netty:tcp://0.0.0.0:8080").process(new Processor() { public void process(Exchange exchange) throws Exception { String body = exchange.getIn().getBody(String.class); exchange.getOut().setBody("Bye " + body); // some condition which determines if we should close if (close) { exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true); } } }); Adding custom channel pipeline factories to gain complete control over a created pipeline. 46.9. Custom pipeline Custom channel pipelines provide complete control to the user over the handler/interceptor chain by inserting custom handler(s), encoder(s) & decoder(s) without having to specify them in the Netty Endpoint URL in a very simple way. In order to add a custom pipeline, a custom channel pipeline factory must be created and registered with the context via the context registry (Registry, or the camel-spring ApplicationContextRegistry etc). A custom pipeline factory must be constructed as follows A Producer linked channel pipeline factory must extend the abstract class ClientPipelineFactory . A Consumer linked channel pipeline factory must extend the abstract class ServerInitializerFactory . The classes should override the initChannel() method in order to insert custom handler(s), encoder(s) and decoder(s). Not overriding the initChannel() method creates a pipeline with no handlers, encoders or decoders wired to the pipeline. The example below shows how ServerInitializerFactory factory may be created 46.9.1. Using custom pipeline factory public class SampleServerInitializerFactory extends ServerInitializerFactory { private int maxLineSize = 1024; protected void initChannel(Channel ch) throws Exception { ChannelPipeline channelPipeline = ch.pipeline(); channelPipeline.addLast("encoder-SD", new StringEncoder(CharsetUtil.UTF_8)); channelPipeline.addLast("decoder-DELIM", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter())); channelPipeline.addLast("decoder-SD", new StringDecoder(CharsetUtil.UTF_8)); // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message etc. channelPipeline.addLast("handler", new ServerChannelHandler(consumer)); } } The custom channel pipeline factory can then be added to the registry and instantiated/utilized on a camel route in the following way Registry registry = camelContext.getRegistry(); ServerInitializerFactory factory = new TestServerInitializerFactory(); registry.bind("spf", factory); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = "netty:tcp://0.0.0.0:5150?serverInitializerFactory=#spf" String return_string = "When You Go Home, Tell Them Of Us And Say," + "For Your Tomorrow, We Gave Our Today."; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } }); 46.10. Reusing Netty boss and worker thread pools Netty has two kind of thread pools: boss and worker. By default each Netty consumer and producer has their private thread pools. If you want to reuse these thread pools among multiple consumers or producers then the thread pools must be created and enlisted in the Registry. 
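If you prefer to create and enlist the shared worker pool in plain Java, a minimal sketch follows. It is only an illustration: the workerCount setter and the build() factory method mirror the property and factory-method used in the Spring XML below, and the sketch assumes build() returns the shared worker group; verify the exact signatures against your Camel version.
NettyWorkerPoolBuilder poolBuilder = new NettyWorkerPoolBuilder();
poolBuilder.setWorkerCount(2);                              // same value as the workerCount property in the XML below
EventLoopGroup sharedPool = poolBuilder.build();            // assumed return type; the builder creates the shared worker group
camelContext.getRegistry().bind("sharedPool", sharedPool);  // enlist it so routes can refer to it as #sharedPool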
For example, using Spring XML we can create a shared worker thread pool using the NettyWorkerPoolBuilder with 2 worker threads as shown below: <!-- use the worker pool builder to help create the shared thread pool --> <bean id="poolBuilder" class="org.apache.camel.component.netty.NettyWorkerPoolBuilder"> <property name="workerCount" value="2"/> </bean> <!-- the shared worker thread pool --> <bean id="sharedPool" class="org.jboss.netty.channel.socket.nio.WorkerPool" factory-bean="poolBuilder" factory-method="build" destroy-method="shutdown"> </bean> Note For the boss thread pool there is an org.apache.camel.component.netty.NettyServerBossPoolBuilder builder for Netty consumers, and an org.apache.camel.component.netty.NettyClientBossPoolBuilder for Netty producers. Then in the Camel routes we can refer to this worker pool by configuring the workerPool option in the URI as shown below: <route> <from uri="netty:tcp://0.0.0.0:5021?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false"/> <to uri="log:result"/> ... </route> And if we have another route we can refer to the same shared worker pool: <route> <from uri="netty:tcp://0.0.0.0:5022?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false"/> <to uri="log:result"/> ... </route> and so forth. 46.11. Multiplexing concurrent messages over a single connection with request/reply When using Netty for request/reply messaging via the netty producer, by default each message is sent via a non-shared (pooled) connection. This ensures that each reply is automatically mapped back to the correct request for further routing in Camel. In other words, correlation between request/reply messages happens out-of-the-box because the reply comes back on the same connection that was used for sending the request, and this connection is not shared with others. When the response comes back, the connection is returned to the connection pool, where it can be reused by others. However, if you want to multiplex concurrent request/responses on a single shared connection, then you need to turn off connection pooling by setting producerPoolEnabled=false. This means there is a potential issue with interleaved responses if replies come back out of order. Therefore you need a correlation id in both the request and reply messages so you can properly correlate each reply to the Camel callback that is responsible for continuing to process the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as the correlation manager and configure it via the correlationManager=#myManager option; a minimal wiring sketch is included after the Spring Boot configuration section below. Note We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. You can find an example with the Apache Camel source code in the examples directory under the camel-example-netty-custom-correlation directory. 46.12. Spring Boot Auto-Configuration When using netty with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-starter</artifactId> </dependency> The component supports 74 options, which are listed below. Name Description Default Type camel.component.netty.allow-default-codec The netty component installs a default codec if both, encoder/decoder is null and textline is false.
Setting allowDefaultCodec to false prevents the netty component from installing a default codec as the first element in the filter chain. true Boolean camel.component.netty.allow-serialized-headers Only used for TCP when transferExchange is true. When set to true, serializable objects in headers and properties will be added to the exchange. Otherwise Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty.auto-append-delimiter Whether or not to auto append missing end delimiter when sending using the textline codec. true Boolean camel.component.netty.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.netty.backlog Allows to configure a backlog for netty consumer (server). Note the backlog is just a best effort depending on the OS. Setting this option to a value such as 200, 500 or 1000, tells the TCP stack how long the accept queue can be If this option is not configured, then the backlog depends on OS setting. Integer camel.component.netty.boss-count When netty works on nio mode, it uses default bossCount parameter from Netty, which is 1. User can use this option to override the default bossCount from Netty. 1 Integer camel.component.netty.boss-group Set the BossGroup which could be used for handling the new connection of the server side across the NettyEndpoint. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup camel.component.netty.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.netty.broadcast Setting to choose Multicast over UDP. false Boolean camel.component.netty.channel-group To use a explicit ChannelGroup. The option is a io.netty.channel.group.ChannelGroup type. ChannelGroup camel.component.netty.client-initializer-factory To use a custom ClientInitializerFactory. The option is a org.apache.camel.component.netty.ClientInitializerFactory type. ClientInitializerFactory camel.component.netty.client-mode If the clientMode is true, netty consumer will connect the address as a TCP client. false Boolean camel.component.netty.configuration To use the NettyConfiguration as configuration when creating endpoints. The option is a org.apache.camel.component.netty.NettyConfiguration type. NettyConfiguration camel.component.netty.connect-timeout Time to wait for a socket connection to be available. Value is in milliseconds. 10000 Integer camel.component.netty.correlation-manager To use a custom correlation manager to manage how request and reply messages are mapped when using request/reply with the netty producer. This should only be used if you have a way to map requests together with replies such as if there is correlation ids in both the request and reply messages. This can be used if you want to multiplex concurrent messages on the same channel (aka connection) in netty. 
When doing this you must have a way to correlate the request and reply messages so you can store the right reply on the inflight Camel Exchange before its continued routed. We recommend extending the TimeoutCorrelationManagerSupport when you build custom correlation managers. This provides support for timeout and other complexities you otherwise would need to implement as well. See also the producerPoolEnabled option for more details. The option is a org.apache.camel.component.netty.NettyCamelStateCorrelationManager type. NettyCamelStateCorrelationManager camel.component.netty.decoder-max-line-length The max line length to use for the textline codec. 1024 Integer camel.component.netty.decoders A list of decoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String camel.component.netty.delimiter The delimiter to use for the textline codec. Possible values are LINE and NULL. TextLineDelimiter camel.component.netty.disconnect Whether or not to disconnect(close) from Netty Channel right after use. Can be used for both consumer and producer. false Boolean camel.component.netty.disconnect-on-no-reply If sync is enabled then this option dictates NettyConsumer if it should disconnect where there is no reply to send back. true Boolean camel.component.netty.enabled Whether to enable auto configuration of the netty component. This is enabled by default. Boolean camel.component.netty.enabled-protocols Which protocols to enable when using SSL. TLSv1,TLSv1.1,TLSv1.2 String camel.component.netty.encoders A list of encoders to be used. You can use a String which have values separated by comma, and have the values be looked up in the Registry. Just remember to prefix the value with # so Camel knows it should lookup. String camel.component.netty.encoding The encoding (a charset name) to use for the textline codec. If not provided, Camel will use the JVM default Charset. String camel.component.netty.executor-service To use the given EventExecutorGroup. The option is a io.netty.util.concurrent.EventExecutorGroup type. EventExecutorGroup camel.component.netty.hostname-verification To enable/disable hostname verification on SSLEngine. false Boolean camel.component.netty.keep-alive Setting to ensure socket is not closed due to inactivity. true Boolean camel.component.netty.key-store-file Client side certificate keystore to be used for encryption. File camel.component.netty.key-store-format Keystore format to be used for payload encryption. Defaults to JKS if not set. String camel.component.netty.key-store-resource Client side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty.lazy-channel-creation Channels can be lazily created to avoid exceptions, if the remote server is not up and running when the Camel producer is started. true Boolean camel.component.netty.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.netty.maximum-pool-size Sets a maximum thread pool size for the netty consumer ordered thread pool. The default size is 2 x cpu_core plus 1. Setting this value to eg 10 will then use 10 threads unless 2 x cpu_core plus 1 is a higher value, which then will override and be used. For example if there are 8 cores, then the consumer thread pool will be 17. This thread pool is used to route messages received from Netty by Camel. We use a separate thread pool to ensure ordering of messages and also in case some messages will block, then nettys worker threads (event loop) wont be affected. Integer camel.component.netty.native-transport Whether to use native transport instead of NIO. Native transport takes advantage of the host operating system and is only supported on some platforms. You need to add the netty JAR for the host operating system you are using. See more details at: . false Boolean camel.component.netty.need-client-auth Configures whether the server needs client authentication when using SSL. false Boolean camel.component.netty.netty-server-bootstrap-factory To use a custom NettyServerBootstrapFactory. The option is a org.apache.camel.component.netty.NettyServerBootstrapFactory type. NettyServerBootstrapFactory camel.component.netty.network-interface When using UDP then this option can be used to specify a network interface by its name, such as eth0 to join a multicast group. String camel.component.netty.no-reply-log-level If sync is enabled this option dictates NettyConsumer which logging level to use when logging a there is no reply to send back. LoggingLevel camel.component.netty.options Allows to configure additional netty options using option. as prefix. For example option.child.keepAlive=false to set the netty option child.keepAlive=false. See the Netty documentation for possible options that can be used. Map camel.component.netty.passphrase Password setting to use in order to encrypt/decrypt payloads sent using SSH. String camel.component.netty.producer-pool-enabled Whether producer pool is enabled or not. Important: If you turn this off then a single shared connection is used for the producer, also if you are doing request/reply. That means there is a potential issue with interleaved responses if replies comes back out-of-order. Therefore you need to have a correlation id in both the request and reply messages so you can properly correlate the replies to the Camel callback that is responsible for continue processing the message in Camel. To do this you need to implement NettyCamelStateCorrelationManager as correlation manager and configure it via the correlationManager option. See also the correlationManager option for more details. true Boolean camel.component.netty.producer-pool-max-idle Sets the cap on the number of idle instances in the pool. 100 Integer camel.component.netty.producer-pool-max-total Sets the cap on the number of objects that can be allocated by the pool (checked out to clients, or idle awaiting checkout) at a given time. Use a negative value for no limit. -1 Integer camel.component.netty.producer-pool-min-evictable-idle Sets the minimum amount of time (value in millis) an object may sit idle in the pool before it is eligible for eviction by the idle object evictor. 
300000 Long camel.component.netty.producer-pool-min-idle Sets the minimum number of instances allowed in the producer pool before the evictor thread (if active) spawns new objects. Integer camel.component.netty.receive-buffer-size The TCP/UDP buffer sizes to be used during inbound communication. Size is bytes. 65536 Integer camel.component.netty.receive-buffer-size-predictor Configures the buffer size predictor. See details at Jetty documentation and this mail thread. Integer camel.component.netty.reconnect Used only in clientMode in consumer, the consumer will attempt to reconnect on disconnection if this is enabled. true Boolean camel.component.netty.reconnect-interval Used if reconnect and clientMode is enabled. The interval in milli seconds to attempt reconnection. 10000 Integer camel.component.netty.request-timeout Allows to use a timeout for the Netty producer when calling a remote server. By default no timeout is in use. The value is in milli seconds, so eg 30000 is 30 seconds. The requestTimeout is using Netty's ReadTimeoutHandler to trigger the timeout. Long camel.component.netty.reuse-address Setting to facilitate socket multiplexing. true Boolean camel.component.netty.reuse-channel This option allows producers and consumers (in client mode) to reuse the same Netty Channel for the lifecycle of processing the Exchange. This is useful if you need to call a server multiple times in a Camel route and want to use the same network connection. When using this, the channel is not returned to the connection pool until the Exchange is done; or disconnected if the disconnect option is set to true. The reused Channel is stored on the Exchange as an exchange property with the key NettyConstants#NETTY_CHANNEL which allows you to obtain the channel during routing and use it as well. false Boolean camel.component.netty.security-provider Security provider to be used for payload encryption. Defaults to SunX509 if not set. String camel.component.netty.send-buffer-size The TCP/UDP buffer sizes to be used during outbound communication. Size is bytes. 65536 Integer camel.component.netty.server-closed-channel-exception-caught-log-level If the server (NettyConsumer) catches an java.nio.channels.ClosedChannelException then its logged using this logging level. This is used to avoid logging the closed channel exceptions, as clients can disconnect abruptly and then cause a flood of closed exceptions in the Netty server. LoggingLevel camel.component.netty.server-exception-caught-log-level If the server (NettyConsumer) catches an exception then its logged using this logging level. LoggingLevel camel.component.netty.server-initializer-factory To use a custom ServerInitializerFactory. The option is a org.apache.camel.component.netty.ServerInitializerFactory type. ServerInitializerFactory camel.component.netty.ssl Setting to specify whether SSL encryption is applied to this endpoint. false Boolean camel.component.netty.ssl-client-cert-headers When enabled and in SSL mode, then the Netty consumer will enrich the Camel Message with headers having information about the client certificate such as subject name, issuer name, serial number, and the valid date range. false Boolean camel.component.netty.ssl-context-parameters To configure security using SSLContextParameters. The option is a org.apache.camel.support.jsse.SSLContextParameters type. SSLContextParameters camel.component.netty.ssl-handler Reference to a class that could be used to return an SSL Handler. The option is a io.netty.handler.ssl.SslHandler type. 
SslHandler camel.component.netty.sync Setting to set endpoint as one-way or request-response. true Boolean camel.component.netty.tcp-no-delay Setting to improve TCP protocol performance. true Boolean camel.component.netty.textline Only used for TCP. If no codec is specified, you can use this flag to indicate a text line based codec; if not specified or the value is false, then Object Serialization is assumed over TCP - however only Strings are allowed to be serialized by default. false Boolean camel.component.netty.transfer-exchange Only used for TCP. You can transfer the exchange over the wire instead of just the body. The following fields are transferred: In body, Out body, fault body, In headers, Out headers, fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.netty.trust-store-file Server side certificate keystore to be used for encryption. File camel.component.netty.trust-store-resource Server side certificate keystore to be used for encryption. Is loaded by default from classpath, but you can prefix with classpath:, file:, or http: to load the resource from different systems. String camel.component.netty.udp-byte-array-codec For UDP only. If enabled the using byte array codec instead of Java serialization protocol. false Boolean camel.component.netty.udp-connectionless-sending This option supports connection less udp sending which is a real fire and forget. A connected udp send receive the PortUnreachableException if no one is listen on the receiving port. false Boolean camel.component.netty.use-byte-buf If the useByteBuf is true, netty producer will turn the message body into ByteBuf before sending it out. false Boolean camel.component.netty.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.netty.using-executor-service Whether to use ordered thread pool, to ensure events are processed orderly on the same channel. true Boolean camel.component.netty.worker-count When netty works on nio mode, it uses default workerCount parameter from Netty (which is cpu_core_threads x 2). User can use this option to override the default workerCount from Netty. Integer camel.component.netty.worker-group To use a explicit EventLoopGroup as the boss thread pool. For example to share a thread pool with multiple consumers or producers. By default each consumer or producer has their own worker pool with 2 x cpu count core threads. The option is a io.netty.channel.EventLoopGroup type. EventLoopGroup
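As a quick reference for the table above, the sketch below shows how a few of these options could be set in a Spring Boot application.properties file. The property keys are taken from the table; the values are illustrative only and not recommendations.
# application.properties (illustrative values)
camel.component.netty.connect-timeout = 10000
camel.component.netty.request-timeout = 30000
camel.component.netty.producer-pool-enabled = true
camel.component.netty.native-transport = false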
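Returning to Section 46.11, the following is a minimal wiring sketch for multiplexing request/reply messages over a single shared connection. MyCorrelationManager is a hypothetical name for your own NettyCamelStateCorrelationManager implementation (for example, one that extends TimeoutCorrelationManagerSupport as recommended above), and the host, port, and route endpoint are placeholders.
NettyCamelStateCorrelationManager myManager = new MyCorrelationManager(); // hypothetical user-provided implementation
camelContext.getRegistry().bind("myManager", myManager);

camelContext.addRoutes(new RouteBuilder() {
    public void configure() {
        from("direct:send")
            // disable the producer pool so a single shared connection is multiplexed,
            // and let the custom manager correlate requests with replies
            .to("netty:tcp://remotehost:5150?sync=true"
                + "&producerPoolEnabled=false"
                + "&correlationManager=#myManager");
    }
});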
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-netty</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>", "netty:tcp://0.0.0.0:99999[?options] netty:udp://remotehost:99999/[?options]", "netty:protocol://host:port", "@BindToRegistry(\"decoder\") public ChannelHandler getDecoder() throws Exception { return new DefaultChannelHandlerFactory() { @Override public ChannelHandler newChannelHandler() { return new DatagramPacketObjectDecoder(ClassResolvers.weakCachingResolver(null)); } }; } RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"netty:udp://0.0.0.0:5155?sync=true&decoders=#decoder\") .process(new Processor() { public void process(Exchange exchange) throws Exception { Poetry poetry = (Poetry) exchange.getIn().getBody(); // Process poetry in some way exchange.getOut().setBody(\"Message received); } } } };", "RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"netty:tcp://0.0.0.0:5150\") .to(\"mock:result\"); } };", "KeyStoreParameters ksp = new KeyStoreParameters(); ksp.setResource(\"/users/home/server/keystore.jks\"); ksp.setPassword(\"keystorePassword\"); KeyManagersParameters kmp = new KeyManagersParameters(); kmp.setKeyStore(ksp); kmp.setKeyPassword(\"keyPassword\"); SSLContextParameters scp = new SSLContextParameters(); scp.setKeyManagers(kmp); NettyComponent nettyComponent = getContext().getComponent(\"netty\", NettyComponent.class); nettyComponent.setSslContextParameters(scp);", "<camel:sslContextParameters id=\"sslContextParameters\"> <camel:keyManagers keyPassword=\"keyPassword\"> <camel:keyStore resource=\"/users/home/server/keystore.jks\" password=\"keystorePassword\"/> </camel:keyManagers> </camel:sslContextParameters> <to uri=\"netty:tcp://0.0.0.0:5150?sync=true&ssl=true&sslContextParameters=#sslContextParameters\"/>", "Registry registry = context.getRegistry(); registry.bind(\"password\", \"changeit\"); registry.bind(\"ksf\", new File(\"src/test/resources/keystore.jks\")); registry.bind(\"tsf\", new File(\"src/test/resources/keystore.jks\")); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = \"netty:tcp://0.0.0.0:5150?sync=true&ssl=true&passphrase=#password\" + \"&keyStoreFile=#ksf&trustStoreFile=#tsf\"; String return_string = \"When You Go Home, Tell Them Of Us And Say,\" + \"For Your Tomorrow, We Gave Our Today.\"; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } });", "SSLSession session = exchange.getIn().getHeader(NettyConstants.NETTY_SSL_SESSION, SSLSession.class); // get the first certificate which is client certificate javax.security.cert.X509Certificate cert = session.getPeerCertificateChain()[0]; Principal principal = cert.getSubjectDN();", "ChannelHandlerFactory lengthDecoder = ChannelHandlerFactories.newLengthFieldBasedFrameDecoder(1048576, 0, 4, 0, 4); StringDecoder stringDecoder = new StringDecoder(); registry.bind(\"length-decoder\", lengthDecoder); registry.bind(\"string-decoder\", stringDecoder); LengthFieldPrepender lengthEncoder = new LengthFieldPrepender(4); StringEncoder stringEncoder = new StringEncoder(); registry.bind(\"length-encoder\", lengthEncoder); registry.bind(\"string-encoder\", stringEncoder); List<ChannelHandler> decoders = new ArrayList<ChannelHandler>(); decoders.add(lengthDecoder); decoders.add(stringDecoder); List<ChannelHandler> encoders = new 
ArrayList<ChannelHandler>(); encoders.add(lengthEncoder); encoders.add(stringEncoder); registry.bind(\"encoders\", encoders); registry.bind(\"decoders\", decoders);", "<util:list id=\"decoders\" list-class=\"java.util.LinkedList\"> <bean class=\"org.apache.camel.component.netty.ChannelHandlerFactories\" factory-method=\"newLengthFieldBasedFrameDecoder\"> <constructor-arg value=\"1048576\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> </bean> <bean class=\"io.netty.handler.codec.string.StringDecoder\"/> </util:list> <util:list id=\"encoders\" list-class=\"java.util.LinkedList\"> <bean class=\"io.netty.handler.codec.LengthFieldPrepender\"> <constructor-arg value=\"4\"/> </bean> <bean class=\"io.netty.handler.codec.string.StringEncoder\"/> </util:list> <bean id=\"length-encoder\" class=\"io.netty.handler.codec.LengthFieldPrepender\"> <constructor-arg value=\"4\"/> </bean> <bean id=\"string-encoder\" class=\"io.netty.handler.codec.string.StringEncoder\"/> <bean id=\"length-decoder\" class=\"org.apache.camel.component.netty.ChannelHandlerFactories\" factory-method=\"newLengthFieldBasedFrameDecoder\"> <constructor-arg value=\"1048576\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> <constructor-arg value=\"0\"/> <constructor-arg value=\"4\"/> </bean> <bean id=\"string-decoder\" class=\"io.netty.handler.codec.string.StringDecoder\"/>", "from(\"direct:multiple-codec\").to(\"netty:tcp://0.0.0.0:{{port}}?encoders=#encoders&sync=false\"); from(\"netty:tcp://0.0.0.0:{{port}}?decoders=#length-decoder,#string-decoder&sync=false\").to(\"mock:multiple-codec\");", "<camelContext id=\"multiple-netty-codecs-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:multiple-codec\"/> <to uri=\"netty:tcp://0.0.0.0:5150?encoders=#encoders&amp;sync=false\"/> </route> <route> <from uri=\"netty:tcp://0.0.0.0:5150?decoders=#length-decoder,#string-decoder&amp;sync=false\"/> <to uri=\"mock:multiple-codec\"/> </route> </camelContext>", "from(\"netty:tcp://0.0.0.0:8080\").process(new Processor() { public void process(Exchange exchange) throws Exception { String body = exchange.getIn().getBody(String.class); exchange.getOut().setBody(\"Bye \" + body); // some condition which determines if we should close if (close) { exchange.getOut().setHeader(NettyConstants.NETTY_CLOSE_CHANNEL_WHEN_COMPLETE, true); } } });", "public class SampleServerInitializerFactory extends ServerInitializerFactory { private int maxLineSize = 1024; protected void initChannel(Channel ch) throws Exception { ChannelPipeline channelPipeline = ch.pipeline(); channelPipeline.addLast(\"encoder-SD\", new StringEncoder(CharsetUtil.UTF_8)); channelPipeline.addLast(\"decoder-DELIM\", new DelimiterBasedFrameDecoder(maxLineSize, true, Delimiters.lineDelimiter())); channelPipeline.addLast(\"decoder-SD\", new StringDecoder(CharsetUtil.UTF_8)); // here we add the default Camel ServerChannelHandler for the consumer, to allow Camel to route the message etc. 
channelPipeline.addLast(\"handler\", new ServerChannelHandler(consumer)); } }", "Registry registry = camelContext.getRegistry(); ServerInitializerFactory factory = new TestServerInitializerFactory(); registry.bind(\"spf\", factory); context.addRoutes(new RouteBuilder() { public void configure() { String netty_ssl_endpoint = \"netty:tcp://0.0.0.0:5150?serverInitializerFactory=#spf\" String return_string = \"When You Go Home, Tell Them Of Us And Say,\" + \"For Your Tomorrow, We Gave Our Today.\"; from(netty_ssl_endpoint) .process(new Processor() { public void process(Exchange exchange) throws Exception { exchange.getOut().setBody(return_string); } } } });", "<!-- use the worker pool builder to help create the shared thread pool --> <bean id=\"poolBuilder\" class=\"org.apache.camel.component.netty.NettyWorkerPoolBuilder\"> <property name=\"workerCount\" value=\"2\"/> </bean> <!-- the shared worker thread pool --> <bean id=\"sharedPool\" class=\"org.jboss.netty.channel.socket.nio.WorkerPool\" factory-bean=\"poolBuilder\" factory-method=\"build\" destroy-method=\"shutdown\"> </bean>", "<route> <from uri=\"netty:tcp://0.0.0.0:5021?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false\"/> <to uri=\"log:result\"/> </route>", "<route> <from uri=\"netty:tcp://0.0.0.0:5022?textline=true&amp;sync=true&amp;workerPool=#sharedPool&amp;usingExecutorService=false\"/> <to uri=\"log:result\"/> </route>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-netty-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-netty-component-starter
Chapter 2. Opting out of Telemetry
Chapter 2. Opting out of Telemetry The decision to opt out of telemetry should be based on your specific needs and requirements, as well as any applicable regulations or policies that you need to comply with. 2.1. Consequences of disabling Telemetry In Red Hat Advanced Cluster Security for Kubernetes (RHACS) version 4.0, you can opt out of Telemetry. However, telemetry is embedded as a core component, so opting out is strongly discouraged. Opting out of telemetry limits the ability of Red Hat to understand how everyone uses the product and which areas to prioritize for improvements. 2.2. Disabling Telemetry If you have configured Telemetry by setting the key in your environment, you can disable Telemetry data collection from the Red Hat Advanced Cluster Security for Kubernetes (RHACS) user interface (UI). Procedure In the RHACS portal, go to Platform Configuration > System Configuration . In the System Configuration header, click Edit . Scroll down and ensure that Online Telemetry Data Collection is set to Disabled.
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/telemetry/opting-out-of-telemetry
Chapter 3. Project deployment without Business Central
Chapter 3. Project deployment without Business Central As an alternative to developing and deploying projects in the Business Central interface, you can use independent Maven projects or your own Java applications to develop Red Hat Decision Manager projects and deploy them in KIE containers (deployment units) to a configured KIE Server. You can then use the KIE Server REST API to start, stop, or remove the KIE containers that contain the services and their project versions that you have built and deployed. This flexibility enables you to continue to use your existing application workflow to develop business assets using Red Hat Decision Manager features. Projects in Business Central are packaged automatically when you build and deploy the projects. For projects outside of Business Central, such as independent Maven projects or projects within a Java application, you must configure the KIE module descriptor settings in an appended kmodule.xml file or directly in your Java application in order to build and deploy the projects. 3.1. Configuring a KIE module descriptor file A KIE module is a Maven project or module with an additional metadata file META-INF/kmodule.xml . All Red Hat Decision Manager projects require a kmodule.xml file in order to be properly packaged and deployed. This kmodule.xml file is a KIE module descriptor that defines the KIE base and KIE session configurations for the assets in a project. A KIE base is a repository that contains all rules and other business assets in Red Hat Decision Manager but does not contain any runtime data. A KIE session stores and executes runtime data and is created from a KIE base or directly from a KIE container if you have defined the KIE session in the kmodule.xml file. If you create projects outside of Business Central, such as independent Maven projects or projects within a Java application, you must configure the KIE module descriptor settings in an appended kmodule.xml file or directly in your Java application in order to build and deploy the projects. Procedure In the ~/resources/META-INF directory of your project, create a kmodule.xml metadata file with at least the following content: <?xml version="1.0" encoding="UTF-8"?> <kmodule xmlns="http://www.drools.org/xsd/kmodule"> </kmodule> This empty kmodule.xml file is sufficient to produce a single default KIE base that includes all files found under your project resources path. The default KIE base also includes a single default KIE session that is triggered when you create a KIE container in your application at build time. 
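As a quick illustration of that default behavior, the following is a minimal sketch of how an application containing only the empty kmodule.xml might load the default KIE base, run the default KIE session, and release it. The Applicant fact class and the inserted values are assumptions for illustration only; any DRL files placed under the project resources path are picked up by the default KIE base.
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class DefaultKieBaseExample {

    // Assumed example fact type; rules under src/main/resources can match it
    public static class Applicant {
        private final String name;
        private final int age;
        public Applicant(String name, int age) { this.name = name; this.age = age; }
        public String getName() { return name; }
        public int getAge() { return age; }
    }

    public static void main(String[] args) {
        // Load the default KIE base and default KIE session declared by the empty kmodule.xml
        KieServices kieServices = KieServices.Factory.get();
        KieContainer kieContainer = kieServices.getKieClasspathContainer();
        KieSession kieSession = kieContainer.newKieSession();
        try {
            // Insert an example fact and execute all rules found under the project resources path
            kieSession.insert(new Applicant("Alice", 30));
            kieSession.fireAllRules();
        } finally {
            // Always release the session resources when processing is finished
            kieSession.dispose();
        }
    }
}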
The following example is a more advanced kmodule.xml file: <?xml version="1.0" encoding="UTF-8"?> <kmodule xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.drools.org/xsd/kmodule"> <configuration> <property key="drools.evaluator.supersetOf" value="org.mycompany.SupersetOfEvaluatorDefinition"/> </configuration> <kbase name="KBase1" default="true" eventProcessingMode="cloud" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg1"> <ksession name="KSession1_1" type="stateful" default="true" /> <ksession name="KSession1_2" type="stateful" default="true" beliefSystem="jtms" /> </kbase> <kbase name="KBase2" default="false" eventProcessingMode="stream" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1"> <ksession name="KSession2_1" type="stateless" default="true" clockType="realtime"> <fileLogger file="debugInfo" threaded="true" interval="10" /> <workItemHandlers> <workItemHandler name="name" type="new org.domain.WorkItemHandler()" /> </workItemHandlers> <listeners> <ruleRuntimeEventListener type="org.domain.RuleRuntimeListener" /> <agendaEventListener type="org.domain.FirstAgendaListener" /> <agendaEventListener type="org.domain.SecondAgendaListener" /> <processEventListener type="org.domain.ProcessListener" /> </listeners> </ksession> </kbase> </kmodule> This example defines two KIE bases. Specific packages of rule assets are included with both KIE bases. When you specify packages in this way, you must organize your rule files in a folder structure that reflects the specified packages. Two KIE sessions are instantiated from the KBase1 KIE base, and one KIE session from KBase2 . The KIE session from KBase2 is a stateless KIE session, which means that data from a previous invocation of the KIE session (the session state) is discarded between session invocations. That KIE session also specifies a file (or a console) logger, a WorkItemHandler , and listeners of the three supported types shown: ruleRuntimeEventListener , agendaEventListener , and processEventListener . The <configuration> element defines optional properties that you can use to further customize your kmodule.xml file. As an alternative to manually appending a kmodule.xml file to your project, you can use a KieModuleModel instance within your Java application to programmatically create a kmodule.xml file that defines the KIE base and a KIE session, and then add all resources in your project to the KIE virtual file system KieFileSystem .
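Before turning to that programmatic alternative, note that the listener classes referenced by type in the kmodule.xml example above (such as org.domain.FirstAgendaListener) are ordinary Java classes with a public no-argument constructor. The following is a minimal, assumption-labeled sketch of what such an agenda event listener might look like; it is an illustration only, not code taken from this guide's example project, and it assumes only the kie-api dependency:
package org.domain;

import org.kie.api.event.rule.AfterMatchFiredEvent;
import org.kie.api.event.rule.DefaultAgendaEventListener;

// Minimal agenda event listener that logs every rule that fires.
// Referenced from kmodule.xml as <agendaEventListener type="org.domain.FirstAgendaListener" />.
public class FirstAgendaListener extends DefaultAgendaEventListener {

    @Override
    public void afterMatchFired(AfterMatchFiredEvent event) {
        // Called after each rule match has been executed by the decision engine
        System.out.println("Rule fired: " + event.getMatch().getRule().getName());
    }
}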
Creating kmodule.xml programmatically and adding it to KieFileSystem import org.kie.api.KieServices; import org.kie.api.builder.model.KieModuleModel; import org.kie.api.builder.model.KieBaseModel; import org.kie.api.builder.model.KieSessionModel; import org.kie.api.builder.KieFileSystem; KieServices kieServices = KieServices.Factory.get(); KieModuleModel kieModuleModel = kieServices.newKieModuleModel(); KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel("KBase1") .setDefault(true) .setEqualsBehavior(EqualityBehaviorOption.EQUALITY) .setEventProcessingMode(EventProcessingOption.STREAM); KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel("KSession1_1") .setDefault(true) .setType(KieSessionModel.KieSessionType.STATEFUL) .setClockType(ClockTypeOption.get("realtime")); KieFileSystem kfs = kieServices.newKieFileSystem(); kfs.writeKModuleXML(kieModuleModel.toXML()); After you configure the kmodule.xml file either manually or programmatically in your project, retrieve the KIE bases and KIE sessions from the KIE container to verify the configurations: KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer(); KieBase kBase1 = kContainer.getKieBase("KBase1"); KieSession kieSession1 = kContainer.newKieSession("KSession1_1"), kieSession2 = kContainer.newKieSession("KSession1_2"); KieBase kBase2 = kContainer.getKieBase("KBase2"); StatelessKieSession kieSession3 = kContainer.newStatelessKieSession("KSession2_1"); If KieBase or KieSession have been configured as default="true" in the kmodule.xml file, as in the kmodule.xml example, you can retrieve them from the KIE container without passing any names: KieContainer kContainer = ... KieBase kBase1 = kContainer.getKieBase(); KieSession kieSession1 = kContainer.newKieSession(), kieSession2 = kContainer.newKieSession(); KieBase kBase2 = kContainer.getKieBase(); StatelessKieSession kieSession3 = kContainer.newStatelessKieSession(); To increase or decrease the maximum number of KIE modules or artifact versions that are cached in the decision engine, you can modify the values of the following system properties in your Red Hat Decision Manager distribution: kie.repository.project.cache.size : Maximum number of KIE modules that are cached in the decision engine. Default value: 100 kie.repository.project.versions.cache.size : Maximum number of versions of the same artifact that are cached in the decision engine. Default value: 10 For the full list of KIE repository configurations, download the Red Hat Process Automation Manager 7.13.5 Source Distribution ZIP file from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/drools-USDVERSION/drools-compiler/src/main/java/org/drools/compiler/kie/builder/impl/KieRepositoryImpl.java . For more information about the kmodule.xml file, download the Red Hat Process Automation Manager 7.13.5 Source Distribution ZIP file from the Red Hat Customer Portal (if not downloaded already) and see the kmodule.xsd XML schema located at USDFILE_HOME/rhpam-USDVERSION-sources/kie-api-parent-USDVERSION/kie-api/src/main/resources/org/kie/api/ . Note KieBase or KiePackage serialization is not supported in Red Hat Decision Manager 7.13. For more information, see Is serialization of kbase/package supported in BRMS 6/BPM Suite 6/RHDM 7? . 3.1.1. 
KIE module configuration properties The optional <configuration> element in the KIE module descriptor file ( kmodule.xml ) of your project defines property key and value pairs that you can use to further customize your kmodule.xml file. Example configuration property in a kmodule.xml file <kmodule> ... <configuration> <property key="drools.dialect.default" value="java"/> ... </configuration> ... </kmodule> The following are the <configuration> property keys and values supported in the KIE module descriptor file ( kmodule.xml ) for your project: drools.dialect.default Sets the default Drools dialect. Supported values: java , mvel <property key="drools.dialect.default" value="java"/> drools.accumulate.function.USDFUNCTION Links a class that implements an accumulate function to a specified function name, which allows you to add custom accumulate functions into the decision engine. <property key="drools.accumulate.function.hyperMax" value="org.drools.custom.HyperMaxAccumulate"/> drools.evaluator.USDEVALUATION Links a class that implements an evaluator definition to a specified evaluator name so that you can add custom evaluators into the decision engine. An evaluator is similar to a custom operator. <property key="drools.evaluator.soundslike" value="org.drools.core.base.evaluators.SoundslikeEvaluatorsDefinition"/> drools.dump.dir Sets a path to the Red Hat Decision Manager dump/log directory. <property key="drools.dump.dir" value="USDDIR_PATH/dump/log"/> drools.defaultPackageName Sets a default package for the business assets in your project. <property key="drools.defaultPackageName" value="org.domain.pkg1"/> drools.parser.processStringEscapes Sets the String escape function. If this property is set to false , the \n character will not be interpreted as the newline character. Supported values: true (default), false <property key="drools.parser.processStringEscapes" value="true"/> drools.kbuilder.severity.USDDUPLICATE Sets a severity for instances of duplicate rules, processes, or functions reported when a KIE base is built. For example, if you set duplicateRule to ERROR , then an error is generated for any duplicated rules detected when the KIE base is built. Supported key suffixes: duplicateRule , duplicateProcess , duplicateFunction Supported values: INFO , WARNING , ERROR <property key="drools.kbuilder.severity.duplicateRule" value="ERROR"/> drools.propertySpecific Sets the property reactivity of the decision engine. Supported values: DISABLED , ALLOWED , ALWAYS <property key="drools.propertySpecific" value="ALLOWED"/> drools.lang.level Sets the DRL language level. Supported values: DRL5 , DRL6 , DRL6_STRICT (default) <property key="drools.lang.level" value="DRL_STRICT"/> 3.1.2. KIE base attributes supported in KIE modules A KIE base is a repository that you define in the KIE module descriptor file ( kmodule.xml ) for your project and contains all rules and other business assets in Red Hat Decision Manager. When you define KIE bases in the kmodule.xml file, you can specify certain attributes and values to further customize your KIE base configuration. Example KIE base configuration in a kmodule.xml file <kmodule> ... <kbase name="KBase2" default="false" eventProcessingMode="stream" equalsBehavior="equality" declarativeAgenda="enabled" packages="org.domain.pkg2, org.domain.pkg3" includes="KBase1" sequential="false"> ... </kbase> ... </kmodule> The following are the kbase attributes and values supported in the KIE module descriptor file ( kmodule.xml ) for your project: Table 3.1. 
KIE base attributes supported in KIE modules Attribute Supported values Description name Any name Defines the name that retrieves KieBase from KieContainer . This attribute is mandatory . includes Comma-separated list of other KIE base objects in the KIE module Defines other KIE base objects and artifacts to be included in this KIE base. A KIE base can be contained in multiple KIE modules if you declare it as a dependency in the pom.xml file of the modules. packages Comma-separated list of packages to include in the KIE base Default: all Defines packages of artifacts (such as rules and processes) to be included in this KIE base. By default, all artifacts in the ~/resources directory are included into a KIE base. This attribute enables you to limit the number of compiled artifacts. Only the packages belonging to the list specified in this attribute are compiled. default true , false Default: false Determines whether a KIE base is the default KIE base for a module so that it can be created from the KIE container without passing any name. Each module can have only one default KIE base. equalsBehavior identity , equality Default: identity Defines the behavior of Red Hat Decision Manager when a new fact is inserted into the working memory. If set to identity , a new FactHandle is always created unless the same object is already present in the working memory. If set to equality , a new FactHandle is created only if the newly inserted object is not equal to an existing fact, according to the equals() method of the inserted fact. Use equality mode when you want objects to be assessed based on feature equality instead of explicit identity. eventProcessingMode cloud , stream Default: cloud Determines how events are processed in the KIE base. If this property is set to cloud , the KIE base treats events as normal facts. If this property is set to stream , temporal reasoning on events is allowed. declarativeAgenda disabled , enabled Default: disabled Determines whether the declarative agenda is enabled or not. sequential true , false Default: false Determines whether sequential mode is enabled or not. In sequential mode, the decision engine evaluates rules one time in the order that they are listed in the decision engine agenda without regard to changes in the working memory. Enable this property if you use stateless KIE sessions and you do not want the execution of rules to influence subsequent rules in the agenda. 3.1.3. KIE session attributes supported in KIE modules A KIE session stores and executes runtime data and is created from a KIE base or directly from a KIE container if you have defined the KIE session in the KIE module descriptor file ( kmodule.xml ) for your project. When you define KIE bases and KIE sessions in the kmodule.xml file, you can specify certain attributes and values to further customize your KIE session configuration. Example KIE session configuration in a kmodule.xml file <kmodule> ... <kbase> ... <ksession name="KSession2_1" type="stateless" default="true" clockType="realtime"> ... </kbase> ... </kmodule> The following are the ksession attributes and values supported in the KIE module descriptor file ( kmodule.xml ) for your project: Table 3.2. KIE session attributes supported in KIE modules Attribute Supported values Description name Any name Defines the name that retrieves KieSession from KieContainer . This attribute is mandatory . 
type stateful , stateless Default: stateful Determines whether data is retained ( stateful ) or discarded ( stateless ) between invocations of the KIE session. A session set to stateful enables you to iteratively work with the working memory, while a session set to stateless is typically used for one-off execution of assets. A stateful session stores a knowledge state that is changed every time a new fact is added, updated, or deleted, and every time a rule is executed. An execution in a stateless session has no information about previous actions, such as previous rule executions. default true , false Default: false Determines whether a KIE session is the default session for a module so that it can be created from the KIE container without passing any name. Each module can have only one default KIE session. clockType realtime , pseudo Default: realtime Determines whether event time stamps are assigned by the system clock or by a pseudo clock controlled by the application. This clock is especially useful for unit testing on temporal rules. beliefSystem simple , jtms , defeasible Default: simple Defines the type of belief system used by the KIE session. A belief system deduces the truth from knowledge (facts). For example, if a new fact is inserted based on another fact which is later removed from the decision engine, the system can determine that the newly inserted fact should be removed as well. 3.2. Packaging and deploying a Red Hat Decision Manager project in Maven If you want to deploy a Maven project outside of Business Central to a configured KIE Server, you can edit the project pom.xml file to package your project as a KJAR file and add a kmodule.xml file with the KIE base and KIE session configurations for the assets in your project. Prerequisites You have a Maven project that contains Red Hat Decision Manager business assets. KIE Server is installed and kie-server user access is configured. For installation options, see Planning a Red Hat Decision Manager installation . Procedure In the pom.xml file of your Maven project, set the packaging type to kjar and add the kie-maven-plugin build component: <packaging>kjar</packaging> ... <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{rhpam.version}</version> <extensions>true</extensions> </plugin> </plugins> </build> The kjar packaging type activates the kie-maven-plugin component to validate and pre-compile artifact resources. The <version> is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). These settings are required to properly package the Maven project for deployment. Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? .
Optional: If your project contains Decision Model and Notation (DMN) assets, also add the following dependency in the pom.xml file to enable DMN executable models. DMN executable models enable DMN decision table logic in DMN projects to be evaluated more efficiently. <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <scope>provided</scope> <version>USD{rhpam.version}</version> </dependency> In the ~/resources directory of your Maven project, create a META-INF/kmodule.xml metadata file with at least the following content: <?xml version="1.0" encoding="UTF-8"?> <kmodule xmlns="http://www.drools.org/xsd/kmodule"> </kmodule> This kmodule.xml file is a KIE module descriptor that is required for all Red Hat Decision Manager projects. You can use the KIE module to define one or more KIE bases and one or more KIE sessions from each KIE base. For more information about kmodule.xml configuration, see Section 3.1, "Configuring a KIE module descriptor file" . In the relevant resource in your Maven project, configure a .java class to create a KIE container and a KIE session to load the KIE base: import org.kie.api.KieServices; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; public void testApp() { // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.getKieClasspathContainer(); KieSession kSession = kContainer.newKieSession(); } In this example, the KIE container reads the files to be built from the class path for a testApp project. The KieServices API enables you to access all KIE building and runtime configurations. You can also create the KIE container by passing the project ReleaseId to the KieServices API. The ReleaseId is generated from the GroupId , ArtifactId , and Version (GAV) values in the project pom.xml file. import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.drools.compiler.kproject.ReleaseIdImpl; public void testApp() { // Identify the project in the local repository: ReleaseId rid = new ReleaseIdImpl("com.sample", "my-app", "1.0.0"); // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession(); } In a command terminal, navigate to your Maven project directory and run the following command to build the project: For DMN executable models, run the following command: If the build fails, address any problems described in the command line error messages and try again to validate the files until the build is successful. Note If the rule assets in your Maven project are not built from an executable rule model by default, verify that the following dependency is in the pom.xml file of your project and rebuild the project: <dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency> This dependency is required for rule assets in Red Hat Decision Manager to be built from executable rule models by default. This dependency is included as part of the Red Hat Decision Manager core packaging, but depending on your Red Hat Decision Manager upgrade history, you may need to manually add this dependency to enable the executable rule model behavior. For more information about executable rule models, see Section 3.4, "Executable rule models" . 
After you successfully build and test the project locally, deploy the project to the remote Maven repository: 3.3. Packaging and deploying a Red Hat Decision Manager project in a Java application If you want to deploy a project from within your own Java application to a configured KIE Server, you can use a KieModuleModel instance to programmatically create a kmodule.xml file that defines the KIE base and a KIE session, and then add all resources in your project to the KIE virtual file system KieFileSystem . Prerequisites You have a Java application that contains Red Hat Decision Manager business assets. KIE Server is installed and kie-server user access is configured. For installation options, see Planning a Red Hat Decision Manager installation . Procedure Optional: If your project contains Decision Model and Notation (DMN) assets, add the following dependency to the relevant class path of your Java project to enable DMN executable models. DMN executable models enable DMN decision table logic in DMN projects to be evaluated more efficiently. <dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <scope>provided</scope> <version>USD{rhpam.version}</version> </dependency> The <version> is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Use the KieServices API to create a KieModuleModel instance with the desired KIE base and KIE session. The KieServices API enables you to access all KIE building and runtime configurations. The KieModuleModel instance generates the kmodule.xml file for your project. For more information about kmodule.xml configuration, see Section 3.1, "Configuring a KIE module descriptor file" . Convert your KieModuleModel instance into XML and add the XML to KieFileSystem . 
Creating kmodule.xml programmatically and adding it to KieFileSystem import org.kie.api.KieServices; import org.kie.api.builder.model.KieModuleModel; import org.kie.api.builder.model.KieBaseModel; import org.kie.api.builder.model.KieSessionModel; import org.kie.api.builder.KieFileSystem; KieServices kieServices = KieServices.Factory.get(); KieModuleModel kieModuleModel = kieServices.newKieModuleModel(); KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel("KBase1") .setDefault(true) .setEqualsBehavior(EqualityBehaviorOption.EQUALITY) .setEventProcessingMode(EventProcessingOption.STREAM); KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel("KSession1") .setDefault(true) .setType(KieSessionModel.KieSessionType.STATEFUL) .setClockType(ClockTypeOption.get("realtime")); KieFileSystem kfs = kieServices.newKieFileSystem(); kfs.writeKModuleXML(kieModuleModel.toXML()); Add any remaining Red Hat Decision Manager assets that you use in your project to your KieFileSystem instance. The artifacts must be in a Maven project file structure. import org.kie.api.builder.KieFileSystem; KieFileSystem kfs = ... kfs.write("src/main/resources/KBase1/ruleSet1.drl", stringContainingAValidDRL) .write("src/main/resources/dtable.xls", kieServices.getResources().newInputStreamResource(dtableFileStream)); In this example, the project assets are added both as a String variable and as a Resource instance. You can create the Resource instance using the KieResources factory, also provided by the KieServices instance. The KieResources class provides factory methods to convert InputStream , URL , and File objects, or a String representing a path of your file system to a Resource instance that the KieFileSystem can manage. You can also explicitly assign a ResourceType property to a Resource object when you add project artifacts to KieFileSystem : import org.kie.api.builder.KieFileSystem; KieFileSystem kfs = ... kfs.write("src/main/resources/myDrl.txt", kieServices.getResources().newInputStreamResource(drlStream) .setResourceType(ResourceType.DRL)); Use KieBuilder with the buildAll() method to build the content of KieFileSystem , and create a KIE container to deploy it: import org.kie.api.KieServices; import org.kie.api.KieServices.Factory; import org.kie.api.builder.KieFileSystem; import org.kie.api.builder.KieBuilder; import org.kie.api.runtime.KieContainer; KieServices kieServices = KieServices.Factory.get(); KieFileSystem kfs = ... KieBuilder kieBuilder = ks.newKieBuilder( kfs ); kieBuilder.buildAll() assertEquals(0, kieBuilder.getResults().getMessages(Message.Level.ERROR).size()); KieContainer kieContainer = kieServices .newKieContainer(kieServices.getRepository().getDefaultReleaseId()); A build ERROR indicates that the project compilation failed, no KieModule was produced, and nothing was added to the KieRepository singleton. A WARNING or an INFO result indicates that the compilation of the project was successful, with information about the build process. Note To build the rule assets in your Java application project from an executable rule model, verify that the following dependency is in the pom.xml file of your project: <dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency> This dependency is required for rule assets in Red Hat Decision Manager to be built from executable rule models. 
This dependency is included as part of the Red Hat Decision Manager core packaging, but depending on your Red Hat Decision Manager upgrade history, you may need to manually add this dependency to enable the executable rule model behavior. After you verify the dependency, use the following modified buildAll() option to enable the executable model: kieBuilder.buildAll(ExecutableModelProject.class) For more information about executable rule models, see Section 3.4, "Executable rule models" . 3.4. Executable rule models Rule assets in Red Hat Decision Manager are built from executable rule models by default with the standard kie-maven-plugin plugin. Executable rule models are embedded models that provide a Java-based representation of a rule set for execution at build time. The executable model is a more efficient alternative to the standard asset packaging in versions of Red Hat Decision Manager and enables KIE containers and KIE bases to be created more quickly, especially when you have large lists of DRL (Drools Rule Language) files and other Red Hat Decision Manager assets. If you do not use the kie-maven-plugin plugin or if the required drools-model-compiler dependency is missing from your project, then rule assets are built without executable models. Therefore, to generate the executable model during build time, ensure that the kie-maven-plugin plugin and drools-model-compiler dependency are added in your project pom.xml file. Executable rule models provide the following specific advantages for your projects: Compile time: Traditionally, a packaged Red Hat Decision Manager project (KJAR) contains a list of DRL files and other Red Hat Decision Manager artifacts that define the rule base together with some pre-generated classes implementing the constraints and the consequences. Those DRL files must be parsed and compiled when the KJAR is downloaded from the Maven repository and installed in a KIE container. This process can be slow, especially for large rule sets. With an executable model, you can package within the project KJAR the Java classes that implement the executable model of the project rule base and re-create the KIE container and its KIE bases out of it in a much faster way. In Maven projects, you use the kie-maven-plugin plugin to automatically generate the executable model sources from the DRL files during the compilation process. Run time: In an executable model, all constraints are defined as Java lambda expressions. The same lambda expressions are also used for constraints evaluation, so you no longer need to use mvel expressions for interpreted evaluation nor the just-in-time (JIT) process to transform the mvel -based constraints into bytecode. This creates a quicker and more efficient run time. Development time: An executable model enables you to develop and experiment with new features of the decision engine without needing to encode elements directly in the DRL format or modify the DRL parser to support them. Note For query definitions in executable rule models, you can use up to 10 arguments only. For variables within rule consequences in executable rule models, you can use up to 24 bound variables only (including the built-in drools variable). For example, the following rule consequence uses more than 24 bound variables and creates a compilation error: 3.4.1. 
Modifying or disabling executable rule models in a Red Hat Decision Manager project Rule assets in Red Hat Decision Manager are built from executable rule models by default with the standard kie-maven-plugin plugin. The executable model is a more efficient alternative to the standard asset packaging in versions of Red Hat Decision Manager. However, if needed, you can modify or disable executable rule models to build a Red Hat Decision Manager project as a DRL-based KJAR instead of the default model-based KJAR. Procedure Build your Red Hat Decision Manager project in the usual way, but provide an alternate build option, depending on the type of project: For a Maven project, navigate to your Maven project directory in a command terminal and run the following command: Replace <VALUE> with one of three values: YES_WITHDRL : (Default) Generates the executable model corresponding to the DRL files in the original project and also adds the DRL files to the generated KJAR for documentation purposes (the KIE base is built from the executable model regardless). YES : Generates the executable model corresponding to the DRL files in the original project and excludes the DRL files from the generated KJAR. NO : Does not generate the executable model. Example build command to disable the default executable model behavior: For a Java application configured programmatically, the executable model is disabled by default. Add rule assets to the KIE virtual file system KieFileSystem and use KieBuilder with one of the following buildAll() methods: buildAll() (Default) or buildAll(DrlProject.class) : Does not generate the executable model. buildAll(ExecutableModelProject.class) : Generates the executable model corresponding to the DRL files in the original project. Example code to enable executable model behavior: import org.kie.api.KieServices; import org.kie.api.builder.KieFileSystem; import org.kie.api.builder.KieBuilder; KieServices ks = KieServices.Factory.get(); KieFileSystem kfs = ks.newKieFileSystem() kfs.write("src/main/resources/KBase1/ruleSet1.drl", stringContainingAValidDRL) .write("src/main/resources/dtable.xls", kieServices.getResources().newInputStreamResource(dtableFileStream)); KieBuilder kieBuilder = ks.newKieBuilder( kfs ); // Enable executable model kieBuilder.buildAll(ExecutableModelProject.class) assertEquals(0, kieBuilder.getResults().getMessages(Message.Level.ERROR).size()); 3.5. Using a KIE scanner to monitor and update KIE containers The KIE scanner in Red Hat Decision Manager monitors your Maven repository for new SNAPSHOT versions of your Red Hat Decision Manager project and then deploys the latest version of the project to a specified KIE container. You can use a KIE scanner in a development environment to maintain your Red Hat Decision Manager project deployments more efficiently as new versions become available. Important For production environments, do not use a KIE scanner with SNAPSHOT project versions to avoid accidental or unexpected project updates. The KIE scanner is intended for development environments that use SNAPSHOT project versions. Prerequisites The kie-ci.jar file is available on the class path of your Red Hat Decision Manager project. 
Procedure In the relevant .java class in your project, register and start the KIE scanner as shown in the following example code: Registering and starting a KIE scanner for a KIE container import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.builder.KieScanner; ... KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices .newReleaseId("com.sample", "my-app", "1.0-SNAPSHOT"); KieContainer kContainer = kieServices.newKieContainer(releaseId); KieScanner kScanner = kieServices.newKieScanner(kContainer); // Start KIE scanner for polling the Maven repository every 10 seconds (10000 ms) kScanner.start(10000L); In this example, the KIE scanner is configured to run with a fixed time interval. The minimum KIE scanner polling interval is 1 millisecond (ms) and the maximum polling interval is the maximum value of the data type long . A polling interval of 0 or less results in a java.lang.IllegalArgumentException: pollingInterval must be positive error. You can also configure the KIE scanner to run on demand by invoking the scanNow() method. The project group ID, artifact ID, and version (GAV) settings in the example are defined as com.sample:my-app:1.0-SNAPSHOT . The project version must contain the -SNAPSHOT suffix to enable the KIE scanner to retrieve the latest build of the specified artifact version. If you change the snapshot project version number, such as increasing to 1.0.1-SNAPSHOT , then you must also update the version in the GAV definition in your KIE scanner configuration. The KIE scanner does not retrieve updates for projects with static versions, such as com.sample:my-app:1.0 . In the settings.xml file of your Maven repository, set the updatePolicy configuration to always to enable the KIE scanner to function properly: <profile> <id>guvnor-m2-repo</id> <repositories> <repository> <id>guvnor-m2-repo</id> <name>BA Repository</name> <url>http://localhost:8080/business-central/maven2/</url> <layout>default</layout> <releases> <enabled>true</enabled> <updatePolicy>always</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>always</updatePolicy> </snapshots> </repository> </repositories> </profile> After the KIE scanner starts polling, if the KIE scanner detects an updated version of the SNAPSHOT project in the specified KIE container, the KIE scanner automatically downloads the new project version and triggers an incremental build of the new project. From that moment, all of the new KieBase and KieSession objects that were created from the KIE container use the new project version. For information about starting or stopping a KIE scanner using KIE Server APIs, see Interacting with Red Hat Decision Manager using KIE APIs . 3.6. Starting a service in KIE Server If you have deployed Red Hat Decision Manager assets from a Maven or Java project outside of Business Central, you use a KIE Server REST API call to start the KIE container (deployment unit) and the services in it. You can use the KIE Server REST API to start services regardless of your deployment type, including deployment from Business Central, but projects deployed from Business Central either are started automatically or can be started within the Business Central interface. Prerequisites KIE Server is installed and kie-server user access is configured. For installation options, see Planning a Red Hat Decision Manager installation . 
Procedure In your command terminal, run the following API request to load a service into a KIE container in KIE Server and to start it: Replace the following values: <username> , <password>: The user name and password of a user with the kie-server role. <containerID>: The identifier for the KIE container (deployment unit). You can use any random identifier but it must be the same in both places in the command (the URL and the data). <groupID> , <artifactID> , <version>: The project GAV values. <serverhost>: The host name for KIE Server, or localhost if you are running the command on the same host as KIE Server. <serverport>: The port number for KIE Server. Example: 3.7. Stopping and removing a service in KIE Server If you have started Red Hat Decision Manager services from a Maven or Java project outside of Business Central, you use a KIE Server REST API call to stop and remove the KIE container (deployment unit) containing the services. You can use the KIE Server REST API to stop services regardless of your deployment type, including deployment from Business Central, but services from Business Central can also be stopped within the Business Central interface. Prerequisites KIE Server is installed and kie-server user access is configured. For installation options, see Planning a Red Hat Decision Manager installation . Procedure In your command terminal, run the following API request to stop and remove a KIE container with services on KIE Server: Replace the following values: <username> , <password>: The user name and password of a user with the kie-server role. <containerID>: The identifier for the KIE container (deployment unit). You can use any random identifier but it must be the same in both places in the command (the URL and the data). <serverhost>: The host name for KIE Server, or localhost if you are running the command on the same host as KIE Server. <serverport>: The port number for KIE Server. Example:
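As an alternative to the curl requests referenced in these procedures, the same container lifecycle operations can be performed from Java. The following is a hedged sketch using the KIE Server Java client API (kie-server-client); the server URL, credentials, container ID, and GAV values mirror the curl examples for this chapter and are assumptions for illustration, not values you must use.
import org.kie.server.api.model.KieContainerResource;
import org.kie.server.api.model.ReleaseId;
import org.kie.server.api.model.ServiceResponse;
import org.kie.server.client.KieServicesClient;
import org.kie.server.client.KieServicesConfiguration;
import org.kie.server.client.KieServicesFactory;

public class ContainerLifecycleExample {

    public static void main(String[] args) {
        // Connection details mirror the curl examples in this chapter (assumed values)
        KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(
                "http://localhost:39043/kie-server/services/rest/server",
                "rhpamAdmin", "password@1");
        KieServicesClient client = KieServicesFactory.newKieServicesClient(config);

        // Start a service: create the KIE container from the project GAV
        ReleaseId releaseId = new ReleaseId("org.kie.server.testing", "container-crud-tests1", "2.1.0.GA");
        ServiceResponse<KieContainerResource> created =
                client.createContainer("kie1", new KieContainerResource("kie1", releaseId));
        System.out.println("Create container: " + created.getType());

        // Stop and remove the service: dispose of the KIE container
        ServiceResponse<Void> disposed = client.disposeContainer("kie1");
        System.out.println("Dispose container: " + disposed.getType());
    }
}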
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <kmodule xmlns=\"http://www.drools.org/xsd/kmodule\"> </kmodule>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <kmodule xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.drools.org/xsd/kmodule\"> <configuration> <property key=\"drools.evaluator.supersetOf\" value=\"org.mycompany.SupersetOfEvaluatorDefinition\"/> </configuration> <kbase name=\"KBase1\" default=\"true\" eventProcessingMode=\"cloud\" equalsBehavior=\"equality\" declarativeAgenda=\"enabled\" packages=\"org.domain.pkg1\"> <ksession name=\"KSession1_1\" type=\"stateful\" default=\"true\" /> <ksession name=\"KSession1_2\" type=\"stateful\" default=\"true\" beliefSystem=\"jtms\" /> </kbase> <kbase name=\"KBase2\" default=\"false\" eventProcessingMode=\"stream\" equalsBehavior=\"equality\" declarativeAgenda=\"enabled\" packages=\"org.domain.pkg2, org.domain.pkg3\" includes=\"KBase1\"> <ksession name=\"KSession2_1\" type=\"stateless\" default=\"true\" clockType=\"realtime\"> <fileLogger file=\"debugInfo\" threaded=\"true\" interval=\"10\" /> <workItemHandlers> <workItemHandler name=\"name\" type=\"new org.domain.WorkItemHandler()\" /> </workItemHandlers> <listeners> <ruleRuntimeEventListener type=\"org.domain.RuleRuntimeListener\" /> <agendaEventListener type=\"org.domain.FirstAgendaListener\" /> <agendaEventListener type=\"org.domain.SecondAgendaListener\" /> <processEventListener type=\"org.domain.ProcessListener\" /> </listeners> </ksession> </kbase> </kmodule>", "import org.kie.api.KieServices; import org.kie.api.builder.model.KieModuleModel; import org.kie.api.builder.model.KieBaseModel; import org.kie.api.builder.model.KieSessionModel; import org.kie.api.builder.KieFileSystem; KieServices kieServices = KieServices.Factory.get(); KieModuleModel kieModuleModel = kieServices.newKieModuleModel(); KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel(\"KBase1\") .setDefault(true) .setEqualsBehavior(EqualityBehaviorOption.EQUALITY) .setEventProcessingMode(EventProcessingOption.STREAM); KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel(\"KSession1_1\") .setDefault(true) .setType(KieSessionModel.KieSessionType.STATEFUL) .setClockType(ClockTypeOption.get(\"realtime\")); KieFileSystem kfs = kieServices.newKieFileSystem(); kfs.writeKModuleXML(kieModuleModel.toXML());", "KieServices kieServices = KieServices.Factory.get(); KieContainer kContainer = kieServices.getKieClasspathContainer(); KieBase kBase1 = kContainer.getKieBase(\"KBase1\"); KieSession kieSession1 = kContainer.newKieSession(\"KSession1_1\"), kieSession2 = kContainer.newKieSession(\"KSession1_2\"); KieBase kBase2 = kContainer.getKieBase(\"KBase2\"); StatelessKieSession kieSession3 = kContainer.newStatelessKieSession(\"KSession2_1\");", "KieContainer kContainer = KieBase kBase1 = kContainer.getKieBase(); KieSession kieSession1 = kContainer.newKieSession(), kieSession2 = kContainer.newKieSession(); KieBase kBase2 = kContainer.getKieBase(); StatelessKieSession kieSession3 = kContainer.newStatelessKieSession();", "<kmodule> <configuration> <property key=\"drools.dialect.default\" value=\"java\"/> </configuration> </kmodule>", "<property key=\"drools.dialect.default\" value=\"java\"/>", "<property key=\"drools.accumulate.function.hyperMax\" value=\"org.drools.custom.HyperMaxAccumulate\"/>", "<property key=\"drools.evaluator.soundslike\" value=\"org.drools.core.base.evaluators.SoundslikeEvaluatorsDefinition\"/>", "<property key=\"drools.dump.dir\" value=\"USDDIR_PATH/dump/log\"/>", 
"<property key=\"drools.defaultPackageName\" value=\"org.domain.pkg1\"/>", "<property key=\"drools.parser.processStringEscapes\" value=\"true\"/>", "<property key=\"drools.kbuilder.severity.duplicateRule\" value=\"ERROR\"/>", "<property key=\"drools.propertySpecific\" value=\"ALLOWED\"/>", "<property key=\"drools.lang.level\" value=\"DRL_STRICT\"/>", "<kmodule> <kbase name=\"KBase2\" default=\"false\" eventProcessingMode=\"stream\" equalsBehavior=\"equality\" declarativeAgenda=\"enabled\" packages=\"org.domain.pkg2, org.domain.pkg3\" includes=\"KBase1\" sequential=\"false\"> </kbase> </kmodule>", "<kmodule> <kbase> <ksession name=\"KSession2_1\" type=\"stateless\" default=\"true\" clockType=\"realtime\"> </kbase> </kmodule>", "<packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{rhpam.version}</version> <extensions>true</extensions> </plugin> </plugins> </build>", "<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>", "<dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <scope>provided</scope> <version>USD{rhpam.version}</version> </dependency>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <kmodule xmlns=\"http://www.drools.org/xsd/kmodule\"> </kmodule>", "import org.kie.api.KieServices; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; public void testApp() { // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.getKieClasspathContainer(); KieSession kSession = kContainer.newKieSession(); }", "import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.drools.compiler.kproject.ReleaseIdImpl; public void testApp() { // Identify the project in the local repository: ReleaseId rid = new ReleaseIdImpl(\"com.sample\", \"my-app\", \"1.0.0\"); // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession(); }", "mvn clean install", "mvn clean install -DgenerateDMNModel=YES", "<dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency>", "mvn deploy", "<dependency> <groupId>org.kie</groupId> <artifactId>kie-dmn-core</artifactId> <scope>provided</scope> <version>USD{rhpam.version}</version> </dependency>", "<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>", "import org.kie.api.KieServices; import org.kie.api.builder.model.KieModuleModel; import org.kie.api.builder.model.KieBaseModel; import org.kie.api.builder.model.KieSessionModel; import org.kie.api.builder.KieFileSystem; KieServices kieServices = KieServices.Factory.get(); KieModuleModel kieModuleModel = kieServices.newKieModuleModel(); KieBaseModel kieBaseModel1 = kieModuleModel.newKieBaseModel(\"KBase1\") .setDefault(true) .setEqualsBehavior(EqualityBehaviorOption.EQUALITY) .setEventProcessingMode(EventProcessingOption.STREAM); KieSessionModel ksessionModel1 = kieBaseModel1.newKieSessionModel(\"KSession1\") .setDefault(true) .setType(KieSessionModel.KieSessionType.STATEFUL) .setClockType(ClockTypeOption.get(\"realtime\")); 
KieFileSystem kfs = kieServices.newKieFileSystem(); kfs.writeKModuleXML(kieModuleModel.toXML());", "import org.kie.api.builder.KieFileSystem; KieFileSystem kfs = kfs.write(\"src/main/resources/KBase1/ruleSet1.drl\", stringContainingAValidDRL) .write(\"src/main/resources/dtable.xls\", kieServices.getResources().newInputStreamResource(dtableFileStream));", "import org.kie.api.builder.KieFileSystem; KieFileSystem kfs = kfs.write(\"src/main/resources/myDrl.txt\", kieServices.getResources().newInputStreamResource(drlStream) .setResourceType(ResourceType.DRL));", "import org.kie.api.KieServices; import org.kie.api.KieServices.Factory; import org.kie.api.builder.KieFileSystem; import org.kie.api.builder.KieBuilder; import org.kie.api.runtime.KieContainer; KieServices kieServices = KieServices.Factory.get(); KieFileSystem kfs = KieBuilder kieBuilder = ks.newKieBuilder( kfs ); kieBuilder.buildAll() assertEquals(0, kieBuilder.getResults().getMessages(Message.Level.ERROR).size()); KieContainer kieContainer = kieServices .newKieContainer(kieServices.getRepository().getDefaultReleaseId());", "<dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency>", "kieBuilder.buildAll(ExecutableModelProject.class)", "then USDinput.setNo25Count(functions.sumOf(new Object[]{USDno1Count_1, USDno2Count_1, USDno3Count_1, ..., USDno25Count_1}).intValue()); USDinput.getFirings().add(\"fired\"); update(USDinput);", "mvn clean install -DgenerateModel=<VALUE>", "mvn clean install -DgenerateModel=NO", "import org.kie.api.KieServices; import org.kie.api.builder.KieFileSystem; import org.kie.api.builder.KieBuilder; KieServices ks = KieServices.Factory.get(); KieFileSystem kfs = ks.newKieFileSystem() kfs.write(\"src/main/resources/KBase1/ruleSet1.drl\", stringContainingAValidDRL) .write(\"src/main/resources/dtable.xls\", kieServices.getResources().newInputStreamResource(dtableFileStream)); KieBuilder kieBuilder = ks.newKieBuilder( kfs ); // Enable executable model kieBuilder.buildAll(ExecutableModelProject.class) assertEquals(0, kieBuilder.getResults().getMessages(Message.Level.ERROR).size());", "import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.builder.KieScanner; KieServices kieServices = KieServices.Factory.get(); ReleaseId releaseId = kieServices .newReleaseId(\"com.sample\", \"my-app\", \"1.0-SNAPSHOT\"); KieContainer kContainer = kieServices.newKieContainer(releaseId); KieScanner kScanner = kieServices.newKieScanner(kContainer); // Start KIE scanner for polling the Maven repository every 10 seconds (10000 ms) kScanner.start(10000L);", "<profile> <id>guvnor-m2-repo</id> <repositories> <repository> <id>guvnor-m2-repo</id> <name>BA Repository</name> <url>http://localhost:8080/business-central/maven2/</url> <layout>default</layout> <releases> <enabled>true</enabled> <updatePolicy>always</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>always</updatePolicy> </snapshots> </repository> </repositories> </profile>", "curl --user \"<username>:<password>\" -H \"Content-Type: application/json\" -X PUT -d '{\"container-id\" : \"<containerID>\",\"release-id\" : {\"group-id\" : \"<groupID>\",\"artifact-id\" : \"<artifactID>\",\"version\" : \"<version>\"}}' http://<serverhost>:<serverport>/kie-server/services/rest/server/containers/<containerID>", "curl --user \"rhpamAdmin:password@1\" -H \"Content-Type: application/json\" -X PUT -d 
'{\"container-id\" : \"kie1\",\"release-id\" : {\"group-id\" : \"org.kie.server.testing\",\"artifact-id\" : \"container-crud-tests1\",\"version\" : \"2.1.0.GA\"}}' http://localhost:39043/kie-server/services/rest/server/containers/kie1", "curl --user \"<username>:<password>\" -X DELETE http://<serverhost>:<serverport>/kie-server/services/rest/server/containers/<containerID>", "curl --user \"rhpamAdmin:password@1\" -X DELETE http://localhost:39043/kie-server/services/rest/server/containers/kie1" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/project-deployment-other-con_packaging-deploying
Appendix A. Component Versions
Appendix A. Component Versions This appendix provides a list of key components and their versions in the Red Hat Enterprise Linux 6.10 release. Table A.1. Component Versions (Component: Version)
kernel: 2.6.32-754
QLogic qla2xxx driver: 8.07.00.26.06.8-k
QLogic ql2xxx firmware: ql2100-firmware-1.19.38-3.1, ql2200-firmware-2.02.08-3.1, ql23xx-firmware-3.03.27-3.1, ql2400-firmware-7.03.00-1, ql2500-firmware-7.03.00-1
Emulex lpfc driver: 0:11.0.1.6
iSCSI initiator utils (iscsi-initiator-utils): 6.2.0.873-27
DM-Multipath (device-mapper-multipath): 0.4.9-106
LVM (lvm2): 2.02.143-12
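To compare a running host against this list, the versions can be read directly from the system; a minimal sketch using standard RHEL tooling (driver version queries assume the modules are available to modinfo):
# Kernel version
uname -r
# Userspace component versions
rpm -q iscsi-initiator-utils device-mapper-multipath lvm2
# In-tree driver versions
modinfo -F version qla2xxx lpfc
# Installed QLogic firmware packages
rpm -qa 'ql2*-firmware*'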
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_release_notes/appe-red_hat_enterprise_linux-6.10_release_notes-component_versions
Chapter 8. Uninstalling a cluster on IBM Power Virtual Server
Chapter 8. Uninstalling a cluster on IBM Power Virtual Server You can remove a cluster that you deployed to IBM Power(R) Virtual Server. 8.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information see "Prerequisites" in the IBM Cloud(R) CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In which case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation . Export the API key that was created as part of the installation process. USD export IBMCLOUD_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. You might have to run the openshift-install destroy command up to three times to ensure a proper cleanup. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
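When several leftover volumes are blocking the resource group removal, the list and delete steps above can be combined into one loop. This is only a sketch: the --output JSON flag, the .id field name, and the use of jq are assumptions to verify against your installed VPC infrastructure service plugin.
# Log in first, for example: ibmcloud login --apikey "$IBMCLOUD_API_KEY" -r <region>
for vol in $(ibmcloud is volumes --resource-group-name <infrastructure_id> --output JSON | jq -r '.[].id'); do
  ibmcloud is volume-delete --force "$vol"
done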
[ "ibmcloud is volumes --resource-group-name <infrastructure_id>", "ibmcloud is volume-delete --force <volume_id>", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_ibm_power_virtual_server/uninstalling-cluster-ibm-power-vs
Chapter 3. Installing a user-provisioned bare metal cluster with network customizations
Chapter 3. Installing a user-provisioned bare metal cluster with network customizations In OpenShift Container Platform 4.15, you can install a cluster on bare metal infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. When you customize OpenShift Container Platform networking, you must set most of the network configuration parameters during installation. You can modify only kubeProxy network configuration parameters in a running cluster. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. Additional resources See Installing a user-provisioned bare metal cluster on a restricted network for more information about performing a restricted network installation on bare metal infrastructure that you provision. 3.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 3.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 3.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Note As an exception, you can run zero compute machines in a bare metal cluster that consists of three control plane machines only. 
This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. Running one compute machine is not supported. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 3.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.2. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 3.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 
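One common way to satisfy this requirement once the cluster API is reachable is to review and approve CSRs with the oc client, shown here as a sketch with <csr_name> as a placeholder; see the Additional resources below for the full procedure.
# List certificate signing requests and their current state
oc get csr
# Approve a single request after verifying that it came from a machine you provisioned
oc adm certificate approve <csr_name>
# Approve every request that has no status yet (only when all pending requests are expected)
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve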
Additional resources See Configuring a three-node cluster for details about deploying three-node clusters in bare metal environments. See Approving the certificate signing requests for your machines for more information about approving cluster certificate signing requests after installation. 3.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 3.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 3.3. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 3.3.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. 
These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 3.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 3.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. 
The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 3.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. Validating DNS resolution for user-provisioned infrastructure 3.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. 
Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 
Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. 
This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. 
From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. Additional resources Requirements for a cluster with user-provisioned infrastructure Installing RHCOS and starting the OpenShift Container Platform bootstrap process Setting the cluster node hostnames through DHCP Advanced RHCOS installation configuration Networking requirements for user-provisioned infrastructure User-provisioned DNS requirements Validating DNS resolution for user-provisioned infrastructure Load balancing requirements for user-provisioned infrastructure 3.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. 
Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. Additional resources User-provisioned DNS requirements Load balancing requirements for user-provisioned infrastructure 3.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. Additional resources Verifying node health 3.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 
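Before moving on, it can be worth confirming that the extracted binary runs and reports the release you expect; a quick check, assuming the binary was extracted into the current working directory:
./openshift-install version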
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. 
If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for bare metal 3.9.1. Sample install-config.yaml file for bare metal You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. 
These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources See Load balancing requirements for user-provisioned infrastructure for more information on the API and application ingress load balancing requirements. 3.10. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. 
Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 3.11. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 3.12. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 
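After the cluster is deployed, you can confirm the values that the CNO inherited by inspecting the cluster CR. The following is a minimal check, not part of the installation procedure, and assumes you have cluster-admin access and a working kubeconfig for the new cluster; the exact output depends on your configuration.

# Review the full CNO configuration that was created during installation.
oc get network.operator.openshift.io cluster -o yaml

# Print only the selected cluster network plugin.
oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.type}{"\n"}'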
3.12.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.9. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.10. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. 
ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.12. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.13. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.14. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. 
syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.15. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.16. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 3.17. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 3.18. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 3.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. 
This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 3.13. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory: 3.14. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. 
You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 3.14.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. 
If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step.
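For convenience, you can construct the full flag value in one step instead of copying the 128-character digest by hand. The following sketch assumes the bootstrap Ignition config is in your installation directory and that the sha512sum and awk utilities are available on the host.

# Print the value to pass to --ignition-hash for the bootstrap Ignition config.
echo "sha512-$(sha512sum <installation_directory>/bootstrap.ign | awk '{print $1}')"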
Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.14.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. 
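Any HTTP server that the machines you are provisioning can reach will work. For a quick lab or test setup only, a minimal sketch such as the following serves the installation directory over HTTP; this is an assumption of convenience, not a production recommendation, and it requires python3 on the host.

cd <installation_directory>
# Serve the Ignition config files on port 8080 (lab or test use only).
python3 -m http.server 8080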
From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.15/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. 
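Before you edit the boot menu entries, it can help to confirm from another host that the uploaded boot artifacts are reachable over HTTP. The following checks are a sketch with placeholder URLs; each request should return a 200 status and a non-zero Content-Length.

# Confirm that the kernel, initramfs, and rootfs artifacts are served correctly.
curl -I http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
curl -I http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img
curl -I http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img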
Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. 
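The example menu entries themselves are not reproduced in this text. As an illustration only, a PXE menu entry for x86_64 that is consistent with the callout descriptions above might look like the following sketch; the HTTP server address, file names, target device, and Ignition config URL are placeholders that you must replace with values from your environment.

# Illustrative PXE menu entry for a bootstrap node (x86_64); adapt all placeholders.
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign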
Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 3.14.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 3.14.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . 
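For example, a minimal nmcli sketch for assigning a static IPv4 address from the live shell follows; the connection name, interface, and address values are examples only and will differ in your environment.

# List the connections that NetworkManager created in the live environment.
nmcli connection show

# Assign a static address, gateway, and DNS server to the connection (example values).
sudo nmcli connection modify 'Wired connection 1' \
    ipv4.method manual \
    ipv4.addresses 192.168.1.20/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1

# Reactivate the connection so the settings take effect.
sudo nmcli connection up 'Wired connection 1'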
Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 3.14.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 3.14.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. 
With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.15.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 3.14.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. 
Saved partitions might be data partitions from an existing OpenShift Container Platform system. You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 3.14.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 3.14.3.4. Default console configuration Red Hat Enterprise Linux CoreOS (RHCOS) nodes installed from an OpenShift Container Platform 4.15 boot image use a default console that is meant to accomodate most virtualized and bare metal setups. 
Different cloud and virtualization platforms may use different default settings depending on the chosen architecture. Bare metal installations use the kernel default settings, which typically means the graphical console is the primary console and the serial console is disabled. The default consoles may not match your specific hardware configuration or you might have specific needs that require you to adjust the default console. For example: You want to access the emergency shell on the console for debugging purposes. Your cloud platform does not provide interactive access to the graphical console, but provides a serial console. You want to enable multiple consoles. Console configuration is inherited from the boot image. This means that new nodes in existing clusters are unaffected by changes to the default console. You can configure the console for bare metal installations in the following ways: Using coreos-installer manually on the command line. Using the coreos-installer iso customize or coreos-installer pxe customize subcommands with the --dest-console option to create a custom image that automates the process. Note For advanced customization, perform console configuration using the coreos-installer iso or coreos-installer pxe subcommands, and not kernel arguments. 3.14.3.5. Enabling the serial console for PXE and ISO installations By default, the Red Hat Enterprise Linux CoreOS (RHCOS) serial console is disabled and all output is written to the graphical console. You can enable the serial console for an ISO installation and reconfigure the bootloader so that output is sent to both the serial console and the graphical console. Procedure Boot the ISO installer. Run the coreos-installer command to install the system, adding the --console option once to specify the graphical console, and a second time to specify the serial console: USD coreos-installer install \ --console=tty0 \ 1 --console=ttyS0,<options> \ 2 --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> 1 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 2 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. Reboot into the installed system. Note A similar outcome can be obtained by using the coreos-installer install --append-karg option, and specifying the console with console= . However, this will only set the console for the kernel and not the bootloader. To configure a PXE installation, make sure the coreos.inst.install_dev kernel command line option is omitted, and use the shell prompt to run coreos-installer manually using the above ISO installation procedure. 3.14.3.6. Customizing a live RHCOS ISO or PXE install You can use the live ISO image or PXE environment to install RHCOS by injecting an Ignition config file directly into the image. This creates a customized image that you can use to provision your system. For an ISO image, the mechanism to do this is the coreos-installer iso customize subcommand, which modifies the .iso file with your configuration. Similarly, the mechanism for a PXE environment is the coreos-installer pxe customize subcommand, which creates a new initramfs file that includes your customizations.
The customize subcommand is a general purpose tool that can embed other types of customizations as well. The following tasks are examples of some of the more common customizations: Inject custom CA certificates for when corporate security policy requires their use. Configure network settings without the need for kernel arguments. Embed arbitrary preinstall and post-install scripts or binaries. 3.14.3.7. Customizing a live RHCOS ISO image You can customize a live RHCOS ISO image directly with the coreos-installer iso customize subcommand. When you boot the ISO image, the customizations are applied automatically. You can use this feature to configure the ISO image to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and the Ignition config file, and then run the following command to inject the Ignition config directly into the ISO image: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> 2 1 The Ignition config file that is generated from the openshift-installer installation program. 2 When you specify this option, the ISO image automatically runs an installation. Otherwise, the image remains configured for installation, but does not install automatically unless you specify the coreos.inst.install_dev kernel argument. Optional: To remove the ISO image customizations and return the image to its pristine state, run: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now re-customize the live ISO image or use it in its pristine state. Applying your customizations affects every subsequent boot of RHCOS. 3.14.3.7.1. Modifying a live install ISO image to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image to enable the serial console to receive output: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> 4 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the ISO image automatically runs the installation program which will fail unless you also specify the coreos.inst.install_dev kernel argument. Note The --dest-console option affects the installed system and not the live ISO system. To modify the console for a live ISO system, use the --live-karg-append option and specify the console with console= . 
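For example, a sketch of appending a serial console kernel argument to the live environment only might look like the following; the ISO file name and console settings are placeholders, and this assumes the --live-karg-append option that current coreos-installer releases provide.

# Append a serial console argument that applies only to the live ISO environment.
coreos-installer iso customize rhcos-<version>-live.x86_64.iso \
    --live-karg-append console=ttyS0,115200n8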
Your customizations are applied and affect every subsequent boot of the ISO image. Optional: To remove the ISO image customizations and return the image to its original state, run the following command: USD coreos-installer iso reset rhcos-<version>-live.x86_64.iso You can now recustomize the live ISO image or use it in its original state. 3.14.3.7.2. Modifying a live install ISO image to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image for use with a custom CA: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso --ignition-ca cert.pem Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 3.14.3.7.3. Modifying a live install ISO image with customized network settings You can embed a NetworkManager keyfile into the live ISO image and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS ISO image from the RHCOS image mirror page and run the following command to customize the ISO image with your configured networking: USD coreos-installer iso customize rhcos-<version>-live.x86_64.iso \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection Network settings are applied to the live system and are carried over to the destination system. 3.14.3.8. 
Customizing a live RHCOS PXE environment You can customize a live RHCOS PXE environment directly with the coreos-installer pxe customize subcommand. When you boot the PXE environment, the customizations are applied automatically. You can use this feature to configure the PXE environment to automatically install RHCOS. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new initramfs file that contains the customizations from your Ignition config: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition bootstrap.ign \ 1 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 2 -o rhcos-<version>-custom-initramfs.x86_64.img 3 1 The Ignition config file that is generated from openshift-installer . 2 When you specify this option, the PXE environment automatically runs an install. Otherwise, the image remains configured for installing, but does not do so automatically unless you specify the coreos.inst.install_dev kernel argument. 3 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Applying your customizations affects every subsequent boot of RHCOS. 3.14.3.8.1. Modifying a live install PXE environment to enable the serial console On clusters installed with OpenShift Container Platform 4.12 and above, the serial console is disabled by default and all output is written to the graphical console. You can enable the serial console with the following procedure. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and the Ignition config file, and then run the following command to create a new customized initramfs file that enables the serial console to receive output: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --dest-ignition <path> \ 1 --dest-console tty0 \ 2 --dest-console ttyS0,<options> \ 3 --dest-device /dev/disk/by-id/scsi-<serial_number> \ 4 -o rhcos-<version>-custom-initramfs.x86_64.img 5 1 The location of the Ignition config to install. 2 The desired secondary console. In this case, the graphical console. Omitting this option will disable the graphical console. 3 The desired primary console. In this case, the serial console. The options field defines the baud rate and other settings. A common value for this field is 115200n8 . If no options are provided, the default kernel value of 9600n8 is used. For more information on the format of this option, see the Linux kernel serial console documentation. 4 The specified disk to install to. If you omit this option, the PXE environment automatically runs the installer which will fail unless you also specify the coreos.inst.install_dev kernel argument. 5 Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Your customizations are applied and affect every subsequent boot of the PXE environment. 3.14.3.8.2. Modifying a live install PXE environment to use a custom certificate authority You can provide certificate authority (CA) certificates to Ignition with the --ignition-ca flag of the customize subcommand. 
You can use the CA certificates during both the installation boot and when provisioning the installed system. Note Custom CA certificates affect how Ignition fetches remote resources but they do not affect the certificates installed onto the system. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file for use with a custom CA: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --ignition-ca cert.pem \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Important The coreos.inst.ignition_url kernel parameter does not work with the --ignition-ca flag. You must use the --dest-ignition flag to create a customized image for each cluster. Applying your custom CA certificate affects every subsequent boot of RHCOS. 3.14.3.8.3. Modifying a live install PXE environment with customized network settings You can embed a NetworkManager keyfile into the live PXE environment and pass it through to the installed system with the --network-keyfile flag of the customize subcommand. Warning When creating a connection profile, you must use a .nmconnection filename extension in the filename of the connection profile. If you do not use a .nmconnection filename extension, the cluster will apply the connection profile to the live environment, but it will not apply the configuration when the cluster first boots up the nodes, resulting in a setup that does not work. Procedure Download the coreos-installer binary from the coreos-installer image mirror page. Create a connection profile for a bonded interface. For example, create the bond0.nmconnection file in your local directory with the following content: [connection] id=bond0 type=bond interface-name=bond0 multi-connect=1 [bond] miimon=100 mode=active-backup [ipv4] method=auto [ipv6] method=auto Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em1.nmconnection file in your local directory with the following content: [connection] id=em1 type=ethernet interface-name=em1 master=bond0 multi-connect=1 slave-type=bond Create a connection profile for a secondary interface to add to the bond. For example, create the bond0-proxy-em2.nmconnection file in your local directory with the following content: [connection] id=em2 type=ethernet interface-name=em2 master=bond0 multi-connect=1 slave-type=bond Retrieve the RHCOS kernel , initramfs and rootfs files from the RHCOS image mirror page and run the following command to create a new customized initramfs file that contains your configured networking: USD coreos-installer pxe customize rhcos-<version>-live-initramfs.x86_64.img \ --network-keyfile bond0.nmconnection \ --network-keyfile bond0-proxy-em1.nmconnection \ --network-keyfile bond0-proxy-em2.nmconnection \ -o rhcos-<version>-custom-initramfs.x86_64.img Use the customized initramfs file in your PXE configuration. Add the ignition.firstboot and ignition.platform.id=metal kernel arguments if they are not already present. Network settings are applied to the live system and are carried over to the destination system. 3.14.3.9. 
Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 3.14.3.9.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. 
If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. 
On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command ( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 3.14.3.9.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 3.20. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the specified destination device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i , --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment.
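For illustration only (the Ignition URL, kernel argument, and device path are placeholders rather than required values), several of these options are typically combined in a single run from the live environment: $ sudo coreos-installer install \ --ignition-url=http://<HTTP_server>/worker.ign \ --copy-network \ --append-karg console=ttyS0,115200n8 \ /dev/disk/by-id/scsi-<serial_number>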
Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. 
--network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o , --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 3.14.3.9.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 3.21. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware.
For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 3.14.4. Enabling multipathing with kernel arguments on RHCOS RHCOS supports multipathing on the primary disk, allowing stronger resilience to hardware failure to achieve higher host availability. You can enable multipathing at installation time for nodes that were provisioned in OpenShift Container Platform 4.8 or later. While postinstallation support is available by activating multipathing via the machine config, enabling multipathing during installation is recommended. In setups where any I/O to non-optimized paths results in I/O system errors, you must enable multipathing at installation time. Important On IBM Z(R) and IBM(R) LinuxONE, you can enable multipathing only if you configured your cluster for it during installation. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process" in Installing a cluster with z/VM on IBM Z(R) and IBM(R) LinuxONE . The following procedure enables multipath at installation time and appends kernel arguments to the coreos-installer install command so that the installed system itself will use multipath beginning from the first boot. Note OpenShift Container Platform does not support enabling multipathing as a day-2 activity on nodes that have been upgraded from 4.6 or earlier. Prerequisites You have created the Ignition config files for your cluster. You have reviewed Installing RHCOS and starting the OpenShift Container Platform bootstrap process . Procedure To enable multipath and start the multipathd daemon, run the following command on the installation host: USD mpathconf --enable && systemctl start multipathd.service Optional: If booting the PXE or ISO, you can instead enable multipath by adding rd.multipath=default from the kernel command line. Append the kernel arguments by invoking the coreos-installer program: If there is only one multipath device connected to the machine, it should be available at path /dev/mapper/mpatha . For example: USD coreos-installer install /dev/mapper/mpatha \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the path of the single multipathed device. If there are multiple multipath devices connected to the machine, or to be more explicit, instead of using /dev/mapper/mpatha , it is recommended to use the World Wide Name (WWN) symlink available in /dev/disk/by-id . For example: USD coreos-installer install /dev/disk/by-id/wwn-<wwn_ID> \ 1 --ignition-url=http://host/worker.ign \ --append-karg rd.multipath=default \ --append-karg root=/dev/disk/by-label/dm-mpath-root \ --append-karg rw 1 Indicates the WWN ID of the target multipathed device. For example, 0xx194e957fcedb4841 . This symlink can also be used as the coreos.inst.install_dev kernel argument when using special coreos.inst.* arguments to direct the live installer. For more information, see "Installing RHCOS and starting the OpenShift Container Platform bootstrap process". Reboot into the installed system. 
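Optionally, after the node reboots, you can also inspect the assembled multipath topology from a debug shell. This is a supplementary check rather than a required step, and the node name is a placeholder: $ oc debug node/<node_name> -- chroot /host multipath -ll The output should list the multipath device, for example mpatha , together with its active paths.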
Check that the kernel arguments worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline ... rd.multipath=default root=/dev/disk/by-label/dm-mpath-root ... sh-4.2# exit You should see the added kernel arguments. 3.14.4.1. Enabling multipathing on secondary disks RHCOS also supports multipathing on a secondary disk. Instead of kernel arguments, you use Ignition to enable multipathing for the secondary disk at installation time. Prerequisites You have read the section Disk partitioning . You have read Enabling multipathing with kernel arguments on RHCOS . You have installed the Butane utility. Procedure Create a Butane config with information similar to the following: Example multipath-config.bu variant: openshift version: 4.15.0 systemd: units: - name: mpath-configure.service enabled: true contents: | [Unit] Description=Configure Multipath on Secondary Disk ConditionFirstBoot=true ConditionPathExists=!/etc/multipath.conf Before=multipathd.service 1 DefaultDependencies=no [Service] Type=oneshot ExecStart=/usr/sbin/mpathconf --enable 2 [Install] WantedBy=multi-user.target - name: mpath-var-lib-containers.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target 1 The configuration must be set before launching the multipath daemon. 2 Starts the mpathconf utility. 3 This field must be set to the value true . 4 Creates the filesystem and directory /var/lib/containers . 5 The device must be mounted before starting any nodes. 6 Mounts the device to the /var/lib/containers mount point. This location cannot be a symlink. Create the Ignition configuration by running the following command: USD butane --pretty --strict multipath-config.bu > multipath-config.ign Continue with the rest of the first boot RHCOS installation process. Important Do not add the rd.multipath or root kernel arguments on the command-line during installation unless the primary disk is also multipathed. 3.15. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster.
You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. Additional resources See Monitoring installation progress for more information about monitoring the installation logs and retrieving diagnostic data if installation issues arise. 3.16. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.17. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. 
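If the list is long, you can narrow the output to pending requests only. This is a convenience filter, not a required step: $ oc get csr | grep -w Pending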
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 3.18. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. 
Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Configure the Operators that are not available. Additional resources See Gathering logs from a failed installation for details about gathering data in the event of a failed OpenShift Container Platform installation. See Troubleshooting Operator issues for steps to check Operator pod health across the cluster and gather Operator logs for diagnosis. 3.18.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.18.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.18.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 3.19. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 3.20. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.21. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage .
mpath-var-lib-container.service enabled: true contents: | [Unit] Description=Set Up Multipath On /var/lib/containers ConditionFirstBoot=true 3 Requires=dev-mapper-mpatha.device After=dev-mapper-mpatha.device After=ostree-remount.service Before=kubelet.service DefaultDependencies=no [Service] 4 Type=oneshot ExecStart=/usr/sbin/mkfs.xfs -L containers -m reflink=1 /dev/mapper/mpatha ExecStart=/usr/bin/mkdir -p /var/lib/containers [Install] WantedBy=multi-user.target - name: var-lib-containers.mount enabled: true contents: | [Unit] Description=Mount /var/lib/containers After=mpath-var-lib-containers.service Before=kubelet.service 5 [Mount] 6 What=/dev/disk/by-label/dm-mpath-containers Where=/var/lib/containers Type=xfs [Install] WantedBy=multi-user.target", "butane --pretty --strict multipath-config.bu > multipath-config.ign", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.28.5 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.28.5 master-1 Ready master 63m v1.28.5 master-2 Ready master 64m v1.28.5", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.28.5 master-1 Ready master 73m v1.28.5 master-2 Ready master 74m v1.28.5 worker-0 Ready worker 11m v1.28.5 worker-1 Ready worker 11m v1.28.5", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m 
openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 19m baremetal 4.15.0 True False False 37m cloud-credential 4.15.0 True False False 40m cluster-autoscaler 4.15.0 True False False 37m config-operator 4.15.0 True False False 38m console 4.15.0 True False False 26m csi-snapshot-controller 4.15.0 True False False 37m dns 4.15.0 True False False 37m etcd 4.15.0 True False False 36m image-registry 4.15.0 True False False 31m ingress 4.15.0 True False False 30m insights 4.15.0 True False False 31m kube-apiserver 4.15.0 True False False 26m kube-controller-manager 4.15.0 True False False 36m kube-scheduler 4.15.0 True False False 36m kube-storage-version-migrator 4.15.0 True False False 37m machine-api 4.15.0 True False False 29m machine-approver 4.15.0 True False False 37m machine-config 4.15.0 True False False 36m marketplace 4.15.0 True False False 37m monitoring 4.15.0 True False False 29m network 4.15.0 True False False 38m node-tuning 4.15.0 True False False 37m openshift-apiserver 4.15.0 True False False 32m openshift-controller-manager 4.15.0 True False False 30m openshift-samples 4.15.0 True False False 32m operator-lifecycle-manager 4.15.0 True False False 37m operator-lifecycle-manager-catalog 4.15.0 True False False 37m operator-lifecycle-manager-packageserver 4.15.0 True False False 32m service-ca 4.15.0 True False False 38m storage 4.15.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_bare_metal/installing-bare-metal-network-customizations
Chapter 4. Enabling Windows container workloads
Chapter 4. Enabling Windows container workloads Before adding Windows workloads to your cluster, you must install the Windows Machine Config Operator (WMCO), which is available in the OpenShift Container Platform OperatorHub. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. Note Dual NIC is not supported on WMCO-managed Windows instances. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed your cluster using installer-provisioned infrastructure, or using user-provisioned infrastructure with the platform: none field set in your install-config.yaml file. You have configured hybrid networking with OVN-Kubernetes for your cluster. This must be completed during the installation of your cluster. For more information, see Configuring hybrid networking . You are running an OpenShift Container Platform cluster version 4.6.8 or later. Note Windows instances deployed by the WMCO are configured with the containerd container runtime. Because WMCO installs and manages the runtime, it is recommended that you do not manually install containerd on nodes. Additional resources For the comprehensive prerequisites for the Windows Machine Config Operator, see Understanding Windows container workloads . 4.1. Installing the Windows Machine Config Operator You can install the Windows Machine Config Operator using either the web console or OpenShift CLI ( oc ). Note The WMCO is not supported in clusters that use a cluster-wide proxy because the WMCO is not able to route traffic through the proxy connection for the workloads. 4.1.1. Installing the Windows Machine Config Operator using the web console You can use the OpenShift Container Platform web console to install the Windows Machine Config Operator (WMCO). Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure From the Administrator perspective in the OpenShift Container Platform web console, navigate to the Operators OperatorHub page. Use the Filter by keyword box to search for Windows Machine Config Operator in the catalog. Click the Windows Machine Config Operator tile. Review the information about the Operator and click Install . On the Install Operator page: Select the stable channel as the Update Channel . The stable channel enables the latest stable release of the WMCO to be installed. The Installation Mode is preconfigured because the WMCO must be available in a single namespace only. Choose the Installed Namespace for the WMCO. The default Operator recommended namespace is openshift-windows-machine-config-operator . Click the Enable Operator recommended cluster monitoring on the Namespace checkbox to enable cluster monitoring for the WMCO. Select an Approval Strategy . The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. The Manual strategy requires a user with appropriate credentials to approve the Operator update. Click Install . The WMCO is now listed on the Installed Operators page. Note The WMCO is installed automatically into the namespace you defined, such as openshift-windows-machine-config-operator . Verify that the Status shows Succeeded to confirm successful installation of the WMCO. 4.1.2. Installing the Windows Machine Config Operator using the CLI You can use the OpenShift CLI ( oc ) to install the Windows Machine Config Operator (WMCO).
Note Dual NIC is not supported on WMCO-managed Windows instances. Procedure Create a namespace for the WMCO. Create a Namespace object YAML file for the WMCO. For example, wmco-namespace.yaml : apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: "true" 2 1 It is recommended to deploy the WMCO in the openshift-windows-machine-config-operator namespace. 2 This label is required for enabling cluster monitoring for the WMCO. Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-namespace.yaml Create the Operator group for the WMCO. Create an OperatorGroup object YAML file. For example, wmco-og.yaml : apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator Create the Operator group: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-og.yaml Subscribe the namespace to the WMCO. Create a Subscription object YAML file. For example, wmco-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: "stable" 1 installPlanApproval: "Automatic" 2 name: "windows-machine-config-operator" source: "redhat-operators" 3 sourceNamespace: "openshift-marketplace" 4 1 Specify stable as the channel. 2 Set an approval strategy. You can set Automatic or Manual . 3 Specify the redhat-operators catalog source, which contains the windows-machine-config-operator package manifests. If your OpenShift Container Platform is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM). 4 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources. Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f wmco-sub.yaml The WMCO is now installed to the openshift-windows-machine-config-operator namespace. Verify the WMCO installation: USD oc get csv -n openshift-windows-machine-config-operator Example output NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded 4.2. Configuring a secret for the Windows Machine Config Operator To run the Windows Machine Config Operator (WMCO), you must create a secret in the WMCO namespace containing a private key. This is required to allow the WMCO to communicate with the Windows virtual machine (VM). Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You created a PEM-encoded file containing an RSA key. Procedure Define the secret required to access the Windows VMs: USD oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> \ -n openshift-windows-machine-config-operator 1 1 You must create the private key in the WMCO namespace, such as openshift-windows-machine-config-operator . It is recommended to use a different private key than the one used when installing the cluster. 4.3. Additional resources Generating a key pair for cluster node SSH access Adding Operators to a cluster .
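After the secret is created, a quick way to confirm that the Operator is healthy before you add Windows workloads is to check the secret and the Operator pod; a minimal verification sketch, assuming the default openshift-windows-machine-config-operator namespace:
oc get secret cloud-private-key -n openshift-windows-machine-config-operator
oc get pods -n openshift-windows-machine-config-operator
The Operator pod should report a Running status before you provision Windows instances.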
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-windows-machine-config-operator 1 labels: openshift.io/cluster-monitoring: \"true\" 2", "oc create -f <file-name>.yaml", "oc create -f wmco-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: targetNamespaces: - openshift-windows-machine-config-operator", "oc create -f <file-name>.yaml", "oc create -f wmco-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: windows-machine-config-operator namespace: openshift-windows-machine-config-operator spec: channel: \"stable\" 1 installPlanApproval: \"Automatic\" 2 name: \"windows-machine-config-operator\" source: \"redhat-operators\" 3 sourceNamespace: \"openshift-marketplace\" 4", "oc create -f <file-name>.yaml", "oc create -f wmco-sub.yaml", "oc get csv -n openshift-windows-machine-config-operator", "NAME DISPLAY VERSION REPLACES PHASE windows-machine-config-operator.2.0.0 Windows Machine Config Operator 2.0.0 Succeeded", "oc create secret generic cloud-private-key --from-file=private-key.pem=USD{HOME}/.ssh/<key> -n openshift-windows-machine-config-operator 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/windows_container_support_for_openshift/enabling-windows-container-workloads
8.105. logrotate
8.105. logrotate 8.105.1. RHBA-2013:1095 - logrotate bug fix update Updated logrotate packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The logrotate utility simplifies the administration of multiple log files, allowing the automatic rotation, compression, removal, and mailing of log files. Bug Fixes BZ# 841520 The logrotate utility always tried to set the owner of the rotated log even when the new owner was the same as the current owner of the log file. Consequently, the rotation failed on file systems or systems where changing the ownership was not supported. With this update, before the ownership is changed, logrotate checks whether the ownership actually changes; that is, logrotate verifies whether the new owner differs from the current one and skips the change if it does not. The logrotate utility now rotates logs as expected in this scenario. BZ# 847338 Setting the access control list (ACL) on a rotated log overwrote the previously set mode of the log file. As a consequence, the "create" directive was ignored. To fix this bug, the ACL is no longer copied from the old log file when using the "create" directive, and the mode defined using the "create" directive is used instead. As a result, "create" mode works as expected and is no longer ignored in the described scenario. BZ# 847339 Both the acl_set_fd() and fchmod() functions were called to set the log file permissions. Consequently, there was a race condition where the log file could have unsafe permissions for a short time during its creation. With this update, only one of those functions is called, depending on the combination of directives used in the configuration file, and the race condition between the acl_set_fd() and fchmod() functions is no longer possible in the described scenario. BZ# 848131 Because the inverse umask value 0000 was used when creating a new log file, the newly created log file could have unwanted 0600 permissions for a short time before the permissions were set to the proper value using the fchmod() function. With this update, umask is set to 0777 and the newly created log file has the proper 0000 permissions during this short period. BZ#920030 The default SELinux context was set after the compressed log file had been created. Consequently, the compressed log did not have the proper SELinux context. With this update, the default SELinux context is set before the compressed log file is created, and compressed log files now have the proper SELinux context. BZ#922169 Temporary files created by the logrotate utility were not removed if an error occurred during its use. With this update, temporary files are now removed in such a case. Users of logrotate are advised to upgrade to these updated packages, which fix these bugs.
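For reference, the "create" directive discussed in BZ# 847338 is set per log file in the logrotate configuration; a minimal illustrative snippet, assuming a hypothetical /var/log/myapp.log file:
/var/log/myapp.log {
    weekly
    rotate 4
    compress
    create 0640 root root
}
With the fixed packages, each rotation creates the new log file with the mode given to the "create" directive (0640 in this sketch), even when an ACL was set on the old log file.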
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/logrotate
Data Security and Hardening Guide
Data Security and Hardening Guide Red Hat Ceph Storage 8 Red Hat Ceph Storage Data Security and Hardening Guide Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/data_security_and_hardening_guide/index
Chapter 4. Configuring CPUs on Compute nodes
Chapter 4. Configuring CPUs on Compute nodes Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. As a cloud administrator, you can configure the scheduling and placement of instances for optimal performance by creating customized flavors to target specialized workloads, including NFV and High Performance Computing (HPC). Use the following features to tune your instances for optimal CPU performance: CPU pinning : Pin virtual CPUs to physical CPUs. Emulator threads : Pin emulator threads associated with the instance to physical CPUs. CPU feature flags : Configure the standard set of CPU feature flags that are applied to instances to improve live migration compatibility across Compute nodes. 4.1. Configuring CPU pinning on Compute nodes You can configure each instance CPU process to run on a dedicated host CPU by enabling CPU pinning on the Compute nodes. When an instance uses CPU pinning, each instance vCPU process is allocated its own host pCPU that no other instance vCPU process can use. Instances that run on Compute nodes with CPU pinning enabled have a NUMA topology. Each NUMA node of the instance NUMA topology maps to a NUMA node on the host Compute node. You can configure the Compute scheduler to schedule instances with dedicated (pinned) CPUs and instances with shared (floating) CPUs on the same Compute node. To configure CPU pinning on Compute nodes that have a NUMA topology, you must complete the following: Designate Compute nodes for CPU pinning. Configure the Compute nodes to reserve host cores for pinned instance vCPU processes, floating instance vCPU processes, and host processes. Deploy the data plane. Create a flavor for launching instances that require CPU pinning. Create a flavor for launching instances that use shared, or floating, CPUs. Note Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts. 4.1.1. Prerequisites You know the NUMA topology of your Compute node. The oc command line tool is installed on your workstation. You are logged in to Red Hat OpenStack Services on OpenShift (RHOSO) as a user with cluster-admin privileges. 4.1.2. Designating and configuring Compute nodes for CPU pinning To designate Compute nodes for instances with pinned CPUs, you must create and configure a new OpenStackDataPlaneNodeSet custom resource (CR) for the nodes that are designated for CPU pinning. Configure CPU pinning on your Compute nodes based on the NUMA topology of the nodes. Reserve some CPU cores across all the NUMA nodes for the host processes for efficiency. Assign the remaining CPU cores to managing your instances. This procedure uses the following NUMA topology, with eight CPU cores spread across two NUMA nodes, to illustrate how to configure CPU pinning: Table 4.1. Example NUMA topology: NUMA Node 0 contains cores 0, 1, 2, and 3; NUMA Node 1 contains cores 4, 5, 6, and 7. The procedure reserves cores 0 and 4 for host processes, cores 1, 3, 5 and 7 for instances that require CPU pinning, and cores 2 and 6 for floating instances that do not require CPU pinning. Note The following procedure applies to new OpenStackDataPlaneNodeSet CRs that have not yet been provisioned.
To reconfigure an existing OpenStackDataPlaneNodeSet that has already been provisioned, you must first drain the guest instances from all the nodes in the OpenStackDataPlaneNodeSet . Note Configuring CPU pinning creates an implicit NUMA topology on the instance even if a NUMA topology is not requested. Do not run NUMA and non-NUMA virtual machines (VMs) on the same hosts. Prerequisites You have selected the OpenStackDataPlaneNodeSet CR that defines the nodes for which you want to designate and configure CPU pinning. For more information about creating an OpenStackDataPlaneNodeSet CR, see Creating the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide. Procedure Create or update the ConfigMap CR named nova-extra-config.yaml and set the values of the parameters under [compute] and [DEFAULT]: 1 The name of the new Compute configuration file. The nova-operator generates the default configuration file with the name 01-nova.conf . Do not use the default name, because it would override the infrastructure configuration, such as the transport_url . The nova-compute service applies every file under /etc/nova/nova.conf.d/ in lexicographical order; therefore, configurations defined in later files override the same configurations defined in an earlier file. 2 Reserves physical CPU cores for the shared instances. 3 Reserves physical CPU cores for the dedicated instances. 4 Specifies the amount of memory to reserve per NUMA node. For more information about creating ConfigMap objects, see Creating and using config maps . Create a new OpenStackDataPlaneDeployment CR to configure the services on the data plane nodes and deploy the data plane, and save it to a file named compute_cpu_pinning_deploy.yaml on your workstation: For more information about creating an OpenStackDataPlaneDeployment CR, see Deploying the data plane in the Deploying Red Hat OpenStack Services on OpenShift guide. In the compute_cpu_pinning_deploy.yaml , specify nodeSets to include all the OpenStackDataPlaneNodeSet CRs you want to deploy. Ensure that you include the OpenStackDataPlaneNodeSet CR that you selected as a prerequisite. That OpenStackDataPlaneNodeSet CR defines the nodes you want to designate for CPU pinning. Warning You can configure only whole node sets. Reconfiguring a subset of the nodes within a node set is not supported. If you need to reconfigure a subset of nodes within a node set, you must scale the node set down, and create a new node set from the previously removed nodes. Warning If your deployment has more than one node set, changes to the nova-extra-config.yaml ConfigMap might directly affect more than one node set, depending on how the node sets and the DataPlaneServices are configured. To check if a node set uses the nova-extra-config ConfigMap and therefore will be affected by the reconfiguration, complete the following steps: Check the services list of the node set and find the name of the DataPlaneService that points to nova. Ensure that the value of the edpmServiceType field of the DataPlaneService is set to nova . If the dataSources list of the DataPlaneService contains a configMapRef named nova-extra-config , then this node set uses this ConfigMap and therefore will be affected by the configuration changes in this ConfigMap . If some of the node sets that are affected should not be reconfigured, you must create a new DataPlaneService pointing to a separate ConfigMap for these node sets.
Replace <nodeSet_name> with the names of the OpenStackDataPlaneNodeSet CRs that you want to include in your data plane deployment. Save the compute_cpu_pinning_deploy.yaml deployment file. Deploy the data plane: Verify that the data plane is deployed: Access the remote shell for openstackclient and verify that the deployed Compute nodes are visible on the control plane: 4.1.3. Creating a dedicated CPU flavor for instances To enable your cloud users to create instances that have dedicated CPUs, you can create a flavor with a dedicated CPU policy for launching instances. Prerequisites Simultaneous multithreading (SMT) is configured on the host if you intend to use the required cpu_thread_policy . You can have a mix of SMT and non-SMT Compute hosts. Flavors with the require cpu_thread_policy will land on SMT hosts, and flavors with isolate will land on non-SMT. The Compute node is configured to allow CPU pinning. For more information, see Configuring CPU pinning on the Compute nodes . Procedure Create a flavor for instances that require CPU pinning: If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. Note To set hw:mem_page_size to small or any , you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances. To request pinned CPUs, set the hw:cpu_policy property of the flavor to dedicated : Optional: To place each vCPU on thread siblings, set the hw:cpu_thread_policy property of the flavor to require : Note If the host does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling fails. To prevent this, set hw:cpu_thread_policy to prefer instead of require . The prefer policy is the default policy that ensures that thread siblings are used when available. If you use hw:cpu_thread_policy=isolate , you must have SMT disabled or use a platform that does not support SMT. To verify the flavor creates an instance with dedicated CPUs, use your new flavor to launch an instance: 4.1.4. Creating a shared CPU flavor for instances To enable your cloud users to create instances that use shared, or floating, CPUs, you can create a flavor with a shared CPU policy for launching instances. Prerequisites The Compute node is configured to reserve physical CPU cores for the shared CPUs. For more information, see Configuring CPU pinning on the Compute nodes . Procedure Create a flavor for instances that do not require CPU pinning: To request floating CPUs, set the hw:cpu_policy property of the flavor to shared : 4.1.5. Creating a mixed CPU flavor for instances To enable your cloud users to create instances that have a mix of dedicated and shared CPUs, you can create a flavor with a mixed CPU policy for launching instances. 
Procedure Create a flavor for instances that require a mix of dedicated and shared CPUs: Specify which CPUs must be dedicated or shared: Replace <CPU_MASK> with the CPUs that must be either dedicated or shared: To specify dedicated CPUs, specify the CPU number or CPU range. For example, set the property to 2-3 to specify that CPUs 2 and 3 are dedicated and all the remaining CPUs are shared. To specify shared CPUs, prepend the CPU number or CPU range with a caret (^). For example, set the property to ^0-1 to specify that CPUs 0 and 1 are shared and all the remaining CPUs are dedicated. If you are not using file-backed memory, set the hw:mem_page_size property of the flavor to enable NUMA-aware memory allocation: Replace <page_size> with one of the following valid values: large : Selects the largest page size supported on the host, which may be 2 MB or 1 GB on x86_64 systems. small : (Default) Selects the smallest page size supported on the host. On x86_64 systems this is 4 kB (normal pages). any : Selects the page size by using the hw_mem_page_size set on the image. If the page size is not specified by the image, selects the largest available page size, as determined by the libvirt driver. <pagesize> : Set an explicit page size if the workload has specific requirements. Use an integer value for the page size in KB, or any standard suffix. For example: 4KB, 2MB, 2048, 1GB. Note To set hw:mem_page_size to small or any , you must have configured the amount of memory pages to reserve on each NUMA node for processes that are not instances. 4.1.6. Configuring CPU pinning on Compute nodes with simultaneous multithreading (SMT) If a Compute node supports simultaneous multithreading (SMT), group thread siblings together in either the dedicated or the shared set. Thread siblings share some common hardware which means it is possible for a process running on one thread sibling to impact the performance of the other thread sibling. For example, the host identifies four logical CPU cores in a dual core CPU with SMT: 0, 1, 2, and 3. Of these four, there are two pairs of thread siblings: Thread sibling 1: logical CPU cores 0 and 2 Thread sibling 2: logical CPU cores 1 and 3 In this scenario, do not assign logical CPU cores 0 and 1 as dedicated and 2 and 3 as shared. Instead, assign 0 and 2 as dedicated and 1 and 3 as shared. The files /sys/devices/system/cpu/cpuN/topology/thread_siblings_list , where N is the logical CPU number, contain the thread pairs. You can use the following command to identify which logical CPU cores are thread siblings: The following output indicates that logical CPU core 0 and logical CPU core 2 are threads on the same core: 4.1.7. Additional resources Discovering your NUMA node topology
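Before choosing the cpu_shared_set and cpu_dedicated_set values, it can help to confirm the NUMA layout and thread siblings directly on the Compute node; a quick check, assuming the numactl package is installed on the node:
lscpu | grep -i numa
numactl --hardware
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
The output shows which logical CPUs belong to each NUMA node and which are thread siblings, so the host-reserved, dedicated, and shared sets can each be spread across the NUMA nodes as in the example topology above.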
[ "apiVersion: v1 kind: ConfigMap metadata: name: nova-extra-config namespace: openstack data: 25-nova-cpu-pinning.conf: | 1 [compute] cpu_shared_set = 2,6 2 cpu_dedicated_set = 1,3,5,7 3 [DEFAULT] reserved_huge_pages = node:0,size:4,count:131072 4 reserved_huge_pages = node:1,size:4,count:131072", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-cpu-pinning", "apiVersion: dataplane.openstack.org/v1beta1 kind: OpenStackDataPlaneDeployment metadata: name: openstack-edpm-cpu-pinning spec: nodeSets: - openstack-edpm - compute-cpu-pinning - - <nodeSet_name>", "oc create -f compute_cpu_pinning_deploy.yaml", "oc get openstackdataplanenodeset NAME STATUS MESSAGE compute-cpu-pinning True Deployed", "oc rsh -n openstack openstackclient openstack hypervisor list", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <num_guest_vcpus> pinned_cpus", "openstack --os-compute-api=2.86 flavor set --property hw:mem_page_size=<page_size> pinned_cpus", "openstack --os-compute-api=2.86 flavor set --property hw:cpu_policy=dedicated pinned_cpus", "openstack --os-compute-api=2.86 flavor set --property hw:cpu_thread_policy=require pinned_cpus", "openstack server create --flavor pinned_cpus --image <image> pinned_cpu_instance", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> floating_cpus", "openstack --os-compute-api=2.86 flavor set --property hw:cpu_policy=shared floating_cpus", "openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <number_of_reserved_vcpus> --property hw:cpu_policy=mixed mixed_CPUs_flavor", "openstack --os-compute-api=2.86 flavor set --property hw:cpu_dedicated_mask=<CPU_MASK> mixed_CPUs_flavor", "openstack --os-compute-api=2.86 flavor set --property hw:mem_page_size=<page_size> mixed_CPUs_flavor", "grep -H . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list | sort -n -t ':' -k 2 -u", "/sys/devices/system/cpu/cpu0/topology/thread_siblings_list:0,2 /sys/devices/system/cpu/cpu2/topology/thread_siblings_list:1,3" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-cpus-on-compute-nodes
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/snip-conscious-language_installing-and-configuring
Chapter 2. Installation
Chapter 2. Installation This chapter guides you through the steps to install Red Hat build of Apache Qpid ProtonJ2 in your environment. 2.1. Prerequisites You must have a subscription to access AMQ release files and repositories. To build programs with Red Hat build of Apache Qpid ProtonJ2, you must install Apache Maven . To use Red Hat build of Apache Qpid ProtonJ2, you must install Java. 2.2. Installing on Red Hat Enterprise Linux 2.3. Using the Red Hat Maven repository Configure your Maven environment to download the client library from the Red Hat Maven repository. Procedure Add the Red Hat repository to your Maven settings or POM file. For example configuration files, see Section B.1, "Using the online repository" . <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> Add the library dependency to your POM file. <dependency> <groupId>org.apache.qpid</groupId> <artifactId>protonj2-client</artifactId> <version>1.0.0.M18-redhat-00002</version> </dependency> The client is now available in your Maven project. 2.4. Installing a local Maven repository As an alternative to the online repository, Red Hat build of Apache Qpid ProtonJ2 can be installed to your local filesystem as a file-based Maven repository. Procedure Use your subscription to download the amq-qpid-protonj2-1.0.0.M18 Maven repository .zip file. Extract the file contents into a directory of your choosing. On Linux or UNIX, use the unzip command to extract the file contents. USD unzip amq-qpid-protonj2-1.0.0.M18-maven-repository.zip On Windows, right-click the .zip file and select Extract All . Configure Maven to use the repository in the maven-repository directory inside the extracted install directory. For more information, see Section B.2, "Using a local repository" . 2.5. Installing the examples Use the git clone command to clone the source repository to a local directory named qpid-protonj2 : USD git clone https://github.com/apache/qpid-protonj2.git Change to the qpid-protonj2 directory and use the git checkout command to check out the commit associated with this release: USD cd qpid-protonj2 USD git checkout 1.0.0-M18 USD cd protonj2-client-examples The resulting local directory is referred to as <source-dir> in this guide.
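After the repository and dependency are configured, you can confirm that Maven resolves the client artifact before writing any code; a quick check from the project directory, assuming a standard Maven installation:
mvn dependency:resolve
If the protonj2-client artifact downloads without errors, the repository configuration is working and you can build the examples.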
[ "<repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository>", "<dependency> <groupId>org.apache.qpid</groupId> <artifactId>protonj2-client</artifactId> <version>1.0.0.M18-redhat-00002</version> </dependency>", "unzip amq-qpid-protonj2-1.0.0.M18-maven-repository.zip", "git clone https://github.com/apache/qpid-protonj2.git", "cd qpid-protonj2 git checkout 1.0.0-M18 cd protonj2-client-examples" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_protonj2/1.0/html/using_qpid_protonj2/installation
Sandboxed Containers Support for OpenShift
Sandboxed Containers Support for OpenShift OpenShift Container Platform 4.12 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/sandboxed_containers_support_for_openshift/index
22.2. Creating Host-Based Access Control Entries for Services and Service Groups
22.2. Creating Host-Based Access Control Entries for Services and Service Groups Any PAM service can be identified to the host-based access control (HBAC) system in IdM. The service entries used in host-based access control are separate from adding a service to the IdM domain. Adding a service to the domain makes it a recognized resource which is available to other resources. Adding a domain resource to the host-based access control configuration allows administrators to exert defined control over what domain users and what domain clients can access that service. Some common services are already configured as HBAC services, so they can be used in host-based access control rules. Additional services can be added, and services can be added into service groups for simpler management. 22.2.1. Adding HBAC Services 22.2.1.1. Adding HBAC Services in the Web UI Click the Policy tab. Click the Host-Based Access Control subtab, and then select the HBAC Services link. Click the Add link at the top of the list of services. Enter the service name and a description. Click the Add button to save the new service. If a service group already exists, then add the service to the desired group, as described in Section 22.2.2.1, "Adding Service Groups in the Web UI" . 22.2.1.2. Adding Services in the Command Line The service is added to the access control system using the hbacsvc-add command, specifying the service by the name that PAM uses to evaluate the service. For example, this adds the tftp service: If a service group already exists, then the service can be added to the group using the hbacsvcgroup-add-member command, as in Section 22.2.2.2, "Adding Service Groups in the Command Line" . 22.2.2. Adding Service Groups Once the individual service is added, it can be added to the access control rule. However, if there is a large number of services, then it can require frequent updates to the access control rules as services change. Identity Management also allows groups of services to be added to access control rules. This makes it much easier to manage access control, because the members of the service group can be changed without having to edit the rule itself. 22.2.2.1. Adding Service Groups in the Web UI Click the Policy tab. Click the Host-Based Access Control subtab, and then select the HBAC Service Groups link. Click the Add link at the top of the list of service groups. Enter the service group name and a description. Click the Add and Edit button to go immediately to the service group configuration page. At the top of the HBAC Services tab, click the Add link. Click the checkbox by the names of the services to add, and click the right arrows button, >> , to move the services to the selection box. Click the Add button to save the group membership. 22.2.2.2. Adding Service Groups in the Command Line First create the service group entry, then create the service, and then add that service to the service group as a member. For example: Note IdM defines two default service groups: SUDO for sudo services and FTP for services which provide FTP access.
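Once a service or service group exists, it is referenced from HBAC rules rather than used on its own; for example, a sketch that adds the login service group shown above to a hypothetical existing rule named allow_login:
ipa hbacrule-add-service --hbacsvcgroups=login allow_login
The rule then applies to every member of the service group, so services can later be added to or removed from the group without editing the rule itself.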
[ "ipa hbacsvc-add --desc=\"TFTP service\" tftp ------------------------- Added HBAC service \"tftp\" ------------------------- Service name: tftp Description: TFTP service", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa hbacsvcgroup-add --desc=\"login services\" login -------------------------------- Added HBAC service group \"login\" -------------------------------- Service group name: login Description: login services [jsmith@server ~]USD ipa hbacsvc-add --desc=\"SSHD service\" sshd ------------------------- Added HBAC service \"sshd\" ------------------------- Service name: sshd Description: SSHD service [jsmith@server ~]USD ipa hbacsvcgroup-add-member --hbacsvcs=sshd login Service group name: login Description: login services ------------------------- Number of members added 1 -------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/HBAC_Service_Groups
Chapter 2. Setting up to Manage Application Versions
Chapter 2. Setting up to Manage Application Versions Effective version control is essential to all multi-developer projects. Red Hat Enterprise Linux is distributed with Git , a distributed version control system. Select the Development Tools Add-on during the system installation to install Git . Alternatively, install the git package from the Red Hat Enterprise Linux repositories after the system is installed. To get the latest version of Git supported by Red Hat, install the rh-git227 component from Red Hat Software Collections. Set the full name and email address associated with your Git commits: Replace full name and email_address with your actual name and email address. To change the default text editor started by Git , set the value of the core.editor configuration option: Replace command with the command to be used to start the selected text editor. Additional Resources Chapter 11, Using Git
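To confirm that the settings were stored, list the global configuration; a quick check:
git config --global --list
The output should include the user.name, user.email, and, if set, core.editor values.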
[ "yum install git", "yum install rh-git227", "git config --global user.name \" full name \" git config --global user.email \" email_address \"", "git config --global core.editor command" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/developer_guide/setting-up_setup-managing-versions
8.2. Configuring NFS Client
8.2. Configuring NFS Client The mount command mounts NFS shares on the client side. Its format is as follows: This command uses the following variables: options A comma-delimited list of mount options; for more information on valid NFS mount options, see Section 8.4, "Common NFS Mount Options" . server The hostname, IP address, or fully qualified domain name of the server exporting the file system you wish to mount /remote/export The file system or directory being exported from the server , that is, the directory you wish to mount /local/directory The client location where /remote/export is mounted The NFS protocol version used in Red Hat Enterprise Linux 7 is identified by the mount options nfsvers or vers . By default, mount uses NFSv4 with mount -t nfs . If the server does not support NFSv4, the client automatically steps down to a version supported by the server. If the nfsvers / vers option is used to pass a particular version not supported by the server, the mount fails. The file system type nfs4 is also available for legacy reasons; this is equivalent to running mount -t nfs -o nfsvers=4 host : /remote/export /local/directory . For more information, see man mount . If an NFS share was mounted manually, the share will not be automatically mounted upon reboot. Red Hat Enterprise Linux offers two methods for mounting remote file systems automatically at boot time: the /etc/fstab file and the autofs service. For more information, see Section 8.2.1, "Mounting NFS File Systems Using /etc/fstab " and Section 8.3, " autofs " . 8.2.1. Mounting NFS File Systems Using /etc/fstab An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file. Example 8.1. Syntax Example The general syntax for the line in /etc/fstab is as follows: The mount point /pub must exist on the client machine before this command can be executed. After adding this line to /etc/fstab on the client system, use the command mount /pub , and the mount point /pub is mounted from the server. A valid /etc/fstab entry to mount an NFS export should contain the following information: The variables server , /remote/export , /local/directory , and options are the same ones used when manually mounting an NFS share. For more information, see Section 8.2, "Configuring NFS Client" . Note The mount point /local/directory must exist on the client before /etc/fstab is read. Otherwise, the mount fails. After editing /etc/fstab , regenerate mount units so that your system registers the new configuration: Additional Resources For more information about /etc/fstab , refer to man fstab .
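For example, a minimal manual mount that pins the protocol version and mounts the export read-only, assuming a hypothetical server.example.com host exporting /srv/share:
mkdir -p /mnt/share
mount -t nfs -o nfsvers=4.1,ro server.example.com:/srv/share /mnt/share
The equivalent persistent /etc/fstab entry would be:
server.example.com:/srv/share /mnt/share nfs nfsvers=4.1,ro 0 0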
[ "mount -t nfs -o options server : /remote/export /local/directory", "server:/usr/local/pub /pub nfs defaults 0 0", "server : /remote/export /local/directory nfs options 0 0", "systemctl daemon-reload" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/nfs-clientconfig
Managing certificates in IdM
Managing certificates in IdM Red Hat Enterprise Linux 9 Issuing certificates, configuring certificate-based authentication, and controlling certificate validity Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_certificates_in_idm/index
Chapter 4. Resolved issues and known issues
Chapter 4. Resolved issues and known issues 4.1. Resolved issues See Resolved issues for JBoss EAP XP 4.0.0 to view the list of issues that have been resolved for this release. 4.2. Known issues See Known issues for JBoss EAP XP 4.0.0 to view the list of known issues for this release.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/red_hat_jboss_eap_xp_4.0.0_release_notes/resolved_issues_and_known_issues
4.307. strace
4.307. strace 4.307.1. RHBA-2012:0028 - strace bug fix update An updated strace package that fixes one bug is now available for Red Hat Enterprise Linux 6. The strace program intercepts and records the system calls made by a running process and the signals it receives. It can print a record of each system call, its arguments and its return value. The strace utility is useful for diagnosing, debugging and instructional purposes. Bug Fix BZ# 772569 The strace utility did not properly track switches between 32-bit and 64-bit process execution domains (so called "personalities") when tracing multiple processes with multiple "personalities". This caused strace to output the wrong system call names and arguments for the traced processes. This update corrects personality tracking in strace so that it now prints system call names and arguments as expected. All users of strace are advised to upgrade to this updated package, which fixes this bug.
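For example, a typical invocation that follows child processes and writes the record of each system call to a file, assuming a hypothetical ./app binary:
strace -f -o trace.log ./app
With the fixed package, the system call names and arguments written to trace.log are correct even when the traced processes switch between 32-bit and 64-bit execution domains.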
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/strace
Chapter 14. Configuring Multi-Site Clusters with Pacemaker
Chapter 14. Configuring Multi-Site Clusters with Pacemaker When a cluster spans more than one site, issues with network connectivity between the sites can lead to split-brain situations. When connectivity drops, there is no way for a node on one site to determine whether a node on another site has failed or is still functioning with a failed site interlink. In addition, it can be problematic to provide high availability services across two sites which are too far apart to keep synchronous. To address these issues, Red Hat Enterprise Linux release 7.4 provides full support for the ability to configure high availability clusters that span multiple sites through the use of a Booth cluster ticket manager. The Booth ticket manager is a distributed service that is meant to be run on a different physical network than the networks that connect the cluster nodes at particular sites. It yields another, loose cluster, a Booth formation , that sits on top of the regular clusters at the sites. This aggregated communication layer facilitates consensus-based decision processes for individual Booth tickets. A Booth ticket is a singleton in the Booth formation and represents a time-sensitive, movable unit of authorization. Resources can be configured to require a certain ticket to run. This can ensure that resources are run at only one site at a time, for which a ticket or tickets have been granted. You can think of a Booth formation as an overlay cluster consisting of clusters running at different sites, where all the original clusters are independent of each other. It is the Booth service which communicates to the clusters whether they have been granted a ticket, and it is Pacemaker that determines whether to run resources in a cluster based on a Pacemaker ticket constraint. This means that when using the ticket manager, each of the clusters can run its own resources as well as shared resources. For example, there can be resources A, B, and C running only in one cluster, resources D, E, and F running only in the other cluster, and resources G and H running in either of the two clusters as determined by a ticket. It is also possible to have an additional resource J that could run in either of the two clusters as determined by a separate ticket. The following procedure provides an outline of the steps you follow to configure a multi-site configuration that uses the Booth ticket manager. These example commands use the following arrangement: Cluster 1 consists of the nodes cluster1-node1 and cluster1-node2 Cluster 1 has a floating IP address assigned to it of 192.168.11.100 Cluster 2 consists of cluster2-node1 and cluster2-node2 Cluster 2 has a floating IP address assigned to it of 192.168.22.100 The arbitrator node is arbitrator-node with an IP address of 192.168.99.100 The name of the Booth ticket that this configuration uses is apacheticket These example commands assume that the cluster resources for an Apache service have been configured as part of the resource group apachegroup for each cluster. It is not required that the resources and resource groups be the same on each cluster to configure a ticket constraint for those resources, since the Pacemaker instance for each cluster is independent, but that is a common failover scenario. For a full cluster configuration procedure that configures an Apache service in a cluster, see the example in High Availability Add-On Administration .
Note that at any time in the configuration procedure you can enter the pcs booth config command to display the booth configuration for the current node or cluster or the pcs booth status command to display the current status of booth on the local node. Install the booth-site Booth ticket manager package on each node of both clusters. Install the pcs , booth-core , and booth-arbitrator packages on the arbitrator node. Ensure that ports 9929/tcp and 9929/udp are open on all cluster nodes and on the arbitrator node. For example, running the following commands on all nodes in both clusters as well as on the arbitrator node allows access to ports 9929/tcp and 9929/udp on those nodes. Note that this procedure in itself allows any machine anywhere to access port 9929 on the nodes. You should ensure that on your site the nodes are open only to the nodes that require them. Create a Booth configuration on one node of one cluster. The addresses you specify for each cluster and for the arbitrator must be IP addresses. For each cluster, you specify a floating IP address. This command creates the configuration files /etc/booth/booth.conf and /etc/booth/booth.key on the node from which it is run. Create a ticket for the Booth configuration. This is the ticket that you will use to define the resource constraint that will allow resources to run only when this ticket has been granted to the cluster. This basic failover configuration procedure uses only one ticket, but you can create additional tickets for more complicated scenarios where each ticket is associated with a different resource or resources. Synchronize the Booth configuration to all nodes in the current cluster. From the arbitrator node, pull the Booth configuration to the arbitrator. If you have not previously done so, you must first authenticate pcs to the node from which you are pulling the configuration. Pull the Booth configuration to the other cluster and synchronize to all the nodes of that cluster. As with the arbitrator node, if you have not previously done so, you must first authenticate pcs to the node from which you are pulling the configuration. Start and enable Booth on the arbitrator. Note You must not manually start or enable Booth on any of the nodes of the clusters since Booth runs as a Pacemaker resource in those clusters. Configure Booth to run as a cluster resource on both cluster sites. This creates a resource group with booth-ip and booth-service as members of that group. Add a ticket constraint to the resource group you have defined for each cluster. You can enter the following command to display the currently configured ticket constraints. Grant the ticket you created for this setup to the first cluster. Note that it is not necessary to have defined ticket constraints before granting a ticket. Once you have initially granted a ticket to a cluster, then Booth takes over ticket management unless you override this manually with the pcs booth ticket revoke command. For information on the pcs booth administration commands, see the PCS help screen for the pcs booth command. It is possible to add or remove tickets at any time, even after completing this procedure. After adding or removing a ticket, however, you must synchronize the configuration files to the other nodes and clusters as well as to the arbitrator and grant the ticket as is shown in this procedure. 
For information on additional Booth administration commands that you can use for cleaning up and removing Booth configuration files, tickets, and resources, see the PCS help screen for the pcs booth command.
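If you want a quick way to confirm what each node ended up with after the synchronization steps above, the pcs booth commands mentioned earlier can be run on any cluster node or on the arbitrator. The following is a minimal sketch using the example addresses and ticket name from this chapter; the commented output is only an approximation of the booth.conf contents and can differ between versions.
# Show the Booth configuration known to this node (backed by /etc/booth/booth.conf)
pcs booth config
# Expected to list both sites, the arbitrator, and the ticket, roughly:
#   site = 192.168.11.100
#   site = 192.168.22.100
#   arbitrator = 192.168.99.100
#   ticket = "apacheticket"

# Show the current Booth status on the local node, including ticket ownership
pcs booth status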
[ "yum install -y booth-site yum install -y booth-site yum install -y booth-site yum install -y booth-site", "yum install -y pcs booth-core booth-arbitrator", "firewall-cmd --add-port=9929/udp firewall-cmd --add-port=9929/tcp firewall-cmd --add-port=9929/udp --permanent firewall-cmd --add-port=9929/tcp --permanent", "pcs booth setup sites 192.168.11.100 192.168.22.100 arbitrators 192.168.99.100", "pcs booth ticket add apacheticket", "pcs booth sync", "pcs cluster auth cluster1-node1 pcs booth pull cluster1-node1", "pcs cluster auth cluster1-node1 pcs booth pull cluster1-node1 pcs booth sync", "pcs booth start pcs booth enable", "pcs booth create ip 192.168.11.100 pcs booth create ip 192.168.22.100", "pcs constraint ticket add apacheticket apachegroup pcs constraint ticket add apacheticket apachegroup", "pcs constraint ticket [show]", "pcs booth ticket grant apacheticket" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-multisite-haar
Metrics Store Installation Guide
Metrics Store Installation Guide Red Hat Virtualization 4.3 Installing Metrics Store for Red Hat Virtualization Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract A comprehensive guide to installing and configuring Metrics Store for Red Hat Virtualization. Metrics Store collects logs and metrics for Red Hat Virtualization 4.2 and later.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/metrics_store_installation_guide/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/monitoring_openshift_data_foundation/making-open-source-more-inclusive
Chapter 77. Language
Chapter 77. Language Only producer is supported The Language component allows you to send an Exchange to an endpoint which executes a script written in any of the supported Languages in Camel. Having a component that executes language scripts enables more dynamic routing capabilities. For example, by using the Routing Slip or Dynamic Router EIPs you can send messages to language endpoints where the script is dynamically defined as well. This component is provided out of the box in camel-core and hence no additional JARs are needed. You only have to include additional Camel components if the language of choice mandates it, such as using Groovy or JavaScript languages. 77.1. Dependencies When using language with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-language-starter</artifactId> </dependency> 77.2. URI format You can refer to an external resource for the script using the same notation as supported by the other Languages in Camel. 77.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 77.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 77.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type-safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 77.4. Component Options The Language component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 77.5. Endpoint Options The Language endpoint is configured using URI syntax: with the following path and query parameters: 77.5.1. Path Parameters (2 parameters) Name Description Default Type languageName (producer) Required Sets the name of the language to use. Enum values: bean constant exchangeProperty file groovy header javascript jsonpath mvel ognl ref simple spel sql terser tokenize xpath xquery xtokenize String resourceUri (producer) Path to the resource, or a reference to look up a bean in the Registry to use as the resource. String 77.5.2. Query Parameters (7 parameters) Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so imposes a potential security risk as this opens access to the full power of the CamelContext API. false boolean binary (producer) Whether the script is binary content or text content. By default the script is read as text content (e.g. java.lang.String). false boolean cacheScript (producer) Whether to cache the compiled script and reuse it. Notice that reusing the script can cause side effects from processing one Camel org.apache.camel.Exchange to the next org.apache.camel.Exchange. false boolean contentCache (producer) Sets whether to use resource content cache or not. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean script (producer) Sets the script to execute. String transform (producer) Whether or not the result of the script should be used as the message body. This option is true by default. true boolean 77.6. Message Headers The following message headers can be used to affect the behavior of the component. Header Description CamelLanguageScript The script to execute, provided in the header. Takes precedence over the script configured on the endpoint. 77.7. Examples For example, you can use the Simple language as a Message Translator to transform a message. You can also provide the script as a header as shown below. Here we use the XPath language to extract the text from the <foo> tag. Object out = producer.requestBodyAndHeader("language:xpath", "<foo>Hello World</foo>", Exchange.LANGUAGE_SCRIPT, "/foo/text()"); assertEquals("Hello World", out); 77.8. Loading scripts from resources You can specify a resource uri for a script to load in either the endpoint uri, or in the Exchange.LANGUAGE_SCRIPT header. The uri must start with one of the following schemes: file:, classpath:, or http:. By default the script is loaded once and cached. However, you can disable the contentCache option and have the script loaded on each evaluation. 
For example, if the file myscript.txt is changed on disk, then the updated script is used: You can refer to the resource in the same way as the other Languages in Camel, by prefixing it with "resource:" as shown below. 77.9. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.language.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.language.enabled Whether to enable auto configuration of the language component. This is enabled by default. Boolean camel.component.language.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-language-starter</artifactId> </dependency>", "language://languageName[:script][?options]", "language://languageName:resource:scheme:location][?options]", "language:languageName:resourceUri", "Object out = producer.requestBodyAndHeader(\"language:xpath\", \"<foo>Hello World</foo>\", Exchange.LANGUAGE_SCRIPT, \"/foo/text()\"); assertEquals(\"Hello World\", out);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-language-component-starter
probe::netdev.open
probe::netdev.open Name probe::netdev.open - Called when the device is opened Synopsis netdev.open Values dev_name The device that is going to be opened
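To see this probe in action, you can run a short SystemTap one-liner from the shell; it prints the dev_name value each time a network device is opened, for example when an interface is brought up. This is an illustrative sketch rather than part of the tapset itself.
# Print the device name whenever netdev.open fires; stop with Ctrl+C
stap -e 'probe netdev.open { printf("opening device %s\n", dev_name) }'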
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-netdev-open
Chapter 17. Bridging brokers
Chapter 17. Bridging brokers Bridges provide a method to connect two brokers, forwarding messages from one to the other. The following bridges are available: Core An example is provided that demonstrates a core bridge deployed on one broker, which consumes messages from a local queue and forwards them to an address on a second broker. See the core-bridge example that is located in the <install_dir> /examples/features/standard/ directory of your broker installation. Mirror See Chapter 16, Configuring a multi-site, fault-tolerant messaging system using broker connections Sender and receiver See Section 17.1, "Sender and receiver configurations for broker connections" Peer See Section 17.2, "Peer configurations for broker connections" Note The broker.xml element for Core bridges is bridge . The other bridging techniques use the <broker-connection> element. 17.1. Sender and receiver configurations for broker connections It is possible to connect a broker to another broker by creating a sender or receiver broker connection element in the <broker-connections> section of broker.xml . For a sender , the broker creates a message consumer on a queue that sends messages to another broker. For a receiver , the broker creates a message producer on an address that receives messages from another broker. Both elements function as a message bridge. However, there is no additional overhead required to process messages. Senders and receivers behave just like any other consumer or producer in a broker. Specific queues can be configured by senders or receivers. Wildcard expressions can be used to match senders and receivers to specific addresses or sets of addresses. When configuring a sender or receiver, the following properties can be set: address-match : Match the sender or receiver to a specific address or set of addresses, using a wildcard expression. queue-name : Configure the sender or receiver for a specific queue. Using address expressions: <broker-connections> <amqp-connection uri="tcp://HOST:PORT" name="other-server"> <sender address-match="queues.#"/> <!-- notice the local queues for remotequeues.# need to be created on this broker --> <receiver address-match="remotequeues.#"/> </amqp-connection> </broker-connections> <addresses> <address name="remotequeues.A"> <anycast> <queue name="remoteQueueA"/> </anycast> </address> <address name="queues.B"> <anycast> <queue name="localQueueB"/> </anycast> </address> </addresses> Using queue names: <broker-connections> <amqp-connection uri="tcp://HOST:PORT" name="other-server"> <receiver queue-name="remoteQueueA"/> <sender queue-name="localQueueB"/> </amqp-connection> </broker-connections> <addresses> <address name="remotequeues.A"> <anycast> <queue name="remoteQueueA"/> </anycast> </address> <address name="queues.B"> <anycast> <queue name="localQueueB"/> </anycast> </address> </addresses> Note Receivers can only be matched to a local queue that already exists. Therefore, if receivers are being used, ensure that queues are pre-created locally. Otherwise, the broker cannot match the remote queues and addresses. Note Do not create a sender and a receiver with the same destination because this creates an infinite loop of sends and receives. 17.2. Peer configurations for broker connections The broker can be configured as a peer which connects to a AMQ Interconnect instance and instructs it that the broker will act as a store-and-forward queue for a given AMQP waypoint address configured on that router. 
In this scenario, clients connect to a router to send and receive messages using a waypoint address, and the router routes these messages to or from the queue on the broker. This peer configuration creates a sender and receiver pair for each destination matched in the broker connections configuration on the broker. These pairs include configurations that enable the router to collaborate with the broker. This feature avoids the requirement for the router to initiate a connection and create auto-links. For more information about possible router configurations, see Using the AMQ Interconnect router . With a peer configuration, the same properties are present as when there are senders and receivers. For example, a configuration where queues with names beginning queue . act as storage for the matching router waypoint address would be: <broker-connections> <amqp-connection uri="tcp://HOST:PORT" name="router"> <peer address-match="queues.#"/> </amqp-connection> </broker-connections> <addresses> <address name="queues.A"> <anycast> <queue name="queues.A"/> </anycast> </address> <address name="queues.B"> <anycast> <queue name="queues.B"/> </anycast> </address> </addresses> There must be a matching address waypoint configuration on the router. This instructs it to treat the particular router addresses the broker attaches to as waypoints. For example, see the following prefix-based router address configuration: For more information on this option, see Using the AMQ Interconnect router . Note Do not use the peer option to connect directly to another broker. If you use this option to connect to another broker, all messages become immediately ready to consume, creating an infinite echo of sends and receives.
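The note in Section 17.1 about pre-creating local queues for receivers can also be satisfied from the command line rather than by editing broker.xml. The following sketch uses the artemis CLI from the broker instance directory to create the anycast queue for the remotequeues.A address used in the examples above; option names can differ between AMQ Broker versions, so confirm them with ./bin/artemis help queue create before relying on this.
# Pre-create the local anycast queue that a receiver with address-match="remotequeues.#" will use
./bin/artemis queue create --name remoteQueueA --address remotequeues.A --anycast --durable --auto-create-address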
[ "<broker-connections> <amqp-connection uri=\"tcp://HOST:PORT\" name=\"other-server\"> <sender address-match=\"queues.#\"/> <!-- notice the local queues for remotequeues.# need to be created on this broker --> <receiver address-match=\"remotequeues.#\"/> </amqp-connection> </broker-connections> <addresses> <address name=\"remotequeues.A\"> <anycast> <queue name=\"remoteQueueA\"/> </anycast> </address> <address name=\"queues.B\"> <anycast> <queue name=\"localQueueB\"/> </anycast> </address> </addresses>", "<broker-connections> <amqp-connection uri=\"tcp://HOST:PORT\" name=\"other-server\"> <receiver queue-name=\"remoteQueueA\"/> <sender queue-name=\"localQueueB\"/> </amqp-connection> </broker-connections> <addresses> <address name=\"remotequeues.A\"> <anycast> <queue name=\"remoteQueueA\"/> </anycast> </address> <address name=\"queues.B\"> <anycast> <queue name=\"localQueueB\"/> </anycast> </address> </addresses>", "<broker-connections> <amqp-connection uri=\"tcp://HOST:PORT\" name=\"router\"> <peer address-match=\"queues.#\"/> </amqp-connection> </broker-connections> <addresses> <address name=\"queues.A\"> <anycast> <queue name=\"queues.A\"/> </anycast> </address> <address name=\"queues.B\"> <anycast> <queue name=\"queues.B\"/> </anycast> </address> </addresses>", "address { prefix: queue waypoint: yes }" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/bridging-brokers-configuring
Chapter 12. Observability for hosted control planes
Chapter 12. Observability for hosted control planes You can gather metrics for hosted control planes by configuring metrics sets. The HyperShift Operator can create or delete monitoring dashboards in the management cluster for each hosted cluster that it manages. 12.1. Configuring metrics sets for hosted control planes Hosted control planes for Red Hat OpenShift Container Platform creates ServiceMonitor resources in each control plane namespace that allow a Prometheus stack to gather metrics from the control planes. The ServiceMonitor resources use metrics relabelings to define which metrics are included or excluded from a particular component, such as etcd or the Kubernetes API server. The number of metrics that are produced by control planes directly impacts the resource requirements of the monitoring stack that gathers them. Instead of producing a fixed number of metrics that apply to all situations, you can configure a metrics set that identifies a set of metrics to produce for each control plane. The following metrics sets are supported: Telemetry : These metrics are needed for telemetry. This set is the default set and is the smallest set of metrics. SRE : This set includes the necessary metrics to produce alerts and allow the troubleshooting of control plane components. All : This set includes all of the metrics that are produced by standalone OpenShift Container Platform control plane components. To configure a metrics set, set the METRICS_SET environment variable in the HyperShift Operator deployment by entering the following command: USD oc set env -n hypershift deployment/operator METRICS_SET=All 12.1.1. Configuring the SRE metrics set When you specify the SRE metrics set, the HyperShift Operator looks for a config map named sre-metric-set with a single key: config . The value of the config key must contain a set of RelabelConfigs that are organized by control plane component. 
You can specify the following components: etcd kubeAPIServer kubeControllerManager openshiftAPIServer openshiftControllerManager openshiftRouteControllerManager cvo olm catalogOperator registryOperator nodeTuningOperator controlPlaneOperator hostedClusterConfigOperator A configuration of the SRE metrics set is illustrated in the following example: kubeAPIServer: - action: "drop" regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_admission_controller_admission_latencies_seconds_.*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_admission_step_admission_latencies_seconds_.*" sourceLabels: ["__name__"] - action: "drop" regex: "scheduler_(e2e_scheduling_latency_microseconds|scheduling_algorithm_predicate_evaluation|scheduling_algorithm_priority_evaluation|scheduling_algorithm_preemption_evaluation|scheduling_algorithm_latency_microseconds|binding_latency_microseconds|scheduling_latency_seconds)" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_(request_count|request_latencies|request_latencies_summary|dropped_requests|storage_data_key_generation_latencies_microseconds|storage_transformation_failures_total|storage_transformation_latencies_microseconds|proxy_tunnel_sync_latency_secs)" sourceLabels: ["__name__"] - action: "drop" regex: "docker_(operations|operations_latency_microseconds|operations_errors|operations_timeout)" sourceLabels: ["__name__"] - action: "drop" regex: "reflector_(items_per_list|items_per_watch|list_duration_seconds|lists_total|short_watches_total|watch_duration_seconds|watches_total)" sourceLabels: ["__name__"] - action: "drop" regex: "etcd_(helper_cache_hit_count|helper_cache_miss_count|helper_cache_entry_count|request_cache_get_latencies_summary|request_cache_add_latencies_summary|request_latencies_summary)" sourceLabels: ["__name__"] - action: "drop" regex: "transformation_(transformation_latencies_microseconds|failures_total)" sourceLabels: ["__name__"] - action: "drop" regex: "network_plugin_operations_latency_microseconds|sync_proxy_rules_latency_microseconds|rest_client_request_latency_seconds" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)" sourceLabels: ["__name__", "le"] kubeControllerManager: - action: "drop" regex: "etcd_(debugging|disk|request|server).*" sourceLabels: ["__name__"] - action: "drop" regex: "rest_client_request_latency_seconds_(bucket|count|sum)" sourceLabels: ["__name__"] - action: "drop" regex: "root_ca_cert_publisher_sync_duration_seconds_(bucket|count|sum)" sourceLabels: ["__name__"] openshiftAPIServer: - action: "drop" regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_admission_controller_admission_latencies_seconds_.*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_admission_step_admission_latencies_seconds_.*" sourceLabels: ["__name__"] - action: "drop" regex: "apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)" sourceLabels: ["__name__", "le"] openshiftControllerManager: - action: "drop" regex: "etcd_(debugging|disk|request|server).*" sourceLabels: ["__name__"] openshiftRouteControllerManager: - action: "drop" regex: "etcd_(debugging|disk|request|server).*" sourceLabels: ["__name__"] olm: - action: "drop" regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] 
catalogOperator: - action: "drop" regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] cvo: - action: drop regex: "etcd_(debugging|disk|server).*" sourceLabels: ["__name__"] 12.2. Enabling monitoring dashboards in a hosted cluster To enable monitoring dashboards in a hosted cluster, complete the following steps: Procedure Create the hypershift-operator-install-flags config map in the local-cluster namespace, being sure to specify the --monitoring-dashboards flag in the data.installFlagsToAdd section. For example: kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: "--monitoring-dashboards" installFlagsToRemove: "" Wait a couple of minutes for the HyperShift Operator deployment in the hypershift namespace to be updated to include the following environment variable: - name: MONITORING_DASHBOARDS value: "1" When monitoring dashboards are enabled, for each hosted cluster that the HyperShift Operator manages, the Operator creates a config map named cp-<hosted_cluster_namespace>-<hosted_cluster_name> in the openshift-config-managed namespace, where <hosted_cluster_namespace> is the namespace of the hosted cluster and <hosted_cluster_name> is the name of the hosted cluster. As a result, a new dashboard is added in the administrative console of the management cluster. To view the dashboard, log in to the management cluster's console and go to the dashboard for the hosted cluster by clicking Observe Dashboards . Optional: To disable a monitoring dashboards in a hosted cluster, remove the --monitoring-dashboards flag from the hypershift-operator-install-flags config map. When you delete a hosted cluster, its corresponding dashboard is also deleted. 12.2.1. Dashboard customization To generate dashboards for each hosted cluster, the HyperShift Operator uses a template that is stored in the monitoring-dashboard-template config map in the Operator namespace ( hypershift ). This template contains a set of Grafana panels that contain the metrics for the dashboard. You can edit the content of the config map to customize the dashboards. When a dashboard is generated, the following strings are replaced with values that correspond to a specific hosted cluster: Name Description __NAME__ The name of the hosted cluster __NAMESPACE__ The namespace of the hosted cluster __CONTROL_PLANE_NAMESPACE__ The namespace where the control plane pods of the hosted cluster are placed __CLUSTER_ID__ The UUID of the hosted cluster, which matches the _id label of the hosted cluster metrics
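For reference, the following command-line sketch shows one way to apply the SRE metrics set described in Section 12.1.1. The file name sre-config.yaml and the use of the hypershift namespace for the sre-metric-set config map are assumptions for illustration; adjust them to match your HyperShift Operator deployment.
# Save the RelabelConfigs shown above (kubeAPIServer, kubeControllerManager, and so on) to a file,
# then create the config map with its single key named "config"
oc create configmap sre-metric-set -n hypershift --from-file=config=./sre-config.yaml

# Switch the HyperShift Operator to the SRE metrics set
oc set env -n hypershift deployment/operator METRICS_SET=SRE

# Confirm the METRICS_SET environment variable on the Operator deployment
oc set env -n hypershift deployment/operator --list | grep METRICS_SET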
[ "oc set env -n hypershift deployment/operator METRICS_SET=All", "kubeAPIServer: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_controller_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_step_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"scheduler_(e2e_scheduling_latency_microseconds|scheduling_algorithm_predicate_evaluation|scheduling_algorithm_priority_evaluation|scheduling_algorithm_preemption_evaluation|scheduling_algorithm_latency_microseconds|binding_latency_microseconds|scheduling_latency_seconds)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_(request_count|request_latencies|request_latencies_summary|dropped_requests|storage_data_key_generation_latencies_microseconds|storage_transformation_failures_total|storage_transformation_latencies_microseconds|proxy_tunnel_sync_latency_secs)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"docker_(operations|operations_latency_microseconds|operations_errors|operations_timeout)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"reflector_(items_per_list|items_per_watch|list_duration_seconds|lists_total|short_watches_total|watch_duration_seconds|watches_total)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"etcd_(helper_cache_hit_count|helper_cache_miss_count|helper_cache_entry_count|request_cache_get_latencies_summary|request_cache_add_latencies_summary|request_latencies_summary)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"transformation_(transformation_latencies_microseconds|failures_total)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"network_plugin_operations_latency_microseconds|sync_proxy_rules_latency_microseconds|rest_client_request_latency_seconds\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)\" sourceLabels: [\"__name__\", \"le\"] kubeControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"rest_client_request_latency_seconds_(bucket|count|sum)\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"root_ca_cert_publisher_sync_duration_seconds_(bucket|count|sum)\" sourceLabels: [\"__name__\"] openshiftAPIServer: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_controller_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_admission_step_admission_latencies_seconds_.*\" sourceLabels: [\"__name__\"] - action: \"drop\" regex: \"apiserver_request_duration_seconds_bucket;(0.15|0.25|0.3|0.35|0.4|0.45|0.6|0.7|0.8|0.9|1.25|1.5|1.75|2.5|3|3.5|4.5|6|7|8|9|15|25|30|50)\" sourceLabels: [\"__name__\", \"le\"] openshiftControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] openshiftRouteControllerManager: - action: \"drop\" regex: \"etcd_(debugging|disk|request|server).*\" sourceLabels: [\"__name__\"] olm: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] catalogOperator: - action: \"drop\" regex: \"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"] cvo: - action: drop regex: 
\"etcd_(debugging|disk|server).*\" sourceLabels: [\"__name__\"]", "kind: ConfigMap apiVersion: v1 metadata: name: hypershift-operator-install-flags namespace: local-cluster data: installFlagsToAdd: \"--monitoring-dashboards\" installFlagsToRemove: \"\"", "- name: MONITORING_DASHBOARDS value: \"1\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/hosted_control_planes/observability-for-hosted-control-planes
Product Guide
Product Guide Red Hat Virtualization 4.4 Introduction to Red Hat Virtualization 4.4 Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This document provides an introduction to Red Hat Virtualization.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/product_guide/index
Chapter 4. Installing
Chapter 4. Installing 4.1. Preparing your cluster for OpenShift Virtualization Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements. Important Installation method considerations You can use any installation method, including user-provisioned, installer-provisioned, or assisted installer, to deploy OpenShift Container Platform. However, the installation method and the cluster topology might affect OpenShift Virtualization functionality, such as snapshots or live migration . Red Hat OpenShift Data Foundation If you deploy OpenShift Virtualization with Red Hat OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details. IPv6 You cannot run OpenShift Virtualization on a single-stack IPv6 cluster. FIPS mode If you install your cluster in FIPS mode , no additional setup is required for OpenShift Virtualization. 4.1.1. Supported platforms You can use the following platforms with OpenShift Virtualization: On-premise bare metal servers. See Planning a bare metal cluster for OpenShift Virtualization . IBM Cloud(R) Bare Metal Servers. See Deploy OpenShift Virtualization on IBM Cloud(R) Bare Metal nodes . Important Installing OpenShift Virtualization on IBM Cloud(R) Bare Metal Servers is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Bare metal instances or servers offered by other cloud providers are not supported. 4.1.1.1. OpenShift Virtualization on AWS bare metal You can run OpenShift Virtualization on an Amazon Web Services (AWS) bare-metal OpenShift Container Platform cluster. Note OpenShift Virtualization is also supported on Red Hat OpenShift Service on AWS (ROSA) Classic clusters, which have the same configuration requirements as AWS bare-metal clusters. Before you set up your cluster, review the following summary of supported features and limitations: Installing You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes by editing the install-config.yaml file. For example, you can use the c5n.metal type value for a machine based on x86_64 architecture. For more information, see the OpenShift Container Platform documentation about installing on AWS. Accessing virtual machines (VMs) There is no change to how you access VMs by using the virtctl CLI tool or the OpenShift Container Platform web console. You can expose VMs by using a NodePort or LoadBalancer service. The load balancer approach is preferable because OpenShift Container Platform automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. When you remove the service, OpenShift Container Platform removes the load balancer and its associated resources. 
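As an illustration of the load balancer approach described above, the following sketch exposes a VM with the virtctl CLI tool. The VM name vm-example, the namespace, the service name, and the port are placeholders, and flag details can vary between virtctl versions, so verify them with virtctl expose --help.
# Create a LoadBalancer service in front of the VM; on AWS, OpenShift Container Platform
# provisions the load balancer and an associated security group automatically
virtctl expose vm vm-example -n example-namespace --name vm-example-lb --type LoadBalancer --port 22

# Watch for the external address assigned to the service
oc get service vm-example-lb -n example-namespace -w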
Networking You cannot use Single Root I/O Virtualization (SR-IOV) or bridge Container Network Interface (CNI) networks, including virtual LAN (VLAN). If your application requires a flat layer 2 network or control over the IP pool, consider using OVN-Kubernetes secondary overlay networks. Storage You can use any storage solution that is certified by the storage vendor to work with the underlying platform. Important AWS bare-metal and ROSA clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor. Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations. Consider using CSI storage, which supports ReadWriteMany (RWX), cloning, and snapshots to enable live migration, fast VM creation, and VM snapshots capabilities. Hosted control planes (HCPs) HCPs for OpenShift Virtualization are not currently supported on AWS infrastructure. Additional resources Connecting a virtual machine to an OVN-Kubernetes secondary network Exposing a virtual machine by using a service 4.1.2. Hardware and operating system requirements Review the following hardware and operating system requirements for OpenShift Virtualization. 4.1.2.1. CPU requirements Supported by Red Hat Enterprise Linux (RHEL) 9. See Red Hat Ecosystem Catalog for supported CPUs. Note If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines. See Configuring a required node affinity rule for details. Support for AMD and Intel 64-bit architectures (x86-64-v2). Support for Intel 64 or AMD64 CPU extensions. Intel VT or AMD-V hardware virtualization extensions enabled. NX (no execute) flag enabled. 4.1.2.2. Operating system requirements Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes. See About RHCOS for details. Note RHEL worker nodes are not supported. 4.1.2.3. Storage requirements Supported by OpenShift Container Platform. See Optimizing storage . You must create a default OpenShift Virtualization or OpenShift Container Platform storage class. The purpose of this is to address the unique storage needs of VM workloads and offer optimized performance, reliability, and user experience. If both OpenShift Virtualization and OpenShift Container Platform default storage classes exist, the OpenShift Virtualization class takes precedence when creating VM disks. Note You must specify a default storage class for the cluster. See Managing the default storage class . If the default storage class provisioner supports the ReadWriteMany (RWX) access mode , use the RWX mode for the associated persistent volumes for optimal performance. If the storage provisioner supports snapshots, there must be a VolumeSnapshotClass object associated with the default storage class. 4.1.2.3.1. About volume and access modes for virtual machine disks If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: ReadWriteMany (RWX) access mode is required for live migration. 
The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage. For example, if you use Red Hat OpenShift Data Foundation, Ceph RBD volumes are preferable to CephFS volumes. Important You cannot live migrate virtual machines with the following configurations: Storage volume with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs Do not set the evictionStrategy field to LiveMigrate for these virtual machines. 4.1.3. Live migration requirements Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. Note You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation: The default number of migrations that can run in parallel in the cluster is 5. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration. 4.1.4. Physical resource overhead requirements OpenShift Virtualization is an add-on to OpenShift Container Platform and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the OpenShift Container Platform requirements. Oversubscribing the physical resources in a cluster can affect performance. Important The numbers noted in this documentation are based on Red Hat's test methodology and setup. These numbers can vary based on your own individual setup and environments. Memory overhead Calculate the memory overhead values for OpenShift Virtualization by using the equations below. Cluster memory overhead Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes. Virtual machine memory overhead 1 Required for the processes that run in the virt-launcher pod. 2 Number of virtual CPUs requested by the virtual machine. 3 Number of virtual graphics cards requested by the virtual machine. 4 Additional memory overhead: If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device. If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB. If Trusted Platform Module (TPM) is enabled, add 53 MiB. CPU overhead Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup. Cluster CPU overhead OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes. Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads. 
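As a quick illustration of the per-virtual-machine memory overhead equation above, the following bash sketch plugs in example values. It is an approximation only: it evaluates just the per-VM equation, not the per-node infrastructure or worker overheads, and the additional overhead is left at zero unless you account for SR-IOV, GPU, SEV, or TPM devices.
#!/usr/bin/env bash
# Per-VM memory overhead ~ (1.002 x requested memory) + 218 MiB + 8 MiB x vCPUs + 16 MiB x graphics devices + extra
requested_mib=1024   # requested memory in MiB (example: 1 GiB)
vcpus=2              # number of virtual CPUs requested by the VM
graphics=1           # number of virtual graphics cards requested by the VM
extra_mib=0          # add 1024 per SR-IOV or GPU device, 256 for SEV, 53 for TPM

awk -v m="$requested_mib" -v c="$vcpus" -v g="$graphics" -v e="$extra_mib" \
    'BEGIN { printf "Approximate memory overhead for this VM: %.1f MiB\n", 1.002*m + 218 + 8*c + 16*g + e }'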
Virtual machine CPU overhead If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires. Storage overhead Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment. Cluster storage overhead 10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization. Virtual machine storage overhead Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself. Example As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores. 4.1.5. Single-node OpenShift differences You can install OpenShift Virtualization on single-node OpenShift. However, you should be aware that Single-node OpenShift does not support the following features: High availability Pod disruption Live migration Virtual machines or templates that have an eviction strategy configured Additional resources Glossary of common terms for OpenShift Container Platform storage 4.1.6. Object maximums You must consider the following tested object maximums when planning your cluster: OpenShift Container Platform object maximums . OpenShift Virtualization object maximums . 4.1.7. Cluster high-availability options You can configure one of the following high-availability (HA) options for your cluster: Automatic high availability for installer-provisioned infrastructure (IPI) is available by deploying machine health checks . Note In OpenShift Container Platform clusters installed using installer-provisioned infrastructure and with a properly configured MachineHealthCheck resource, if a node fails the machine health check and becomes unavailable to the cluster, it is recycled. What happens with VMs that ran on the failed node depends on a series of conditions. See Run strategies for more detailed information about the potential outcomes and how run strategies affect those outcomes. Automatic high availability for both IPI and non-IPI is available by using the Node Health Check Operator on the OpenShift Container Platform cluster to deploy the NodeHealthCheck controller. The controller identifies unhealthy nodes and uses a remediation provider, such as the Self Node Remediation Operator or Fence Agents Remediation Operator, to remediate the unhealthy nodes. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. High availability for any platform is available by using either a monitoring system or a qualified human to monitor node availability. When a node is lost, shut it down and run oc delete node <lost_node> . Note Without an external monitoring system or a qualified human monitoring node health, virtual machines lose high availability. 4.2. Installing OpenShift Virtualization Install OpenShift Virtualization to add virtualization functionality to your OpenShift Container Platform cluster. 
Important If you install OpenShift Virtualization in a restricted environment with no internet connectivity, you must configure Operator Lifecycle Manager (OLM) for restricted networks . If you have limited internet connectivity, you can configure proxy support in OLM to access the OperatorHub. 4.2.1. Installing the OpenShift Virtualization Operator Install the OpenShift Virtualization Operator by using the OpenShift Container Platform web console or the command line. 4.2.1.1. Installing the OpenShift Virtualization Operator by using the web console You can deploy the OpenShift Virtualization Operator by using the OpenShift Container Platform web console. Prerequisites Install OpenShift Container Platform 4.14 on your cluster. Log in to the OpenShift Container Platform web console as a user with cluster-admin permissions. Procedure From the Administrator perspective, click Operators OperatorHub . In the Filter by keyword field, type Virtualization . Select the OpenShift Virtualization Operator tile with the Red Hat source label. Read the information about the Operator and click Install . On the Install Operator page: Select stable from the list of available Update Channel options. This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. For Installed Namespace , ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist. Warning Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail. For Approval Strategy , it is highly recommended that you select Automatic , which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel. While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic . Warning Because OpenShift Virtualization is only supported when used with the corresponding OpenShift Container Platform version, missing OpenShift Virtualization updates can cause your cluster to become unsupported. Click Install to make the Operator available to the openshift-cnv namespace. When the Operator installs successfully, click Create HyperConverged . Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components. Click Create to launch OpenShift Virtualization. Verification Navigate to the Workloads Pods page and monitor the OpenShift Virtualization pods until they are all Running . After all the pods display the Running state, you can use OpenShift Virtualization. 4.2.1.2. Installing the OpenShift Virtualization Operator by using the command line Subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster. 4.2.1.2.1. Subscribing to the OpenShift Virtualization catalog by using the CLI Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators. To subscribe, configure Namespace , OperatorGroup , and Subscription objects by applying a single manifest to your cluster. 
Prerequisites Install OpenShift Container Platform 4.14 on your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file that contains the following manifest: apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.14.11 channel: "stable" 1 1 Using the stable channel ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. Create the required Namespace , OperatorGroup , and Subscription objects for OpenShift Virtualization by running the following command: USD oc apply -f <file name>.yaml Note You can configure certificate rotation parameters in the YAML file. 4.2.1.2.2. Deploying the OpenShift Virtualization Operator by using the CLI You can deploy the OpenShift Virtualization Operator by using the oc CLI. Prerequisites Subscribe to the OpenShift Virtualization catalog in the openshift-cnv namespace. Log in as a user with cluster-admin privileges. Procedure Create a YAML file that contains the following manifest: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: Deploy the OpenShift Virtualization Operator by running the following command: USD oc apply -f <file_name>.yaml Verification Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command: USD watch oc get csv -n openshift-cnv The following output displays if deployment was successful: Example output NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.14.11 OpenShift Virtualization 4.14.11 Succeeded 4.2.2. steps The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first. 4.3. Uninstalling OpenShift Virtualization You uninstall OpenShift Virtualization by using the web console or the command line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources. 4.3.1. Uninstalling OpenShift Virtualization by using the web console You uninstall OpenShift Virtualization by using the web console to perform the following tasks: Delete the HyperConverged CR . Delete the OpenShift Virtualization Operator . Delete the openshift-cnv namespace . Delete the OpenShift Virtualization custom resource definitions (CRDs) . Important You must first delete all virtual machines , and virtual machine instances . You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster. 4.3.1.1. Deleting the HyperConverged custom resource To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Select the OpenShift Virtualization Operator. 
Click the OpenShift Virtualization Deployment tab. Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged . Click Delete in the confirmation window. 4.3.1.2. Deleting Operators from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites You have access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 4.3.1.3. Deleting a namespace using the web console You can delete a namespace by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to Administration Namespaces . Locate the namespace that you want to delete in the list of namespaces. On the far right side of the namespace listing, select Delete Namespace from the Options menu . When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field. Click Delete . 4.3.1.4. Deleting OpenShift Virtualization custom resource definitions You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. Procedure Navigate to Administration CustomResourceDefinitions . Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs. Click the Options menu beside each CRD and select Delete CustomResourceDefinition . 4.3.2. Uninstalling OpenShift Virtualization by using the CLI You can uninstall OpenShift Virtualization by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster. 
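Before starting the CLI procedure below, you can confirm that no virtual machine workloads remain. The following sketch uses the vm and vmi short names for the KubeVirt resources; if your cluster does not recognize the short names, use the full virtualmachines and virtualmachineinstances resource names instead.
# List any remaining virtual machines and virtual machine instances across all namespaces;
# both lists should be empty before you uninstall OpenShift Virtualization
oc get vm,vmi --all-namespaces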
Procedure Delete the HyperConverged custom resource: USD oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv Delete the OpenShift Virtualization Operator subscription: USD oc delete subscription kubevirt-hyperconverged -n openshift-cnv Delete the OpenShift Virtualization ClusterServiceVersion resource: USD oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Delete the OpenShift Virtualization namespace: USD oc delete namespace openshift-cnv List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option: USD oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Example output Delete the CRDs by running the oc delete crd command without the dry-run option: USD oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Additional resources Deleting virtual machines Deleting virtual machine instances
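As an optional final check after completing the CLI procedure above, you can confirm that the namespace and the labeled CRDs are gone. This is only a sketch that reuses the names and label selector from the procedure:

oc get namespace openshift-cnv
oc get crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv

The first command should eventually return a NotFound error (namespace deletion can take a few minutes), and the second should list no resources.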
[ "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "Memory overhead per infrastructure node ~ 150 MiB", "Memory overhead per worker node ~ 360 MiB", "Memory overhead per virtual machine ~ (1.002 x requested memory) + 218 MiB \\ 1 + 8 MiB x (number of vCPUs) \\ 2 + 16 MiB x (number of graphics devices) \\ 3 + (additional memory overhead) 4", "CPU overhead for infrastructure nodes ~ 4 cores", "CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine", "Aggregated storage overhead per node ~ 10 GiB", "apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v4.14.11 channel: \"stable\" 1", "oc apply -f <file name>.yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:", "oc apply -f <file_name>.yaml", "watch oc get csv -n openshift-cnv", "NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.14.11 OpenShift Virtualization 4.14.11 Succeeded", "oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv", "oc delete subscription kubevirt-hyperconverged -n openshift-cnv", "oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc delete namespace openshift-cnv", "oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "customresourcedefinition.apiextensions.k8s.io \"cdis.cdi.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hostpathprovisioners.hostpathprovisioner.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hyperconvergeds.hco.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"kubevirts.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"ssps.ssp.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"tektontasks.tektontasks.kubevirt.io\" deleted (dry run)", "oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/virtualization/installing
Chapter 2. Upgrading the Red Hat Quay Operator Overview
Chapter 2. Upgrading the Red Hat Quay Operator Overview The Red Hat Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Red Hat Quay and the components that it manages. There is no field on the QuayRegistry custom resource which sets the version of Red Hat Quay to deploy ; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Red Hat Quay on Kubernetes. 2.1. Operator Lifecycle Manager The Red Hat Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM) . When creating a Subscription with the default approvalStrategy: Automatic , OLM will automatically upgrade the Red Hat Quay Operator whenever a new version becomes available. Warning When the Red Hat Quay Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the OperatorHub page for the Red Hat Quay Operator during installation. It can also be found in the Red Hat Quay Operator Subscription object by the approvalStrategy field. Choosing Automatic means that your Red Hat Quay Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the Manual approval strategy should be selected. 2.2. Upgrading the Red Hat Quay Operator The standard approach for upgrading installed Operators on OpenShift Container Platform is documented at Upgrading installed Operators . In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows: 3.0.5 3.1.3 3.1.3 3.2.2 3.2.2 3.3.4 3.3.4 3.4.z 3.4.z 3.5.z This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported for Red Hat Quay 3.13: 3.11.z 3.13.z 3.12.z 3.13.z For users on standalone deployments of Red Hat Quay wanting to upgrade to 3.13, see the Standalone upgrade guide. 2.2.1. Upgrading Red Hat Quay to version 3.13 To update Red Hat Quay from one minor version to the , for example, 3.12.z 3.13, you must change the update channel for the Red Hat Quay Operator. Procedure In the OpenShift Container Platform Web Console, navigate to Operators Installed Operators . Click on the Red Hat Quay Operator. Navigate to the Subscription tab. Under Subscription details click Update channel . Select stable-3.13 Save . Check the progress of the new installation under Upgrade status . Wait until the upgrade status changes to 1 installed before proceeding. In your OpenShift Container Platform cluster, navigate to Workloads Pods . Existing pods should be terminated, or in the process of being terminated. Wait for the following pods, which are responsible for upgrading the database and alembic migration of existing data, to spin up: clair-postgres-upgrade , quay-postgres-upgrade , and quay-app-upgrade . 
After the clair-postgres-upgrade , quay-postgres-upgrade , and quay-app-upgrade pods are marked as Completed , the remaining pods for your Red Hat Quay deployment spin up. This takes approximately ten minutes. Verify that the quay-database uses the postgresql-13 image, and clair-postgres pods now uses the postgresql-15 image. After the quay-app pod is marked as Running , you can reach your Red Hat Quay registry. 2.2.2. Upgrading to the minor release version For z stream upgrades, for example, 3.12.1 3.12.2, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z stream upgrade depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic , the Red Hat Quay Operator upgrades automatically to the newest z stream. This results in automatic, rolling Red Hat Quay updates to newer z streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin. 2.2.3. Upgrading from Red Hat Quay 3.12 to 3.13 With Red Hat Quay 3.13, the volumeSize parameter has been implemented for use with the clairpostgres component of the QuayRegistry custom resource definition (CRD). This replaces the volumeSize parameter that was previously used for the clair component of the same CRD. If your Red Hat Quay 3.12 QuayRegistry custom resource definition (CRD) implemented a volume override for the clair component, you must ensure that the volumeSize field is included under the clairpostgres component of the QuayRegistry CRD. Important Failure to move volumeSize from the clair component to the clairpostgres component will result in a failed upgrade to version 3.13. For example: spec: components: - kind: clair managed: true - kind: clairpostgres managed: true overrides: volumeSize: <volume_size> 2.2.4. Changing the update channel for the Red Hat Quay Operator The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Red Hat Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Red Hat Quay Operator. For subscriptions with an Automatic approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators. 2.2.5. Manually approving a pending Operator upgrade If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Red Hat Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. In the Subscription tab for the Red Hat Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade. The following image shows the Subscription tab in the UI, including the update Channel , the Approval strategy, the Upgrade status and the InstallPlan : The list of Installed Operators provides a high-level summary of the current Quay installation: 2.3. Upgrading a QuayRegistry resource When the Red Hat Quay Operator starts, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. 
When it finds one, the following logic is used: If status.currentVersion is unset, reconcile as normal. If status.currentVersion equals the Operator version, reconcile as normal. If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator's version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone. 2.4. Upgrading a QuayEcosystem Upgrades are supported from versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry , use the following procedure. Procedure Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem . USD oc edit quayecosystem <quayecosystemname> metadata: labels: quay-operator/migrate: "true" Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem . The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true" . After the status.registryEndpoint of the new QuayRegistry is set, access Red Hat Quay and confirm that all data and settings were migrated successfully. If everything works correctly, you can delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources. 2.4.1. Reverting QuayEcosystem Upgrade If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry , follow these steps to revert back to using the QuayEcosystem : Procedure Delete the QuayRegistry using either the UI or kubectl : USD kubectl delete -n <namespace> quayregistry <quayecosystem-name> If external access was provided using a Route , change the Route to point back to the original Service using the UI or kubectl . Note If your QuayEcosystem was managing the PostgreSQL database, the upgrade process will migrate your data to a new PostgreSQL database managed by the upgraded Operator. Your old database will not be changed or removed but Red Hat Quay will no longer use it once the migration is complete. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component. 2.4.2. Supported QuayEcosystem Configurations for Upgrades The Red Hat Quay Operator reports errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported. All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Red Hat Quay's config.yaml file. Database Ephemeral database not supported ( volumeSize field must be set). Redis Nothing special needed. External Access Only passthrough Route access is supported for automatic migration. Manual migration required for other methods. 
LoadBalancer without custom hostname: After the QuayEcosystem is marked with label "quay-operator/migration-complete": "true" , delete the metadata.ownerReferences field from existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with metadata.name format <QuayEcosystem-name>-quay-app . Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service ; the Quay Operator will not manage it. LoadBalancer / NodePort / Ingress with custom hostname: A new Service of type LoadBalancer will be created with metadata.name format <QuayEcosystem-name>-quay-app . Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service . Clair Nothing special needed. Object Storage QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported. Repository Mirroring Nothing special needed.
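For the LoadBalancer without custom hostname case described above, the selector inspection and the ownerReferences removal can also be performed from the command line. The following commands are only a sketch; the namespace and Service names are placeholders that you must adapt to your deployment:

oc -n <namespace> get service <quayecosystem-name>-quay-app -o jsonpath='{.spec.selector}'
oc -n <namespace> patch service <old-service-name> --type=json -p '[{"op": "remove", "path": "/metadata/ownerReferences"}]'

The first command shows the selector of the new Service so that you can copy it onto the old Service; the second removes the metadata.ownerReferences field so that deleting the QuayEcosystem does not garbage collect the old Service and its load balancer.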
[ "spec: components: - kind: clair managed: true - kind: clairpostgres managed: true overrides: volumeSize: <volume_size>", "oc edit quayecosystem <quayecosystemname>", "metadata: labels: quay-operator/migrate: \"true\"", "kubectl delete -n <namespace> quayregistry <quayecosystem-name>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/upgrade_red_hat_quay/operator-upgrade
Chapter 4. Post-installation integrations
Chapter 4. Post-installation integrations After installing RHTAP, complete the following tasks to ensure that RHTAP works properly. 4.1. (Optional) Integrating Quay into ACS If you are using your own Quay instance instead of Quay.io, or if you plan to use private repositories in Quay, then you must integrate Quay into ACS. This ensures ACS has access to the repositories you use in Quay. Procedure Go to your ACS instance. If you did not have ACS before installing RHTAP, you can find the access details in the rhtap-cli deploy command output, which you saved to ~/install_values.txt at the end of the installation procedure. Follow the instructions in the Red Hat Advanced Cluster Security for Kubernetes 4.6 documentation to integrate Quay into ACS.
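For the first step of the procedure above, a plain text search of the saved installer output is often the quickest way to locate the ACS access details. The following command is only an illustration and assumes that the relevant entries in ~/install_values.txt contain the string acs:

grep -i -A 2 'acs' ~/install_values.txt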
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/installing_red_hat_trusted_application_pipeline/post-install
Chapter 18. CXF
Chapter 18. CXF Both producer and consumer are supported The CXF component provides integration with Apache CXF for connecting to JAX-WS services hosted in CXF. Tip When using CXF in streaming modes (see DataFormat option), then also read about Stream caching. Maven users must add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cxf-soap</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> 18.1. URI format There are two URI formats for this endpoint: cxfEndpoint and someAddress . Where cxfEndpoint represents a bean ID that references a bean in the Spring bean registry. With this URI format, most of the endpoint details are specified in the bean definition. Where someAddress specifies the CXF endpoint's address. With this URI format, most of the endpoint details are specified using options. For either style above, you can append options to the URI as follows: 18.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 18.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 18.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 18.3. Component Options The CXF component supports 6 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean allowStreaming (advanced) This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy useGlobalSslContextParameters (security) Enable usage of global SSL context parameters. false boolean 18.4. Endpoint Options The CXF endpoint is configured using URI syntax: with the following path and query parameters: 18.4.1. Path Parameters (2 parameters) Name Description Default Type beanId (common) To lookup an existing configured CxfEndpoint. Must used bean: as prefix. String address (service) The service publish address. String 18.4.2. Query Parameters (35 parameters) Name Description Default Type dataFormat (common) The data type messages supported by the CXF endpoint. Enum values: PAYLOAD RAW MESSAGE CXF_MESSAGE POJO POJO DataFormat wrappedStyle (common) The WSDL style that describes how parameters are represented in the SOAP body. If the value is false, CXF will chose the document-literal unwrapped style, If the value is true, CXF will chose the document-literal wrapped style. Boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern cookieHandler (producer) Configure a cookie handler to maintain a HTTP session. CookieHandler defaultOperationName (producer) This option will set the default operationName that will be used by the CxfProducer which invokes the remote service. String defaultOperationNamespace (producer) This option will set the default operationNamespace that will be used by the CxfProducer which invokes the remote service. String hostnameVerifier (producer) The hostname verifier to be used. Use the # notation to reference a HostnameVerifier from the registry. 
HostnameVerifier lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean sslContextParameters (producer) The Camel SSL setting reference. Use the # notation to reference the SSL Context. SSLContextParameters wrapped (producer) Which kind of operation that CXF endpoint producer will invoke. false boolean synchronous (producer (advanced)) Sets whether synchronous processing should be strictly used. false boolean allowStreaming (advanced) This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean bus (advanced) To use a custom configured CXF Bus. Bus continuationTimeout (advanced) This option is used to set the CXF continuation timeout which could be used in CxfConsumer by default when the CXF server is using Jetty or Servlet transport. 30000 long cxfBinding (advanced) To use a custom CxfBinding to control the binding between Camel Message and CXF Message. CxfBinding cxfConfigurer (advanced) This option could apply the implementation of org.apache.camel.component.cxf.CxfEndpointConfigurer which supports to configure the CXF endpoint in programmatic way. User can configure the CXF server and client by implementing configure{ServerClient} method of CxfEndpointConfigurer. CxfConfigurer defaultBus (advanced) Will set the default bus when CXF endpoint create a bus by itself. false boolean headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy mergeProtocolHeaders (advanced) Whether to merge protocol headers. If enabled then propagating headers between Camel and CXF becomes more consistent and similar. For more details see CAMEL-6393. false boolean mtomEnabled (advanced) To enable MTOM (attachments). This requires to use POJO or PAYLOAD data format mode. false boolean properties (advanced) To set additional CXF options using the key/value pairs from the Map. For example to turn on stacktraces in SOAP faults, properties.faultStackTraceEnabled=true. Map skipPayloadMessagePartCheck (advanced) Sets whether SOAP message validation should be disabled. false boolean loggingFeatureEnabled (logging) This option enables CXF Logging Feature which writes inbound and outbound SOAP messages to log. false boolean loggingSizeLimit (logging) To limit the total size of number of bytes the logger will output when logging feature has been enabled and -1 for no limit. 49152 int skipFaultLogging (logging) This option controls whether the PhaseInterceptorChain skips logging the Fault that it catches. false boolean password (security) This option is used to set the basic authentication information of password for the CXF client. String username (security) This option is used to set the basic authentication information of username for the CXF client. String bindingId (service) The bindingId for the service model to use. 
String portName (service) The endpoint name this service is implementing, it maps to the wsdl:portname. In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope. String publishedEndpointUrl (service) This option can override the endpointUrl that published from the WSDL which can be accessed with service address url plus wsd. String serviceClass (service) The class name of the SEI (Service Endpoint Interface) class which could have JSR181 annotation or not. Class serviceName (service) The service name this service is implementing, it maps to the wsdl:servicename. String wsdlURL (service) The location of the WSDL. Can be on the classpath, file system, or be hosted remotely. String The serviceName and portName are QNames , so if you provide them be sure to prefix them with their {namespace} as shown in the examples above. 18.4.3. Descriptions of the dataformats In Apache Camel, the Camel CXF component is the key to integrating routes with Web services. You can use the Camel CXF component to create a CXF endpoint, which can be used in either of the following ways: Consumer - (at the start of a route) represents a Web service instance, which integrates with the route. The type of payload injected into the route depends on the value of the endpoint's dataFormat option. Producer - (at other points in the route) represents a WS client proxy, which converts the current exchange object into an operation invocation on a remote Web service. The format of the current exchange must match the endpoint's dataFormat setting. DataFormat Description POJO POJOs (Plain old Java objects) are the Java parameters to the method being invoked on the target server. Both Protocol and Logical JAX-WS handlers are supported. PAYLOAD PAYLOAD is the message payload (the contents of the soap:body ) after message configuration in the CXF endpoint is applied. Only Protocol JAX-WS handler is supported. Logical JAX-WS handler is not supported. RAW RAW mode provides the raw message stream that is received from the transport layer. It is not possible to touch or change the stream, some of the CXF interceptors will be removed if you are using this kind of DataFormat, so you can't see any soap headers after the camel-cxf consumer. JAX-WS handler is not supported. CXF_MESSAGE CXF_MESSAGE allows for invoking the full capabilities of CXF interceptors by converting the message from the transport layer into a raw SOAP message You can determine the data format mode of an exchange by retrieving the exchange property, CamelCXFDataFormat . The exchange key constant is defined in org.apache.camel.component.cxf.common.message.CxfConstants.DATA_FORMAT_PROPERTY . 18.4.4. How to enable CXF's LoggingOutInterceptor in RAW mode CXF's LoggingOutInterceptor outputs outbound message that goes on the wire to logging system (Java Util Logging). Since the LoggingOutInterceptor is in PRE_STREAM phase (but PRE_STREAM phase is removed in RAW mode), you have to configure LoggingOutInterceptor to be run during the WRITE phase. The following is an example. 
@Bean public CxfEndpoint serviceEndpoint(LoggingOutInterceptor loggingOutInterceptor) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setAddress("http://localhost:" + port + "/services" + SERVICE_ADDRESS); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.HelloService.class); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "RAW"); cxfEndpoint.setProperties(properties); cxfEndpoint.getOutInterceptors().add(loggingOutInterceptor); return cxfEndpoint; } @Bean public LoggingOutInterceptor loggingOutInterceptor() { LoggingOutInterceptor logger = new LoggingOutInterceptor("write"); return logger; } 18.4.5. Description of relayHeaders option There are in-band and out-of-band on-the-wire headers from the perspective of a JAXWS WSDL-first developer. The in-band headers are headers that are explicitly defined as part of the WSDL binding contract for an endpoint such as SOAP headers. The out-of-band headers are headers that are serialized over the wire, but are not explicitly part of the WSDL binding contract. Headers relaying/filtering is bi-directional. When a route has a CXF endpoint and the developer needs to have on-the-wire headers, such as SOAP headers, be relayed along the route to be consumed say by another JAXWS endpoint, then relayHeaders should be set to true , which is the default value. 18.4.6. Available only in POJO mode The relayHeaders=true expresses an intent to relay the headers. The actual decision on whether a given header is relayed is delegated to a pluggable instance that implements the MessageHeadersRelay interface. A concrete implementation of MessageHeadersRelay will be consulted to decide if a header needs to be relayed or not. There is already an implementation of SoapMessageHeadersRelay which binds itself to well-known SOAP name spaces. Currently only out-of-band headers are filtered, and in-band headers will always be relayed when relayHeaders=true . If there is a header on the wire whose name space is unknown to the runtime, then a fall back DefaultMessageHeadersRelay will be used, which simply allows all headers to be relayed. The relayHeaders=false setting specifies that all headers in-band and out-of-band should be dropped. You can plugin your own MessageHeadersRelay implementations overriding or adding additional ones to the list of relays. In order to override a preloaded relay instance just make sure that your MessageHeadersRelay implementation services the same name spaces as the one you looking to override. Also note, that the overriding relay has to service all of the name spaces as the one you looking to override, or else a runtime exception on route start up will be thrown as this would introduce an ambiguity in name spaces to relay instance mappings. <cxf:cxfEndpoint ...> <cxf:properties> <entry key="org.apache.camel.cxf.message.headers.relays"> <list> <ref bean="customHeadersRelay"/> </list> </entry> </cxf:properties> </cxf:cxfEndpoint> <bean id="customHeadersRelay" class="org.apache.camel.component.cxf.soap.headers.CustomHeadersRelay"/> Take a look at the tests that show how you would be able to relay/drop headers here: CxfMessageHeadersRelayTest POJO and PAYLOAD modes are supported. In POJO mode, only out-of-band message headers are available for filtering as the in-band headers have been processed and removed from header list by CXF. The in-band headers are incorporated into the MessageContentList in POJO mode. 
The camel-cxf component does not make any attempt to remove the in-band headers from the MessageContentList . If filtering of in-band headers is required, please use PAYLOAD mode or plug in a (pretty straightforward) CXF interceptor/JAXWS Handler to the CXF endpoint. The Message Header Relay mechanism has been merged into CxfHeaderFilterStrategy . The relayHeaders option, its semantics, and default value remain the same, but it is a property of CxfHeaderFilterStrategy . Here is an example of configuring it. @Bean public HeaderFilterStrategy dropAllMessageHeadersStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); headerFilterStrategy.setRelayHeaders(false); return headerFilterStrategy; } Then, your endpoint can reference the CxfHeaderFilterStrategy . @Bean public CxfEndpoint routerNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpoint"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } @Bean public CxfEndpoint serviceNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("http://localhost:" + port + "/services/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpointBackend"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } Then configure the route as follows: from("cxf:bean:routerNoRelayEndpoint") .to("cxf:bean:serviceNoRelayEndpoint"); The MessageHeadersRelay interface has changed slightly and has been renamed to MessageHeaderFilter . It is a property of CxfHeaderFilterStrategy . Here is an example of configuring user defined Message Header Filters: @Bean public HeaderFilterStrategy customMessageFilterStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); List<MessageHeaderFilter> headerFilterList = new ArrayList<MessageHeaderFilter>(); headerFilterList.add(new SoapMessageHeaderFilter()); headerFilterList.add(new CustomHeaderFilter()); headerFilterStrategy.setMessageHeaderFilters(headerFilterList); return headerFilterStrategy; } In addition to relayHeaders , the following properties can be configured in CxfHeaderFilterStrategy .
Name Required Description relayHeaders No All message headers will be processed by Message Header Filters Type : boolean Default : true relayAllMessageHeaders No All message headers will be propagated (without processing by Message Header Filters) Type : boolean Default : false allowFilterNamespaceClash No If two filters overlap in activation namespace, the property control how it should be handled. If the value is true , last one wins. If the value is false , it will throw an exception Type : boolean Default : false 18.5. Configure the CXF endpoints with Spring You can configure the CXF endpoint with the Spring configuration file shown below, and you can also embed the endpoint into the camelContext tags. When you are invoking the service endpoint, you can set the operationName and operationNamespace headers to explicitly state which operation you are calling. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:cxf="http://camel.apache.org/schema/cxf/jaxws" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <cxf:cxfEndpoint id="routerEndpoint" address="http://localhost:9003/CamelContext/RouterPort" serviceClass="org.apache.hello_world_soap_http.GreeterImpl"/> <cxf:cxfEndpoint id="serviceEndpoint" address="http://localhost:9000/SoapContext/SoapPort" wsdlURL="testutils/hello_world.wsdl" serviceClass="org.apache.hello_world_soap_http.Greeter" endpointName="s:SoapPort" serviceName="s:SOAPService" xmlns:s="http://apache.org/hello_world_soap_http" /> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="cxf:bean:routerEndpoint" /> <to uri="cxf:bean:serviceEndpoint" /> </route> </camelContext> </beans> Be sure to include the JAX-WS schemaLocation attribute specified on the root beans element. This allows CXF to validate the file and is required. Also note the namespace declarations at the end of the <cxf:cxfEndpoint/> tag. These declarations are required because the combined {namespace}localName syntax is presently not supported for this tag's attribute values. The cxf:cxfEndpoint element supports many additional attributes: Name Value PortName The endpoint name this service is implementing, it maps to the wsdl:port@name . In the format of ns:PORT_NAME where ns is a namespace prefix valid at this scope. serviceName The service name this service is implementing, it maps to the wsdl:service@name . In the format of ns:SERVICE_NAME where ns is a namespace prefix valid at this scope. wsdlURL The location of the WSDL. Can be on the classpath, file system, or be hosted remotely. bindingId The bindingId for the service model to use. address The service publish address. bus The bus name that will be used in the JAX-WS endpoint. serviceClass The class name of the SEI (Service Endpoint Interface) class which could have JSR181 annotation or not. It also supports many child elements: Name Value cxf:inInterceptors The incoming interceptors for this endpoint. A list of <bean> or <ref> . cxf:inFaultInterceptors The incoming fault interceptors for this endpoint. A list of <bean> or <ref> . cxf:outInterceptors The outgoing interceptors for this endpoint. A list of <bean> or <ref> . 
cxf:outFaultInterceptors The outgoing fault interceptors for this endpoint. A list of <bean> or <ref> . cxf:properties A properties map which should be supplied to the JAX-WS endpoint. See below. cxf:handlers A JAX-WS handler list which should be supplied to the JAX-WS endpoint. See below. cxf:dataBinding You can specify the which DataBinding will be use in the endpoint. This can be supplied using the Spring <bean class="MyDataBinding"/> syntax. cxf:binding You can specify the BindingFactory for this endpoint to use. This can be supplied using the Spring <bean class="MyBindingFactory"/> syntax. cxf:features The features that hold the interceptors for this endpoint. A list of beans or refs cxf:schemaLocations The schema locations for endpoint to use. A list of schemaLocations cxf:serviceFactory The service factory for this endpoint to use. This can be supplied using the Spring <bean class="MyServiceFactory"/> syntax You can find more advanced examples that show how to provide interceptors, properties and handlers on the CXF JAX-WS Configuration page . Note You can use cxf:properties to set the camel-cxf endpoint's dataFormat and setDefaultBus properties from spring configuration file. <cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/router" serviceClass="org.apache.camel.component.cxf.HelloService" endpointName="s:PortName" serviceName="s:ServiceName" xmlns:s="http://www.example.com/test"> <cxf:properties> <entry key="dataFormat" value="RAW"/> <entry key="setDefaultBus" value="true"/> </cxf:properties> </cxf:cxfEndpoint> Note In SpringBoot, you can use Spring XML files to configure camel-cxf and use code similar to the following example to create XML configured beans: @ImportResource({ "classpath:spring-configuration.xml" }) However, the use of Java code configured beans (as shown in other examples) is best practice in SpringBoot. 18.6. How to make the camel-cxf component use log4j instead of java.util.logging CXF's default logger is java.util.logging . If you want to change it to log4j, proceed as follows. Create a file, in the classpath, named META-INF/cxf/org.apache.cxf.logger . This file should contain the fully-qualified name of the class, org.apache.cxf.common.logging.Log4jLogger , with no comments, on a single line. 18.7. How to let camel-cxf response start with xml processing instruction If you are using some SOAP client such as PHP, you will get this kind of error, because CXF doesn't add the XML processing instruction <?xml version="1.0" encoding="utf-8"?> : To resolve this issue, you just need to tell StaxOutInterceptor to write the XML start document for you, as in the WriteXmlDeclarationInterceptor below: public class WriteXmlDeclarationInterceptor extends AbstractPhaseInterceptor<SoapMessage> { public WriteXmlDeclarationInterceptor() { super(Phase.PRE_STREAM); addBefore(StaxOutInterceptor.class.getName()); } public void handleMessage(SoapMessage message) throws Fault { message.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE); } } As an alternative you can add a message header for it as demonstrated in CxfConsumerTest : // set up the response context which force start document Map<String, Object> map = new HashMap<String, Object>(); map.put("org.apache.cxf.stax.force-start-document", Boolean.TRUE); exchange.getOut().setHeader(Client.RESPONSE_CONTEXT, map); 18.8. How to override the CXF producer address from message header The camel-cxf producer supports to override the target service address by setting a message header CamelDestinationOverrideUrl . 
// set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress())); 18.9. How to consume a message from a camel-cxf endpoint in POJO data format The camel-cxf endpoint consumer POJO data format is based on the CXF invoker , so the message header has a property with the name of CxfConstants.OPERATION_NAME and the message body is a list of the SEI method parameters. Consider the PersonProcessor example code: public class PersonProcessor implements Processor { private static final Logger LOG = LoggerFactory.getLogger(PersonProcessor.class); @Override @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { LOG.info("processing exchange in camel"); BindingOperationInfo boi = (BindingOperationInfo) exchange.getProperty(BindingOperationInfo.class.getName()); if (boi != null) { LOG.info("boi.isUnwrapped" + boi.isUnwrapped()); } // Get the parameters list which element is the holder. MessageContentsList msgList = (MessageContentsList) exchange.getIn().getBody(); Holder<String> personId = (Holder<String>) msgList.get(0); Holder<String> ssn = (Holder<String>) msgList.get(1); Holder<String> name = (Holder<String>) msgList.get(2); if (personId.value == null || personId.value.length() == 0) { LOG.info("person id 123, so throwing exception"); // Try to throw out the soap fault message org.apache.camel.wsdl_first.types.UnknownPersonFault personFault = new org.apache.camel.wsdl_first.types.UnknownPersonFault(); personFault.setPersonId(""); org.apache.camel.wsdl_first.UnknownPersonFault fault = new org.apache.camel.wsdl_first.UnknownPersonFault("Get the null value of person name", personFault); exchange.getMessage().setBody(fault); return; } name.value = "Bonjour"; ssn.value = "123"; LOG.info("setting Bonjour as the response"); // Set the response message, first element is the return value of the operation, // the others are the holders of method parameters exchange.getMessage().setBody(new Object[] { null, personId, ssn, name }); } } 18.10. How to prepare the message for the camel-cxf endpoint in POJO data format The camel-cxf endpoint producer is based on the CXF client API . First you need to specify the operation name in the message header, then add the method parameters to a list, and initialize the message with this parameter list. The response message's body is a messageContentsList, you can get the result from that list. If you don't specify the operation name in the message header, CxfProducer will try to use the defaultOperationName from CxfEndpoint , if there is no defaultOperationName set on CxfEndpoint , it will pick up the first operationName from the Operation list. 
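As a compact illustration of the steps described above, the following sketch sends a POJO-mode request with a ProducerTemplate. The endpoint bean id, operation name, and parameter value are illustrative only and are not taken from this guide:

import java.util.ArrayList;
import java.util.List;

import org.apache.camel.ProducerTemplate;
import org.apache.camel.component.cxf.common.message.CxfConstants;
import org.apache.cxf.message.MessageContentsList;

public class PojoInvokeSketch {

    public static String invokeGreetMe(ProducerTemplate template, String text) {
        // The SEI method parameters are added to a list, in declaration order
        List<Object> params = new ArrayList<>();
        params.add(text);

        // The operation name is supplied as a message header
        Object reply = template.requestBodyAndHeader(
                "cxf:bean:serviceEndpoint",              // assumed CxfEndpoint bean id
                params,
                CxfConstants.OPERATION_NAME, "greetMe"); // assumed operation name

        // In POJO mode the response body is a MessageContentsList;
        // element 0 holds the return value of the invoked operation
        MessageContentsList result = (MessageContentsList) reply;
        return (String) result.get(0);
    }
}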
If you want to get the object array from the message body, you can get the body using message.getBody(Object[].class) , as shown in CxfProducerRouterTest.testInvokingSimpleServerWithParams : Exchange senderExchange = new DefaultExchange(context, ExchangePattern.InOut); final List<String> params = new ArrayList<>(); // Prepare the request message for the camel-cxf procedure params.add(TEST_MESSAGE); senderExchange.getIn().setBody(params); senderExchange.getIn().setHeader(CxfConstants.OPERATION_NAME, ECHO_OPERATION); Exchange exchange = template.send("direct:EndpointA", senderExchange); org.apache.camel.Message out = exchange.getMessage(); // The response message's body is an MessageContentsList which first element is the return value of the operation, // If there are some holder parameters, the holder parameter will be filled in the reset of List. // The result will be extract from the MessageContentsList with the String class type MessageContentsList result = (MessageContentsList) out.getBody(); LOG.info("Received output text: " + result.get(0)); Map<String, Object> responseContext = CastUtils.cast((Map<?, ?>) out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals("UTF-8", responseContext.get(org.apache.cxf.message.Message.ENCODING), "We should get the response context here"); assertEquals("echo " + TEST_MESSAGE, result.get(0), "Reply body on Camel is wrong"); 18.11. How to deal with the message for a camel-cxf endpoint in PAYLOAD data format PAYLOAD means that you process the payload from the SOAP envelope as a native CxfPayload. Message.getBody() will return a org.apache.camel.component.cxf.CxfPayload object, with getters for SOAP message headers and the SOAP body. See CxfConsumerPayloadTest : protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(simpleEndpointURI + "&dataFormat=PAYLOAD").to("log:info").process(new Processor() { @SuppressWarnings("unchecked") public void process(final Exchange exchange) throws Exception { CxfPayload<SoapHeader> requestPayload = exchange.getIn().getBody(CxfPayload.class); List<Source> inElements = requestPayload.getBodySources(); List<Source> outElements = new ArrayList<>(); // You can use a customer toStringConverter to turn a CxfPayLoad message into String as you want String request = exchange.getIn().getBody(String.class); XmlConverter converter = new XmlConverter(); String documentString = ECHO_RESPONSE; Element in = new XmlConverter().toDOMElement(inElements.get(0)); // Just check the element namespace if (!in.getNamespaceURI().equals(ELEMENT_NAMESPACE)) { throw new IllegalArgumentException("Wrong element namespace"); } if (in.getLocalName().equals("echoBoolean")) { documentString = ECHO_BOOLEAN_RESPONSE; checkRequest("ECHO_BOOLEAN_REQUEST", request); } else { documentString = ECHO_RESPONSE; checkRequest("ECHO_REQUEST", request); } Document outDocument = converter.toDOMDocument(documentString, exchange); outElements.add(new DOMSource(outDocument.getDocumentElement())); // set the payload header with null CxfPayload<SoapHeader> responsePayload = new CxfPayload<>(null, outElements, null); exchange.getMessage().setBody(responsePayload); } }); } }; } 18.12. How to get and set SOAP headers in POJO mode POJO means that the data format is a "list of Java objects" when the camel-cxf endpoint produces or consumes Camel exchanges. Even though Camel exposes the message body as POJOs in this mode, camel-cxf still provides access to read and write SOAP headers. 
However, since CXF interceptors remove in-band SOAP headers from the header list after they have been processed, only out-of-band SOAP headers are available to camel-cxf in POJO mode. The following example illustrates how to get/set SOAP headers. Suppose we have a route that forwards from one Camel-cxf endpoint to another. That is, SOAP Client Camel CXF service. We can attach two processors to obtain/insert SOAP headers at (1) before a request goes out to the CXF service and (2) before the response comes back to the SOAP Client. Processor (1) and (2) in this example are InsertRequestOutHeaderProcessor and InsertResponseOutHeaderProcessor. Our route looks like this: from("cxf:bean:routerRelayEndpointWithInsertion") .process(new InsertRequestOutHeaderProcessor()) .to("cxf:bean:serviceRelayEndpointWithInsertion") .process(new InsertResponseOutHeaderProcessor()); The Bean routerRelayEndpointWithInsertion and serviceRelayEndpointWithInsertion are defined as follows: @Bean public CxfEndpoint routerRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertion"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } @Bean public CxfEndpoint serviceRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress("http://localhost:" + port + "/services/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertionBackend"); cxfEndpoint.setWsdlURL("soap_header.wsdl"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf("{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } SOAP headers are propagated to and from Camel Message headers. The Camel message header name is "org.apache.cxf.headers.Header.list" which is a constant defined in CXF (org.apache.cxf.headers.Header.HEADER_LIST). The header value is a List of CXF SoapHeader objects (org.apache.cxf.binding.soap.SoapHeader). The following snippet is the InsertResponseOutHeaderProcessor (that insert a new SOAP header in the response message). The way to access SOAP headers in both InsertResponseOutHeaderProcessor and InsertRequestOutHeaderProcessor are actually the same. The only difference between the two processors is setting the direction of the inserted SOAP header. 
You can find the InsertResponseOutHeaderProcessor example in CxfMessageHeadersRelayTest : public static class InsertResponseOutHeaderProcessor implements Processor { public void process(Exchange exchange) throws Exception { List<SoapHeader> soapHeaders = CastUtils.cast((List<?>)exchange.getIn().getHeader(Header.HEADER_LIST)); // Insert a new header String xml = "<?xml version=\"1.0\" encoding=\"utf-8\"?><outofbandHeader " + "xmlns=\"http://cxf.apache.org/outofband/Header\" hdrAttribute=\"testHdrAttribute\" " + "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\" soap:mustUnderstand=\"1\">" + "<name>New_testOobHeader</name><value>New_testOobHeaderValue</value></outofbandHeader>"; SoapHeader newHeader = new SoapHeader(soapHeaders.get(0).getName(), DOMUtils.readXml(new StringReader(xml)).getDocumentElement()); // make sure direction is OUT since it is a response message. newHeader.setDirection(Direction.DIRECTION_OUT); //newHeader.setMustUnderstand(false); soapHeaders.add(newHeader); } } 18.13. How to get and set SOAP headers in PAYLOAD mode We've already shown how to access the SOAP message as CxfPayload object in PAYLOAD mode inm the section How to deal with the message for a camel-cxf endpoint in PAYLOAD data format . Once you obtain a CxfPayload object, you can invoke the CxfPayload.getHeaders() method that returns a List of DOM Elements (SOAP headers). For an example see CxfPayLoadSoapHeaderTest : from(getRouterEndpointURI()).process(new Processor() { @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> payload = exchange.getIn().getBody(CxfPayload.class); List<Source> elements = payload.getBodySources(); assertNotNull(elements, "We should get the elements here"); assertEquals(1, elements.size(), "Get the wrong elements size"); Element el = new XmlConverter().toDOMElement(elements.get(0)); elements.set(0, new DOMSource(el)); assertEquals("http://camel.apache.org/pizza/types", el.getNamespaceURI(), "Get the wrong namespace URI"); List<SoapHeader> headers = payload.getHeaders(); assertNotNull(headers, "We should get the headers here"); assertEquals(1, headers.size(), "Get the wrong headers size"); assertEquals("http://camel.apache.org/pizza/types", ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); // alternatively you can also get the SOAP header via the camel header: headers = exchange.getIn().getHeader(Header.HEADER_LIST, List.class); assertNotNull(headers, "We should get the headers here"); assertEquals(1, headers.size(), "Get the wrong headers size"); assertEquals("http://camel.apache.org/pizza/types", ((Element) (headers.get(0).getObject())).getNamespaceURI(), "Get the wrong namespace URI"); } }) .to(getServiceEndpointURI()); You can also use the same way as described in sub-chapter "How to get and set SOAP headers in POJO mode" to set or get the SOAP headers. So, you can use the header "org.apache.cxf.headers.Header.list" to get and set a list of SOAP headers.This does also mean that if you have a route that forwards from one Camel-cxf endpoint to another (SOAP Client Camel CXF service), now also the SOAP headers sent by the SOAP client are forwarded to the CXF service. If you do not want that these headers are forwarded you have to remove them in the Camel header "org.apache.cxf.headers.Header.list". 18.14. SOAP headers are not available in RAW mode SOAP headers are not available in RAW mode as SOAP processing is skipped. 18.15. 
How to throw a SOAP Fault from Camel If you are using a camel-cxf endpoint to consume the SOAP request, you may need to throw the SOAP Fault from the camel context. Basically, you can use the throwFault DSL to do that; it works for POJO , PAYLOAD and MESSAGE data format. You can define the soap fault as shown in CxfCustomizedExceptionTest : SOAP_FAULT = new SoapFault(EXCEPTION_MESSAGE, SoapFault.FAULT_CODE_CLIENT); Element detail = SOAP_FAULT.getOrCreateDetail(); Document doc = detail.getOwnerDocument(); Text tn = doc.createTextNode(DETAIL_TEXT); detail.appendChild(tn); Then throw it as you like from(routerEndpointURI).setFaultBody(constant(SOAP_FAULT)); If your CXF endpoint is working in the MESSAGE data format, you could set the SOAP Fault message in the message body and set the response code in the message header as demonstrated by CxfMessageStreamExceptionTest from(routerEndpointURI).process(new Processor() { public void process(Exchange exchange) throws Exception { Message out = exchange.getOut(); // Set the message body with the out.setBody(this.getClass().getResourceAsStream("SoapFaultMessage.xml")); // Set the response code here out.setHeader(org.apache.cxf.message.Message.RESPONSE_CODE, new Integer(500)); } }); Same for using POJO data format. You can set the SOAPFault on the out body. 18.16. How to propagate a camel-cxf endpoint's request and response context CXF client API provides a way to invoke the operation with request and response context. If you are using a camel-cxf endpoint producer to invoke the outside web service, you can set the request context and get response context with the following code: CxfExchange exchange = (CxfExchange)template.send(getJaxwsEndpointUri(), new Processor() { public void process(final Exchange exchange) { final List<String> params = new ArrayList<String>(); params.add(TEST_MESSAGE); // Set the request context to the inMessage Map<String, Object> requestContext = new HashMap<String, Object>(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, JAXWS_SERVER_ADDRESS); exchange.getIn().setBody(params); exchange.getIn().setHeader(Client.REQUEST_CONTEXT , requestContext); exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, GREET_ME_OPERATION); } }); org.apache.camel.Message out = exchange.getOut(); // The output is an object array, the first element of the array is the return value Object\[\] output = out.getBody(Object\[\].class); LOG.info("Received output text: " + output\[0\]); // Get the response context form outMessage Map<String, Object> responseContext = CastUtils.cast((Map)out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals("Get the wrong wsdl operation name", "{http://apache.org/hello_world_soap_http}greetMe", responseContext.get("javax.xml.ws.wsdl.operation").toString()); 18.17. Attachment Support POJO Mode: Both SOAP with Attachment and MTOM are supported (see example in Payload Mode for enabling MTOM). However, SOAP with Attachment is not tested. Since attachments are marshalled and unmarshalled into POJOs, users typically do not need to deal with the attachment themself. Attachments are propagated to Camel message's attachments if the MTOM is not enabled. So, it is possible to retrieve attachments by Camel Message API DataHandler Message.getAttachment(String id) Payload Mode: MTOM is supported by the component. Attachments can be retrieved by Camel Message APIs mentioned above. SOAP with Attachment (SwA) is supported and attachments can be retrieved. 
SwA is the default (same as setting the CXF endpoint property "mtom-enabled" to false). To enable MTOM, set the CXF endpoint property "mtom-enabled" to true . @Bean public CxfEndpoint routerEndpoint() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceNameAsQName(SERVICE_QNAME); cxfEndpoint.setEndpointNameAsQName(PORT_QNAME); cxfEndpoint.setAddress("/" + getClass().getSimpleName()+ "/jaxws-mtom/hello"); cxfEndpoint.setWsdlURL("mtom.wsdl"); Map<String, Object> properties = new HashMap<String, Object>(); properties.put("dataFormat", "PAYLOAD"); properties.put("mtom-enabled", true); cxfEndpoint.setProperties(properties); return cxfEndpoint; } You can produce a Camel message with attachment to send to a CXF endpoint in Payload mode. Exchange exchange = context.createProducerTemplate().send("direct:testEndpoint", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut); List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.REQ_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> body = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getIn().setBody(body); exchange.getIn().addAttachment(MtomTestHelper.REQ_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.REQ_PHOTO_DATA, "application/octet-stream"))); exchange.getIn().addAttachment(MtomTestHelper.REQ_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.requestJpeg, "image/jpeg"))); } }); // process response CxfPayload<SoapHeader> out = exchange.getOut().getBody(CxfPayload.class); Assert.assertEquals(1, out.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS); ns.put("xop", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element oute = new XmlConverter().toDOMElement(out.getBody().get(0)); Element ele = (Element)xu.getValue("//ns:DetailResponse/ns:photo/xop:Include", oute, XPathConstants.NODE); String photoId = ele.getAttribute("href").substring(4); // skip "cid:" ele = (Element)xu.getValue("//ns:DetailResponse/ns:image/xop:Include", oute, XPathConstants.NODE); String imageId = ele.getAttribute("href").substring(4); // skip "cid:" DataHandler dr = exchange.getOut().getAttachment(photoId); Assert.assertEquals("application/octet-stream", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.RESP_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getOut().getAttachment(imageId); Assert.assertEquals("image/jpeg", dr.getContentType()); BufferedImage image = ImageIO.read(dr.getInputStream()); Assert.assertEquals(560, image.getWidth()); Assert.assertEquals(300, image.getHeight()); You can also consume a Camel message received from a CXF endpoint in Payload mode. 
The CxfMtomConsumerPayloadModeTest illustrates how this works: public static class MyProcessor implements Processor { @SuppressWarnings("unchecked") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> in = exchange.getIn().getBody(CxfPayload.class); // verify request Assert.assertEquals(1, in.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put("ns", MtomTestHelper.SERVICE_TYPES_NS); ns.put("xop", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element body = new XmlConverter().toDOMElement(in.getBody().get(0)); Element ele = (Element)xu.getValue("//ns:Detail/ns:photo/xop:Include", body, XPathConstants.NODE); String photoId = ele.getAttribute("href").substring(4); // skip "cid:" Assert.assertEquals(MtomTestHelper.REQ_PHOTO_CID, photoId); ele = (Element)xu.getValue("//ns:Detail/ns:image/xop:Include", body, XPathConstants.NODE); String imageId = ele.getAttribute("href").substring(4); // skip "cid:" Assert.assertEquals(MtomTestHelper.REQ_IMAGE_CID, imageId); DataHandler dr = exchange.getIn().getAttachment(photoId); Assert.assertEquals("application/octet-stream", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.REQ_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getIn().getAttachment(imageId); Assert.assertEquals("image/jpeg", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.requestJpeg, IOUtils.readBytesFromStream(dr.getInputStream())); // create response List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.RESP_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> sbody = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getOut().setBody(sbody); exchange.getOut().addAttachment(MtomTestHelper.RESP_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.RESP_PHOTO_DATA, "application/octet-stream"))); exchange.getOut().addAttachment(MtomTestHelper.RESP_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.responseJpeg, "image/jpeg"))); } } Raw Mode: Attachments are not supported, because this mode does not process the message at all. CXF_RAW Mode: MTOM is supported, and attachments can be retrieved by the Camel Message APIs mentioned above. Note that when receiving a multipart (that is, MTOM) message, the default SOAPMessage-to-String converter provides the complete multipart payload in the body. If you require just the SOAP XML as a String, you can set the message body with message.getSOAPPart(), and Camel's type converters can do the rest of the work for you. 18.18. Streaming Support in PAYLOAD mode The camel-cxf component now supports streaming of incoming messages when using PAYLOAD mode. Previously, the incoming messages would have been completely DOM parsed. For large messages, this is time consuming and uses a significant amount of memory. The incoming messages can remain as a javax.xml.transform.Source while being routed and, if nothing modifies the payload, can then be directly streamed out to the target destination. For common "simple proxy" use cases (example: from("cxf:... ").to("cxf:... ")), this can provide very significant performance increases as well as significantly lowered memory requirements. However, there are cases where streaming may not be appropriate or desired. Due to the streaming nature, invalid incoming XML may not be caught until later in the processing chain.
Also, certain actions may require the message to be DOM parsed anyway (such as WS-Security or message tracing), in which case the advantages of streaming are limited. At this point, there are three ways to control the streaming: Endpoint property: you can add "allowStreaming=false" as an endpoint property to turn the streaming on or off. Component property: the CxfComponent object also has an allowStreaming property that sets the default for endpoints created from that component. Global system property: you can set the system property "org.apache.camel.component.cxf.streaming" to "false" to turn it off. That sets the global default, but setting the endpoint property above will override this value for that endpoint. 18.19. Using the generic CXF Dispatch mode The camel-cxf component supports the generic CXF dispatch mode that can transport messages of arbitrary structures (i.e., not bound to a specific XML schema). To use this mode, you simply omit specifying the wsdlURL and serviceClass attributes of the CXF endpoint. <cxf:cxfEndpoint id="testEndpoint" address="http://localhost:9000/SoapContext/SoapAnyPort"> <cxf:properties> <entry key="dataFormat" value="PAYLOAD"/> </cxf:properties> </cxf:cxfEndpoint> Note that the default CXF dispatch client does not send a specific SOAPAction header. Therefore, when the target service requires a specific SOAPAction value, supply it in the Camel header using the key SOAPAction (case-insensitive). 18.20. Spring Boot Auto-Configuration When using cxf with Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-soap-starter</artifactId> </dependency> The component supports 13 options, which are listed below. Name Description Default Type camel.component.cxf.allow-streaming This option controls whether the CXF component, when running in PAYLOAD mode, will DOM parse the incoming messages into DOM Elements or keep the payload as a javax.xml.transform.Source object that would allow streaming in some cases. Boolean camel.component.cxf.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cxf.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cxf.enabled Whether to enable auto configuration of the cxf component. This is enabled by default. Boolean camel.component.cxf.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.cxf.lazy-start-producer Whether the producer should be started lazy (on the first message).
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.cxf.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean camel.component.cxfrs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.cxfrs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.cxfrs.enabled Whether to enable auto configuration of the cxfrs component. This is enabled by default. Boolean camel.component.cxfrs.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. HeaderFilterStrategy camel.component.cxfrs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.cxfrs.use-global-ssl-context-parameters Enable usage of global SSL context parameters. false Boolean
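Some of these component-level options can also be controlled per endpoint in the route itself. The following is a minimal sketch, not taken from the examples above, that applies the allowStreaming endpoint option described in the streaming section; the route class and the endpoint bean names are illustrative and assume the camel-cxf-soap-starter dependency is on the classpath:

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class CxfPayloadProxyRoute extends RouteBuilder {

    @Override
    public void configure() {
        // allowStreaming=false forces DOM parsing for this endpoint only, overriding
        // the component-level camel.component.cxf.allow-streaming default.
        from("cxf:bean:routerEndpoint?dataFormat=PAYLOAD&allowStreaming=false")
            .to("cxf:bean:serviceEndpoint?dataFormat=PAYLOAD");
    }
}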
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cxf-soap</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>", "cxf:bean:cxfEndpoint[?options]", "cxf://someAddress[?options]", "cxf:bean:cxfEndpoint?wsdlURL=wsdl/hello_world.wsdl&dataFormat=PAYLOAD", "cxf:beanId:address", "@Bean public CxfEndpoint serviceEndpoint(LoggingOutInterceptor loggingOutInterceptor) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services\" + SERVICE_ADDRESS); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.HelloService.class); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"RAW\"); cxfEndpoint.setProperties(properties); cxfEndpoint.getOutInterceptors().add(loggingOutInterceptor); return cxfEndpoint; } @Bean public LoggingOutInterceptor loggingOutInterceptor() { LoggingOutInterceptor logger = new LoggingOutInterceptor(\"write\"); return logger; }", "<cxf:cxfEndpoint ...> <cxf:properties> <entry key=\"org.apache.camel.cxf.message.headers.relays\"> <list> <ref bean=\"customHeadersRelay\"/> </list> </entry> </cxf:properties> </cxf:cxfEndpoint> <bean id=\"customHeadersRelay\" class=\"org.apache.camel.component.cxf.soap.headers.CustomHeadersRelay\"/>", "@Bean public HeaderFilterStrategy dropAllMessageHeadersStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); headerFilterStrategy.setRelayHeaders(false); return headerFilterStrategy; }", "@Bean public CxfEndpoint routerNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpoint\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; } @Bean public CxfEndpoint serviceNoRelayEndpoint(HeaderFilterStrategy dropAllMessageHeadersStrategy) { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services/CxfMessageHeadersRelayTest/HeaderService/routerNoRelayEndpointBackend\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortNoRelay\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); cxfEndpoint.setProperties(properties); cxfEndpoint.setHeaderFilterStrategy(dropAllMessageHeadersStrategy); return cxfEndpoint; }", "rom(\"cxf:bean:routerNoRelayEndpoint\") .to(\"cxf:bean:serviceNoRelayEndpoint\");", "@Bean public HeaderFilterStrategy customMessageFilterStrategy() { CxfHeaderFilterStrategy headerFilterStrategy = new CxfHeaderFilterStrategy(); List<MessageHeaderFilter> headerFilterList = new ArrayList<MessageHeaderFilter>(); 
headerFilterList.add(new SoapMessageHeaderFilter()); headerFilterList.add(new CustomHeaderFilter()); headerFilterStrategy.setMessageHeaderFilters(headerFilterList); return headerFilterStrategy; }", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cxf=\"http://camel.apache.org/schema/cxf/jaxws\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <cxf:cxfEndpoint id=\"routerEndpoint\" address=\"http://localhost:9003/CamelContext/RouterPort\" serviceClass=\"org.apache.hello_world_soap_http.GreeterImpl\"/> <cxf:cxfEndpoint id=\"serviceEndpoint\" address=\"http://localhost:9000/SoapContext/SoapPort\" wsdlURL=\"testutils/hello_world.wsdl\" serviceClass=\"org.apache.hello_world_soap_http.Greeter\" endpointName=\"s:SoapPort\" serviceName=\"s:SOAPService\" xmlns:s=\"http://apache.org/hello_world_soap_http\" /> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"cxf:bean:routerEndpoint\" /> <to uri=\"cxf:bean:serviceEndpoint\" /> </route> </camelContext> </beans>", "<cxf:cxfEndpoint id=\"testEndpoint\" address=\"http://localhost:9000/router\" serviceClass=\"org.apache.camel.component.cxf.HelloService\" endpointName=\"s:PortName\" serviceName=\"s:ServiceName\" xmlns:s=\"http://www.example.com/test\"> <cxf:properties> <entry key=\"dataFormat\" value=\"RAW\"/> <entry key=\"setDefaultBus\" value=\"true\"/> </cxf:properties> </cxf:cxfEndpoint>", "@ImportResource({ \"classpath:spring-configuration.xml\" })", "Error:sendSms: SoapFault exception: [Client] looks like we got no XML document in [...]", "public class WriteXmlDeclarationInterceptor extends AbstractPhaseInterceptor<SoapMessage> { public WriteXmlDeclarationInterceptor() { super(Phase.PRE_STREAM); addBefore(StaxOutInterceptor.class.getName()); } public void handleMessage(SoapMessage message) throws Fault { message.put(\"org.apache.cxf.stax.force-start-document\", Boolean.TRUE); } }", "// set up the response context which force start document Map<String, Object> map = new HashMap<String, Object>(); map.put(\"org.apache.cxf.stax.force-start-document\", Boolean.TRUE); exchange.getOut().setHeader(Client.RESPONSE_CONTEXT, map);", "// set up the service address from the message header to override the setting of CXF endpoint exchange.getIn().setHeader(Exchange.DESTINATION_OVERRIDE_URL, constant(getServiceAddress()));", "public class PersonProcessor implements Processor { private static final Logger LOG = LoggerFactory.getLogger(PersonProcessor.class); @Override @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { LOG.info(\"processing exchange in camel\"); BindingOperationInfo boi = (BindingOperationInfo) exchange.getProperty(BindingOperationInfo.class.getName()); if (boi != null) { LOG.info(\"boi.isUnwrapped\" + boi.isUnwrapped()); } // Get the parameters list which element is the holder. 
MessageContentsList msgList = (MessageContentsList) exchange.getIn().getBody(); Holder<String> personId = (Holder<String>) msgList.get(0); Holder<String> ssn = (Holder<String>) msgList.get(1); Holder<String> name = (Holder<String>) msgList.get(2); if (personId.value == null || personId.value.length() == 0) { LOG.info(\"person id 123, so throwing exception\"); // Try to throw out the soap fault message org.apache.camel.wsdl_first.types.UnknownPersonFault personFault = new org.apache.camel.wsdl_first.types.UnknownPersonFault(); personFault.setPersonId(\"\"); org.apache.camel.wsdl_first.UnknownPersonFault fault = new org.apache.camel.wsdl_first.UnknownPersonFault(\"Get the null value of person name\", personFault); exchange.getMessage().setBody(fault); return; } name.value = \"Bonjour\"; ssn.value = \"123\"; LOG.info(\"setting Bonjour as the response\"); // Set the response message, first element is the return value of the operation, // the others are the holders of method parameters exchange.getMessage().setBody(new Object[] { null, personId, ssn, name }); } }", "Exchange senderExchange = new DefaultExchange(context, ExchangePattern.InOut); final List<String> params = new ArrayList<>(); // Prepare the request message for the camel-cxf procedure params.add(TEST_MESSAGE); senderExchange.getIn().setBody(params); senderExchange.getIn().setHeader(CxfConstants.OPERATION_NAME, ECHO_OPERATION); Exchange exchange = template.send(\"direct:EndpointA\", senderExchange); org.apache.camel.Message out = exchange.getMessage(); // The response message's body is an MessageContentsList which first element is the return value of the operation, // If there are some holder parameters, the holder parameter will be filled in the reset of List. // The result will be extract from the MessageContentsList with the String class type MessageContentsList result = (MessageContentsList) out.getBody(); LOG.info(\"Received output text: \" + result.get(0)); Map<String, Object> responseContext = CastUtils.cast((Map<?, ?>) out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals(\"UTF-8\", responseContext.get(org.apache.cxf.message.Message.ENCODING), \"We should get the response context here\"); assertEquals(\"echo \" + TEST_MESSAGE, result.get(0), \"Reply body on Camel is wrong\");", "protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(simpleEndpointURI + \"&dataFormat=PAYLOAD\").to(\"log:info\").process(new Processor() { @SuppressWarnings(\"unchecked\") public void process(final Exchange exchange) throws Exception { CxfPayload<SoapHeader> requestPayload = exchange.getIn().getBody(CxfPayload.class); List<Source> inElements = requestPayload.getBodySources(); List<Source> outElements = new ArrayList<>(); // You can use a customer toStringConverter to turn a CxfPayLoad message into String as you want String request = exchange.getIn().getBody(String.class); XmlConverter converter = new XmlConverter(); String documentString = ECHO_RESPONSE; Element in = new XmlConverter().toDOMElement(inElements.get(0)); // Just check the element namespace if (!in.getNamespaceURI().equals(ELEMENT_NAMESPACE)) { throw new IllegalArgumentException(\"Wrong element namespace\"); } if (in.getLocalName().equals(\"echoBoolean\")) { documentString = ECHO_BOOLEAN_RESPONSE; checkRequest(\"ECHO_BOOLEAN_REQUEST\", request); } else { documentString = ECHO_RESPONSE; checkRequest(\"ECHO_REQUEST\", request); } Document outDocument = converter.toDOMDocument(documentString, exchange); 
outElements.add(new DOMSource(outDocument.getDocumentElement())); // set the payload header with null CxfPayload<SoapHeader> responsePayload = new CxfPayload<>(null, outElements, null); exchange.getMessage().setBody(responsePayload); } }); } }; }", "from(\"cxf:bean:routerRelayEndpointWithInsertion\") .process(new InsertRequestOutHeaderProcessor()) .to(\"cxf:bean:serviceRelayEndpointWithInsertion\") .process(new InsertResponseOutHeaderProcessor());", "@Bean public CxfEndpoint routerRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertion\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; } @Bean public CxfEndpoint serviceRelayEndpointWithInsertion() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); cxfEndpoint.setServiceClass(org.apache.camel.component.cxf.soap.headers.HeaderTester.class); cxfEndpoint.setAddress(\"http://localhost:\" + port + \"/services/CxfMessageHeadersRelayTest/HeaderService/routerRelayEndpointWithInsertionBackend\"); cxfEndpoint.setWsdlURL(\"soap_header.wsdl\"); cxfEndpoint.setEndpointNameAsQName( QName.valueOf(\"{http://apache.org/camel/component/cxf/soap/headers}SoapPortRelayWithInsertion\")); cxfEndpoint.setServiceNameAsQName(SERVICENAME); cxfEndpoint.getFeatures().add(new LoggingFeature()); return cxfEndpoint; }", "public static class InsertResponseOutHeaderProcessor implements Processor { public void process(Exchange exchange) throws Exception { List<SoapHeader> soapHeaders = CastUtils.cast((List<?>)exchange.getIn().getHeader(Header.HEADER_LIST)); // Insert a new header String xml = \"<?xml version=\\\"1.0\\\" encoding=\\\"utf-8\\\"?><outofbandHeader \" + \"xmlns=\\\"http://cxf.apache.org/outofband/Header\\\" hdrAttribute=\\\"testHdrAttribute\\\" \" + \"xmlns:soap=\\\"http://schemas.xmlsoap.org/soap/envelope/\\\" soap:mustUnderstand=\\\"1\\\">\" + \"<name>New_testOobHeader</name><value>New_testOobHeaderValue</value></outofbandHeader>\"; SoapHeader newHeader = new SoapHeader(soapHeaders.get(0).getName(), DOMUtils.readXml(new StringReader(xml)).getDocumentElement()); // make sure direction is OUT since it is a response message. 
newHeader.setDirection(Direction.DIRECTION_OUT); //newHeader.setMustUnderstand(false); soapHeaders.add(newHeader); } }", "from(getRouterEndpointURI()).process(new Processor() { @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> payload = exchange.getIn().getBody(CxfPayload.class); List<Source> elements = payload.getBodySources(); assertNotNull(elements, \"We should get the elements here\"); assertEquals(1, elements.size(), \"Get the wrong elements size\"); Element el = new XmlConverter().toDOMElement(elements.get(0)); elements.set(0, new DOMSource(el)); assertEquals(\"http://camel.apache.org/pizza/types\", el.getNamespaceURI(), \"Get the wrong namespace URI\"); List<SoapHeader> headers = payload.getHeaders(); assertNotNull(headers, \"We should get the headers here\"); assertEquals(1, headers.size(), \"Get the wrong headers size\"); assertEquals(\"http://camel.apache.org/pizza/types\", ((Element) (headers.get(0).getObject())).getNamespaceURI(), \"Get the wrong namespace URI\"); // alternatively you can also get the SOAP header via the camel header: headers = exchange.getIn().getHeader(Header.HEADER_LIST, List.class); assertNotNull(headers, \"We should get the headers here\"); assertEquals(1, headers.size(), \"Get the wrong headers size\"); assertEquals(\"http://camel.apache.org/pizza/types\", ((Element) (headers.get(0).getObject())).getNamespaceURI(), \"Get the wrong namespace URI\"); } }) .to(getServiceEndpointURI());", "SOAP_FAULT = new SoapFault(EXCEPTION_MESSAGE, SoapFault.FAULT_CODE_CLIENT); Element detail = SOAP_FAULT.getOrCreateDetail(); Document doc = detail.getOwnerDocument(); Text tn = doc.createTextNode(DETAIL_TEXT); detail.appendChild(tn);", "from(routerEndpointURI).setFaultBody(constant(SOAP_FAULT));", "from(routerEndpointURI).process(new Processor() { public void process(Exchange exchange) throws Exception { Message out = exchange.getOut(); // Set the message body with the out.setBody(this.getClass().getResourceAsStream(\"SoapFaultMessage.xml\")); // Set the response code here out.setHeader(org.apache.cxf.message.Message.RESPONSE_CODE, new Integer(500)); } });", "CxfExchange exchange = (CxfExchange)template.send(getJaxwsEndpointUri(), new Processor() { public void process(final Exchange exchange) { final List<String> params = new ArrayList<String>(); params.add(TEST_MESSAGE); // Set the request context to the inMessage Map<String, Object> requestContext = new HashMap<String, Object>(); requestContext.put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, JAXWS_SERVER_ADDRESS); exchange.getIn().setBody(params); exchange.getIn().setHeader(Client.REQUEST_CONTEXT , requestContext); exchange.getIn().setHeader(CxfConstants.OPERATION_NAME, GREET_ME_OPERATION); } }); org.apache.camel.Message out = exchange.getOut(); // The output is an object array, the first element of the array is the return value Object\\[\\] output = out.getBody(Object\\[\\].class); LOG.info(\"Received output text: \" + output\\[0\\]); // Get the response context form outMessage Map<String, Object> responseContext = CastUtils.cast((Map)out.getHeader(Client.RESPONSE_CONTEXT)); assertNotNull(responseContext); assertEquals(\"Get the wrong wsdl operation name\", \"{http://apache.org/hello_world_soap_http}greetMe\", responseContext.get(\"javax.xml.ws.wsdl.operation\").toString());", "DataHandler Message.getAttachment(String id)", "@Bean public CxfEndpoint routerEndpoint() { CxfSpringEndpoint cxfEndpoint = new CxfSpringEndpoint(); 
cxfEndpoint.setServiceNameAsQName(SERVICE_QNAME); cxfEndpoint.setEndpointNameAsQName(PORT_QNAME); cxfEndpoint.setAddress(\"/\" + getClass().getSimpleName()+ \"/jaxws-mtom/hello\"); cxfEndpoint.setWsdlURL(\"mtom.wsdl\"); Map<String, Object> properties = new HashMap<String, Object>(); properties.put(\"dataFormat\", \"PAYLOAD\"); properties.put(\"mtom-enabled\", true); cxfEndpoint.setProperties(properties); return cxfEndpoint; }", "Exchange exchange = context.createProducerTemplate().send(\"direct:testEndpoint\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.setPattern(ExchangePattern.InOut); List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.REQ_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> body = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getIn().setBody(body); exchange.getIn().addAttachment(MtomTestHelper.REQ_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.REQ_PHOTO_DATA, \"application/octet-stream\"))); exchange.getIn().addAttachment(MtomTestHelper.REQ_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.requestJpeg, \"image/jpeg\"))); } }); // process response CxfPayload<SoapHeader> out = exchange.getOut().getBody(CxfPayload.class); Assert.assertEquals(1, out.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put(\"ns\", MtomTestHelper.SERVICE_TYPES_NS); ns.put(\"xop\", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element oute = new XmlConverter().toDOMElement(out.getBody().get(0)); Element ele = (Element)xu.getValue(\"//ns:DetailResponse/ns:photo/xop:Include\", oute, XPathConstants.NODE); String photoId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" ele = (Element)xu.getValue(\"//ns:DetailResponse/ns:image/xop:Include\", oute, XPathConstants.NODE); String imageId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" DataHandler dr = exchange.getOut().getAttachment(photoId); Assert.assertEquals(\"application/octet-stream\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.RESP_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getOut().getAttachment(imageId); Assert.assertEquals(\"image/jpeg\", dr.getContentType()); BufferedImage image = ImageIO.read(dr.getInputStream()); Assert.assertEquals(560, image.getWidth()); Assert.assertEquals(300, image.getHeight());", "public static class MyProcessor implements Processor { @SuppressWarnings(\"unchecked\") public void process(Exchange exchange) throws Exception { CxfPayload<SoapHeader> in = exchange.getIn().getBody(CxfPayload.class); // verify request Assert.assertEquals(1, in.getBody().size()); Map<String, String> ns = new HashMap<String, String>(); ns.put(\"ns\", MtomTestHelper.SERVICE_TYPES_NS); ns.put(\"xop\", MtomTestHelper.XOP_NS); XPathUtils xu = new XPathUtils(ns); Element body = new XmlConverter().toDOMElement(in.getBody().get(0)); Element ele = (Element)xu.getValue(\"//ns:Detail/ns:photo/xop:Include\", body, XPathConstants.NODE); String photoId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" Assert.assertEquals(MtomTestHelper.REQ_PHOTO_CID, photoId); ele = (Element)xu.getValue(\"//ns:Detail/ns:image/xop:Include\", body, XPathConstants.NODE); String imageId = ele.getAttribute(\"href\").substring(4); // skip \"cid:\" Assert.assertEquals(MtomTestHelper.REQ_IMAGE_CID, imageId); DataHandler dr = exchange.getIn().getAttachment(photoId); 
Assert.assertEquals(\"application/octet-stream\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.REQ_PHOTO_DATA, IOUtils.readBytesFromStream(dr.getInputStream())); dr = exchange.getIn().getAttachment(imageId); Assert.assertEquals(\"image/jpeg\", dr.getContentType()); MtomTestHelper.assertEquals(MtomTestHelper.requestJpeg, IOUtils.readBytesFromStream(dr.getInputStream())); // create response List<Source> elements = new ArrayList<Source>(); elements.add(new DOMSource(DOMUtils.readXml(new StringReader(MtomTestHelper.RESP_MESSAGE)).getDocumentElement())); CxfPayload<SoapHeader> sbody = new CxfPayload<SoapHeader>(new ArrayList<SoapHeader>(), elements, null); exchange.getOut().setBody(sbody); exchange.getOut().addAttachment(MtomTestHelper.RESP_PHOTO_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.RESP_PHOTO_DATA, \"application/octet-stream\"))); exchange.getOut().addAttachment(MtomTestHelper.RESP_IMAGE_CID, new DataHandler(new ByteArrayDataSource(MtomTestHelper.responseJpeg, \"image/jpeg\"))); } }", "<cxf:cxfEndpoint id=\"testEndpoint\" address=\"http://localhost:9000/SoapContext/SoapAnyPort\"> <cxf:properties> <entry key=\"dataFormat\" value=\"PAYLOAD\"/> </cxf:properties> </cxf:cxfEndpoint>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-cxf-soap-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-cxf-component-starter
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/installing_and_using_red_hat_build_of_openjdk_8_for_windows/making-open-source-more-inclusive
Chapter 1. OpenShift Container Platform installation overview
Chapter 1. OpenShift Container Platform installation overview 1.1. About OpenShift Container Platform installation The OpenShift Container Platform installation program offers four methods for deploying a cluster which are detailed in the following list: Interactive : You can deploy a cluster with the web-based Assisted Installer . This is an ideal approach for clusters with networks connected to the internet. The Assisted Installer is the easiest way to install OpenShift Container Platform, it provides smart defaults, and it performs pre-flight validations before installing the cluster. It also provides a RESTful API for automation and advanced configuration scenarios. Local Agent-based : You can deploy a cluster locally with the Agent-based Installer for disconnected environments or restricted networks. It provides many of the benefits of the Assisted Installer, but you must download and configure the Agent-based Installer first. Configuration is done with a command-line interface. This approach is ideal for disconnected environments. Automated : You can deploy a cluster on installer-provisioned infrastructure. The installation program uses each cluster host's baseboard management controller (BMC) for provisioning. You can deploy clusters in connected or disconnected environments. Full control : You can deploy a cluster on infrastructure that you prepare and maintain, which provides maximum customizability. You can deploy clusters in connected or disconnected environments. Each method deploys a cluster with the following characteristics: Highly available infrastructure with no single points of failure, which is available by default. Administrators can control what updates are applied and when. 1.1.1. About the installation program You can use the installation program to deploy each type of cluster. The installation program generates the main assets, such as Ignition config files for the bootstrap, control plane, and compute machines. You can start an OpenShift Container Platform cluster with these three machine configurations, provided you correctly configured the infrastructure. The OpenShift Container Platform installation program uses a set of targets and dependencies to manage cluster installations. The installation program has a set of targets that it must achieve, and each target has a set of dependencies. Because each target is only concerned with its own dependencies, the installation program can act to achieve multiple targets in parallel with the ultimate target being a running cluster. The installation program recognizes and uses existing components instead of running commands to create them again because the program meets the dependencies. Figure 1.1. OpenShift Container Platform installation targets and dependencies 1.1.2. About Red Hat Enterprise Linux CoreOS (RHCOS) Post-installation, each cluster machine uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. RHCOS is the immutable container host version of Red Hat Enterprise Linux (RHEL) and features a RHEL kernel with SELinux enabled by default. RHCOS includes the kubelet , which is the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes. Every control plane machine in an OpenShift Container Platform 4.15 cluster must use RHCOS, which includes a critical first-boot provisioning tool called Ignition. This tool enables the cluster to configure the machines. 
Operating system updates are delivered as a bootable container image, using OSTree as a backend, that is deployed across the cluster by the Machine Config Operator. Actual operating system changes are made in-place on each machine as an atomic operation by using rpm-ostree . Together, these technologies enable OpenShift Container Platform to manage the operating system like it manages any other application on the cluster, by in-place upgrades that keep the entire platform up to date. These in-place updates can reduce the burden on operations teams. If you use RHCOS as the operating system for all cluster machines, the cluster manages all aspects of its components and machines, including the operating system. Because of this, only the installation program and the Machine Config Operator can change machines. The installation program uses Ignition config files to set the exact state of each machine, and the Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. 1.1.3. Glossary of common terms for OpenShift Container Platform installing The glossary defines common terms that relate to the installation content. Read the following list of terms to better understand the installation process. Assisted Installer An installer hosted at console.redhat.com that provides a web-based user interface or a RESTful API for creating a cluster configuration. The Assisted Installer generates a discovery image. Cluster machines boot with the discovery image, which installs RHCOS and an agent. Together, the Assisted Installer and agent provide preinstallation validation and installation for the cluster. Agent-based Installer An installer similar to the Assisted Installer, but you must download the Agent-based Installer first. The Agent-based Installer is ideal for disconnected environments. Bootstrap node A temporary machine that runs a minimal Kubernetes configuration required to deploy the OpenShift Container Platform control plane. Control plane A container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. Also known as control plane machines. Compute node Nodes that are responsible for executing workloads for cluster users. Also known as worker nodes. Disconnected installation In some situations, parts of a data center might not have access to the internet, even through proxy servers. You can still install the OpenShift Container Platform in these environments, but you must download the required software and images and make them available to the disconnected environment. The OpenShift Container Platform installation program A program that provisions the infrastructure and deploys a cluster. Installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. Ignition config files A file that the Ignition tool uses to configure Red Hat Enterprise Linux CoreOS (RHCOS) during operating system initialization. The installation program generates different Ignition configuration files to initialize bootstrap, control plane, and worker nodes. Kubernetes manifests Specifications of a Kubernetes API object in a JSON or YAML format. A configuration file can include deployments, config maps, secrets, daemonsets, and so on. Kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Load balancers A load balancer serves as the single point of contact for clients. 
Load balancers for the API distribute incoming traffic across control plane nodes. Machine Config Operator An Operator that manages and applies configurations and updates of the base operating system and container runtime, including everything between the kernel and kubelet, for the nodes in the cluster. Operators The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An operator takes human operational knowledge and encodes it into software that is easily packaged and shared with customers. User-provisioned infrastructure You can install OpenShift Container Platform on infrastructure that you provide. You can use the installation program to generate the assets required to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. 1.1.4. Installation process Except for the Assisted Installer, when you install an OpenShift Container Platform cluster, you must download the installation program from the appropriate Cluster Type page on the OpenShift Cluster Manager Hybrid Cloud Console. This console manages: REST API for accounts. Registry tokens, which are the pull secrets that you use to obtain the required components. Cluster registration, which associates the cluster identity to your Red Hat account to facilitate the gathering of usage metrics. In OpenShift Container Platform 4.15, the installation program is a Go binary file that performs a series of file transformations on a set of assets. The way you interact with the installation program differs depending on your installation type. Consider the following installation use cases: To deploy a cluster with the Assisted Installer, you must configure the cluster settings by using the Assisted Installer . There is no installation program to download and configure. After you finish setting the cluster configuration, you download a discovery ISO and then boot cluster machines with that image. You can install clusters with the Assisted Installer on Nutanix, vSphere, and bare metal with full integration, and other platforms without integration. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. To deploy clusters with the Agent-based Installer, you can download the Agent-based Installer first. You can then configure the cluster and generate a discovery image. You boot cluster machines with the discovery image, which installs an agent that communicates with the installation program and handles the provisioning for you instead of you interacting with the installation program or setting up a provisioner machine yourself. You must provide all of the cluster infrastructure and resources, including the networking, load balancing, storage, and individual cluster machines. This approach is ideal for disconnected environments. For clusters with installer-provisioned infrastructure, you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster, except if you install on bare metal. If you install on bare metal, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. 
If you provision and manage the infrastructure for your cluster, you must provide all of the cluster infrastructure and resources, including the bootstrap machine, networking, load balancing, storage, and individual cluster machines. For the installation program, the program uses three sets of files during installation: an installation configuration file that is named install-config.yaml , Kubernetes manifests, and Ignition config files for your machine types. Important You can modify Kubernetes and the Ignition config files that control the underlying RHCOS operating system during installation. However, no validation is available to confirm the suitability of any modifications that you make to these objects. If you modify these objects, you might render your cluster non-functional. Because of this risk, modifying Kubernetes and Ignition config files is not supported unless you are following documented procedures or are instructed to do so by Red Hat support. The installation configuration file is transformed into Kubernetes manifests, and then the manifests are wrapped into Ignition config files. The installation program uses these Ignition config files to create the cluster. The installation configuration files are all pruned when you run the installation program, so be sure to back up all the configuration files that you want to use again. Important You cannot modify the parameters that you set during installation, but you can modify many cluster attributes after installation. The installation process with the Assisted Installer Installation with the Assisted Installer involves creating a cluster configuration interactively by using the web-based user interface or the RESTful API. The Assisted Installer user interface prompts you for required values and provides reasonable default values for the remaining parameters, unless you change them in the user interface or with the API. The Assisted Installer generates a discovery image, which you download and use to boot the cluster machines. The image installs RHCOS and an agent, and the agent handles the provisioning for you. You can install OpenShift Container Platform with the Assisted Installer and full integration on Nutanix, vSphere, and bare metal. Additionally, you can install OpenShift Container Platform with the Assisted Installer on other platforms without integration. OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. If possible, use the Assisted Installer feature to avoid having to download and configure the Agent-based Installer. The installation process with Agent-based infrastructure Agent-based installation is similar to using the Assisted Installer, except that you must initially download and install the Agent-based Installer . An Agent-based installation is useful when you want the convenience of the Assisted Installer, but you need to install a cluster in a disconnected environment. If possible, use the Agent-based installation feature to avoid having to create a provisioner machine with a bootstrap VM, and then provision and maintain the cluster infrastructure. The installation process with installer-provisioned infrastructure The default installation type uses installer-provisioned infrastructure. 
By default, the installation program acts as an installation wizard, prompting you for values that it cannot determine on its own and providing reasonable default values for the remaining parameters. You can also customize the installation process to support advanced infrastructure scenarios. The installation program provisions the underlying infrastructure for the cluster. You can install either a standard cluster or a customized cluster. With a standard cluster, you provide minimum details that are required to install the cluster. With a customized cluster, you can specify more details about the platform, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. If possible, use this feature to avoid having to provision and maintain the cluster infrastructure. In all other environments, you use the installation program to generate the assets that you require to provision your cluster infrastructure. With installer-provisioned infrastructure clusters, OpenShift Container Platform manages all aspects of the cluster, including the operating system itself. Each machine boots with a configuration that references resources hosted in the cluster that it joins. This configuration allows the cluster to manage itself as updates are applied. The installation process with user-provisioned infrastructure You can also install OpenShift Container Platform on infrastructure that you provide. You use the installation program to generate the assets that you require to provision the cluster infrastructure, create the cluster infrastructure, and then deploy the cluster to the infrastructure that you provided. If you do not use infrastructure that the installation program provisioned, you must manage and maintain the cluster resources yourself. The following list details some of these self-managed resources: The underlying infrastructure for the control plane and compute machines that make up the cluster Load balancers Cluster networking, including the DNS records and required subnets Storage for the cluster infrastructure and applications If your cluster uses user-provisioned infrastructure, you have the option of adding RHEL compute machines to your cluster. Installation process details When a cluster is provisioned, each machine in the cluster requires information about the cluster. OpenShift Container Platform uses a temporary bootstrap machine during initial configuration to provide the required information to the permanent control plane. The temporary bootstrap machine boots by using an Ignition config file that describes how to create the cluster. The bootstrap machine creates the control plane machines that make up the control plane. The control plane machines then create the compute machines, which are also known as worker machines. The following figure illustrates this process: Figure 1.2. Creating the bootstrap, control plane, and compute machines After the cluster machines initialize, the bootstrap machine is destroyed. All clusters use the bootstrap process to initialize the cluster, but if you provision the infrastructure for your cluster, you must complete many of the steps manually. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. Consider using Ignition config files within 12 hours after they are generated, because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Bootstrapping a cluster involves the following steps: The bootstrap machine boots and starts hosting the remote resources required for the control plane machines to boot. If you provision the infrastructure, this step requires manual intervention. The bootstrap machine starts a single-node etcd cluster and a temporary Kubernetes control plane. The control plane machines fetch the remote resources from the bootstrap machine and finish booting. If you provision the infrastructure, this step requires manual intervention. The temporary control plane schedules the production control plane to the production control plane machines. The Cluster Version Operator (CVO) comes online and installs the etcd Operator. The etcd Operator scales up etcd on all control plane nodes. The temporary control plane shuts down and passes control to the production control plane. The bootstrap machine injects OpenShift Container Platform components into the production control plane. The installation program shuts down the bootstrap machine. If you provision the infrastructure, this step requires manual intervention. The control plane sets up the compute nodes. The control plane installs additional services in the form of a set of Operators. The result of this bootstrapping process is a running OpenShift Container Platform cluster. The cluster then downloads and configures remaining components needed for the day-to-day operations, including the creation of compute machines in supported environments. Additional resources Red Hat OpenShift Network Calculator 1.1.5. Verifying node state after installation The OpenShift Container Platform installation completes when the following installation health checks are successful: The provisioner can access the OpenShift Container Platform web console. All control plane nodes are ready. All cluster Operators are available. Note After the installation completes, the specific cluster Operators responsible for the worker nodes continuously attempt to provision all worker nodes. Some time is required before all worker nodes report as READY . For installations on bare metal, wait a minimum of 60 minutes before troubleshooting a worker node. For installations on all other platforms, wait a minimum of 40 minutes before troubleshooting a worker node. A DEGRADED state for the cluster Operators responsible for the worker nodes depends on the Operators' own resources and not on the state of the nodes. After your installation completes, you can continue to monitor the condition of the nodes in your cluster. Prerequisites The installation program resolves successfully in the terminal. 
Procedure Show the status of all worker nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a Show the phase of all worker machine nodes: USD oc get machines -A Example output NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m Additional resources Getting the BareMetalHost resource Following the progress of the installation Validating an installation Agent-based Installer Assisted Installer for OpenShift Container Platform Installation scope The scope of the OpenShift Container Platform installation program is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more configuration tasks after installation completes. Additional resources See Available cluster customizations for details about OpenShift Container Platform configuration resources. 1.1.6. OpenShift Local overview OpenShift Local supports rapid application development to get started building OpenShift Container Platform clusters. OpenShift Local is designed to run on a local computer to simplify setup and testing, and to emulate the cloud development environment locally with all of the tools needed to develop container-based applications. Regardless of the programming language you use, OpenShift Local hosts your application and brings a minimal, preconfigured Red Hat OpenShift Container Platform cluster to your local PC without the need for a server-based infrastructure. On a hosted environment, OpenShift Local can create microservices, convert them into images, and run them in Kubernetes-hosted containers directly on your laptop or desktop running Linux, macOS, or Windows 10 or later. For more information about OpenShift Local, see Red Hat OpenShift Local Overview . 1.2. Supported platforms for OpenShift Container Platform clusters In OpenShift Container Platform 4.15, you can install a cluster that uses installer-provisioned infrastructure on the following platforms: Alibaba Cloud Amazon Web Services (AWS) Bare metal Google Cloud Platform (GCP) IBM Cloud(R) Microsoft Azure Microsoft Azure Stack Hub Nutanix Red Hat OpenStack Platform (RHOSP) The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware vSphere For these clusters, all machines, including the computer that you run the installation process on, must have direct internet access to pull images for platform containers and provide telemetry data to Red Hat. Important After installation, the following changes are not supported: Mixing cloud provider platforms. Mixing cloud provider components. 
For example, using a persistent storage framework from another platform on the platform where you installed the cluster. In OpenShift Container Platform 4.15, you can install a cluster that uses user-provisioned infrastructure on the following platforms: AWS Azure Azure Stack Hub Bare metal GCP IBM Power(R) IBM Z(R) or IBM(R) LinuxONE RHOSP The latest OpenShift Container Platform release supports both the latest RHOSP long-life release and intermediate release. For complete RHOSP release compatibility, see the OpenShift Container Platform on RHOSP support matrix . VMware Cloud on AWS VMware vSphere Depending on the supported cases for the platform, you can perform installations on user-provisioned infrastructure, so that you can run machines with full internet access, place your cluster behind a proxy, or perform a disconnected installation. In a disconnected installation, you can download the images that are required to install a cluster, place them in a mirror registry, and use that data to install your cluster. While you require internet access to pull images for platform containers, with a disconnected installation on vSphere or bare metal infrastructure, your cluster machines do not require direct internet access. The OpenShift Container Platform 4.x Tested Integrations page contains details about integration testing for different platforms. Additional resources See Supported installation methods for different platforms for more information about the types of installations that are available for each supported platform. See Selecting a cluster installation method and preparing it for users for information about choosing an installation method and preparing the required resources. Red Hat OpenShift Network Calculator can help you design your cluster network during both the deployment and expansion phases. It addresses common questions related to the cluster network and provides output in a convenient JSON format.
[ "oc get nodes", "NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a", "oc get machines -A", "NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installation_overview/ocp-installation-overview
Chapter 7. sVirt
Chapter 7. sVirt sVirt is a technology included in Red Hat Enterprise Linux 6 that integrates SELinux and virtualization. sVirt applies Mandatory Access Control (MAC) to improve security when using virtual machines. The main reasons for integrating these technologies are to improve security and harden the system against bugs in the hypervisor that might be used as an attack vector aimed toward the host or to another virtual machine. This chapter describes how sVirt integrates with virtualization technologies in Red Hat Enterprise Linux 6. Non-Virtualized Environment In a non-virtualized environment, hosts are separated from each other physically and each host has a self-contained environment, consisting of services such as a Web server, or a DNS server. These services communicate directly to their own user space, host kernel and physical host, offering their services directly to the network. The following image represents a non-virtualized environment: Virtualized Environment In a virtualized environment, several operating systems can be housed (as "guests") within a single host kernel and physical host. The following image represents a virtualized environment: 7.1. Security and Virtualization When services are not virtualized, machines are physically separated. Any exploit is usually contained to the affected machine, with the obvious exception of network attacks. When services are grouped together in a virtualized environment, extra vulnerabilities emerge in the system. If there is a security flaw in the hypervisor that can be exploited by a guest instance, this guest may be able to not only attack the host, but also other guests running on that host. This is not theoretical; attacks already exist on hypervisors. These attacks can extend beyond the guest instance and could expose other guests to attack. sVirt is an effort to isolate guests and limit their ability to launch further attacks if exploited. This is demonstrated in the following image, where an attack cannot break out of the virtual machine and extend to another host instance: SELinux introduces a pluggable security framework for virtualized instances in its implementation of Mandatory Access Control (MAC). The sVirt framework allows guests and their resources to be uniquely labeled. Once labeled, rules can be applied which can reject access between different guests.
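The per-guest labels that sVirt applies can be inspected on a running host. The commands below are a sketch; the exact contexts and MCS category pairs vary from guest to guest and are generated by libvirt:

ps -eZ | grep qemu-kvm
ls -Z /var/lib/libvirt/images/

A confined guest process typically runs with a context such as system_u:system_r:svirt_t with a unique MCS category pair, and its disk image carries a matching svirt_image_t label, which is what prevents one guest from reading another guest's resources.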
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/chap-security-enhanced_linux-svirt
Chapter 1. About Serverless
Chapter 1. About Serverless 1.1. OpenShift Serverless overview OpenShift Serverless provides Kubernetes native building blocks that enable developers to create and deploy serverless, event-driven applications on OpenShift Container Platform. OpenShift Serverless is based on the open source Knative project , which provides portability and consistency for hybrid and multi-cloud environments by enabling an enterprise-grade serverless platform. Note Because OpenShift Serverless releases on a different cadence from OpenShift Container Platform, the OpenShift Serverless documentation is now available as a separate documentation set at Red Hat OpenShift Serverless .
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/serverless/about-serverless
Chapter 1. About OpenShift Virtualization
Chapter 1. About OpenShift Virtualization Learn about OpenShift Virtualization's capabilities and support scope. 1.1. What you can do with OpenShift Virtualization OpenShift Virtualization is an add-on to OpenShift Container Platform that allows you to run and manage virtual machine workloads alongside container workloads. OpenShift Virtualization adds new objects into your OpenShift Container Platform cluster by using Kubernetes custom resources to enable virtualization tasks. These tasks include: Creating and managing Linux and Windows virtual machines Connecting to virtual machines through a variety of consoles and CLI tools Importing and cloning existing virtual machines Managing network interface controllers and storage disks attached to virtual machines Live migrating virtual machines between nodes An enhanced web console provides a graphical portal to manage these virtualized resources alongside the OpenShift Container Platform cluster containers and infrastructure. OpenShift Virtualization is designed and tested to work well with Red Hat OpenShift Data Foundation features. Important When you deploy OpenShift Virtualization with OpenShift Data Foundation, you must create a dedicated storage class for Windows virtual machine disks. See Optimizing ODF PersistentVolumes for Windows VMs for details. You can use OpenShift Virtualization with the OVN-Kubernetes , OpenShift SDN , or one of the other certified default Container Network Interface (CNI) network providers listed in Certified OpenShift CNI Plugins . 1.1.1. OpenShift Virtualization supported cluster version OpenShift Virtualization 4.10 is supported for use on OpenShift Container Platform 4.10 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.
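Once the add-on is installed, virtual machines are managed as cluster resources from the web console or the CLI. The commands below are a sketch that assumes the virtctl client is installed and a virtual machine named my-vm exists in the current project:

oc get vms
virtctl start my-vm
virtctl console my-vm

oc get vms lists the VirtualMachine custom resources, and virtctl provides start, stop, and console access from the command line.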
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/virtualization/about-virt
3.10. Example: Activate Storage Domains
3.10. Example: Activate Storage Domains This example activates the data1 and iso1 storage domains for the Red Hat Virtualization Manager's use. Example 3.11. Activate data1 storage domain Request: cURL command: Example 3.12. Activate iso1 storage domain Request: cURL command: This activates both storage domains for use with the data center.
[ "POST /ovirt-engine/api/datacenters/d70d5e2d-b8ad-494a-a4d2-c7a5631073c4/storagedomains/ 9ca7cb40-9a2a-4513-acef-dc254af57aac/activate HTTP/1.1 Accept: application/xml Content-type: application/xml <action/>", "curl -X POST -H \"Accept: application/xml\" -H \"Content-Type: application/xml\" -u [USER:PASS] --cacert [CERT] -d \"<action/>\" https:// [RHEVM Host] :443/ovirt-engine/api/datacenters/d70d5e2d-b8ad-494a-a4d2-c7a5631073c4/storagedomains/9ca7cb40-9a2a-4513-acef-dc254af57aac/activate", "POST /ovirt-engine/api/datacenters/d70d5e2d-b8ad-494a-a4d2-c7a5631073c4/storagedomains/ 00f0d9ce-da15-4b9e-9e3e-3c898fa8b6da/activate HTTP/1.1 Accept: application/xml Content-type: application/xml <action/>", "curl -X POST -H \"Accept: application/xml\" -H \"Content-Type: application/xml\" -u [USER:PASS] --cacert [CERT] -d \"<action/>\" https:// [RHEVM Host] :443/ovirt-engine/api/datacenters/d70d5e2d-b8ad-494a-a4d2-c7a5631073c4/storagedomains/00f0d9ce-da15-4b9e-9e3e-3c898fa8b6da/activate" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/example_activate_storage_domains
Chapter 7. GitOps CLI for use with Red Hat OpenShift GitOps
Chapter 7. GitOps CLI for use with Red Hat OpenShift GitOps The GitOps argocd CLI is a tool for configuring and managing Red Hat OpenShift GitOps and Argo CD resources from a terminal. With the GitOps CLI, you can perform common GitOps tasks quickly and consistently from the command line; a brief example session is shown after the additional resources below. You can install this CLI tool on different platforms. 7.1. Installing the GitOps CLI See Installing the GitOps CLI . 7.2. Additional resources What is GitOps?
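A typical session authenticates against the Argo CD server and then inspects or syncs applications. The following is a sketch; the server route, user name, and application name are placeholders:

argocd login openshift-gitops-server-openshift-gitops.apps.example.com --username admin
argocd app list
argocd app sync my-app

argocd login prompts for the password if it is not supplied on the command line, and argocd app sync triggers a refresh of the named application against its Git source.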
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/gitops-argocd-cli-tools
A.7. Loop Device Errors
A.7. Loop Device Errors If file-based guest images are used, you may have to increase the number of configured loop devices. The default configuration allows up to eight active loop devices. If more than eight file-based guests or loop devices are needed, the number of configured loop devices can be adjusted in the /etc/modprobe.d/ directory. Add the following line: This example uses 64, but you can specify another number to set the maximum loop value. You may also have to implement loop-device-backed guests on your system. To use a loop-device-backed guest for a fully virtualized system, use the phy: device or file: file commands.
[ "options loop max_loop=64" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-troubleshooting-loop_device_errors
Chapter 6. Installing automation controller
Chapter 6. Installing automation controller With the installation of the Ansible Automation Platform operator completed, the following steps install an automation controller within a Red Hat OpenShift cluster. Note The resource requests and limits values are specific to this reference environment. Ensure to read the Chapter 3, Before you start section to properly calculate the values for your Red Hat OpenShift environment. Warning When an instance of automation controller is removed, the associated Persistent Volume Claims (PVCs) are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the deployment. It is recommended to remove old PVCs prior to deploying a new automation controller instance in the same namespace. The steps to remove deployment PVCs can be found within Appendix B, Delete existing PVCs from AAP installations . Log in to the Red Hat OpenShift web console using your cluster credentials. In the left-hand navigation menu, select Operators Installed Operators , select Ansible Automation Platform . Navigate to the Automation Controller tab, then click Create AutomationController . Within the Form view, provide a Name , e.g. my-automation-controller and select the Advanced configuration to expand the additional options. Within the Additional configuration , set the appropriate Resource Requirements for each container as calculated from the Before you Start section. Expand Web Container Resource Requirements Limits: CPU cores: 2000m, Memory: 1.5Gi Requests: CPU cores: 500m, Memory: 1.5Gi Expand Task Container Resource Requirements Limits: CPU cores: 4000m, Memory: 8Gi Requests: CPU cores: 1000m, Memory: 8Gi Expand EE Control Plane Container Resource Requirements Limits: CPU cores: 500m, Memory: 400Mi Requests: CPU cores: 100m, Memory: 400Mi Expand Redis Container Resource Requirements Limits: CPU cores: 500m, Memory: 1.5Gi Requests: CPU cores: 250m, Memory: 1.5Gi Expand PostgreSQL Container Resource Requirements Limits: CPU cores: 1000m, Memory: 1Gi Requests: CPU cores: 500m, Memory: 1Gi At the top of the Create AutomationController page, toggle the YAML view Within the spec: section add the extra_settings parameter to pass the AWX_CONTROL_NODE_TASK_IMPACT value calculated in the Chapter 3, Before you start section Within the YAML view , add the following to the spec section to add dedicated node for your control pod. Note Ensure to have your node label and taints to the appropriate dedicated worker node that shall run the control pods. Details to set can be found within Appendix C, Applying labels and taints to Red Hat OpenShift node . Click the Create button
[ "spec: extra_settings: - setting: AWX_CONTROL_NODE_TASK_IMPACT value: \"5\"", "spec: node_selector: | aap_node_type: control topology_spread_constraints: | - maxSkew: 1 topologyKey: \"kubernetes.io/hostname\" whenUnsatisfiable: \"ScheduleAnyway\" labelSelector: matchLabels: aap_node_type: control tolerations: | - key: \"dedicated\" operator: \"Equal\" value: \"AutomationController\" effect: \"NoSchedule\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_ansible_automation_platform_2_on_red_hat_openshift/install_controller
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_user_access_in_private_automation_hub/making-open-source-more-inclusive
2.2.8. Java
2.2.8. Java The java-1.6.0-openjdk package adds support for the Java programming language. This package provides the java interpreter. The java-1.6.0-openjdk-devel package contains the javac compiler, as well as the libraries and header files required for developing Java extensions. Similarly, Red Hat Enterprise Linux also provides Java 7 via the java-1.7.0-openjdk* packages and Java 8 via the java-1.8.0-openjdk* packages. 2.2.8.1. Java Documentation For more information about Java, see man java . Some associated utilities also have their own respective man pages. You can also install other Java documentation packages for more details about specific Java utilities. By convention, such documentation packages have the javadoc suffix (for example, dbus-java-javadoc ). The main site for the development of Java is hosted on http://openjdk.java.net/ . The main site for the library runtime of Java is hosted on http://icedtea.classpath.org .
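As a quick check that both packages are working, a small source file can be compiled and run with the tools they provide. The package installation line assumes a registered Red Hat Enterprise Linux 6 system, and Hello.java stands for any class with a standard main method:

yum install java-1.6.0-openjdk java-1.6.0-openjdk-devel
javac Hello.java
java Hello
java -version

javac comes from the -devel package, while the java launcher and the runtime libraries come from the base package.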
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/libraries.java
Upgrade Red Hat Quay
Upgrade Red Hat Quay Red Hat Quay 3 Upgrade Red Hat Quay Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/upgrade_red_hat_quay/index
Chapter 7. HostFirmwareComponents [metal3.io/v1alpha1]
Chapter 7. HostFirmwareComponents [metal3.io/v1alpha1] Description HostFirmwareComponents is the Schema for the hostfirmwarecomponents API. Type object 7.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object HostFirmwareComponentsSpec defines the desired state of HostFirmwareComponents. status object HostFirmwareComponentsStatus defines the observed state of HostFirmwareComponents. 7.1.1. .spec Description HostFirmwareComponentsSpec defines the desired state of HostFirmwareComponents. Type object Required updates Property Type Description updates array updates[] object FirmwareUpdate defines a firmware update specification. 7.1.2. .spec.updates Description Type array 7.1.3. .spec.updates[] Description FirmwareUpdate defines a firmware update specification. Type object Required component url Property Type Description component string url string 7.1.4. .status Description HostFirmwareComponentsStatus defines the observed state of HostFirmwareComponents. Type object Property Type Description components array Components is the list of all available firmware components and their information. components[] object FirmwareComponentStatus defines the status of a firmware component. conditions array Track whether updates stored in the spec are valid based on the schema conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } lastUpdated string Time that the status was last updated updates array Updates is the list of all firmware components that should be updated they are specified via name and url fields. updates[] object FirmwareUpdate defines a firmware update specification. 7.1.5. .status.components Description Components is the list of all available firmware components and their information. Type array 7.1.6. .status.components[] Description FirmwareComponentStatus defines the status of a firmware component. Type object Required component initialVersion Property Type Description component string currentVersion string initialVersion string lastVersionFlashed string updatedAt string 7.1.7. .status.conditions Description Track whether updates stored in the spec are valid based on the schema Type array 7.1.8. 
.status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 7.1.9. .status.updates Description Updates is the list of all firmware components that should be updated they are specified via name and url fields. Type array 7.1.10. .status.updates[] Description FirmwareUpdate defines a firmware update specification. Type object Required component url Property Type Description component string url string 7.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/hostfirmwarecomponents GET : list objects of kind HostFirmwareComponents /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents DELETE : delete collection of HostFirmwareComponents GET : list objects of kind HostFirmwareComponents POST : create HostFirmwareComponents /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents/{name} DELETE : delete HostFirmwareComponents GET : read the specified HostFirmwareComponents PATCH : partially update the specified HostFirmwareComponents PUT : replace the specified HostFirmwareComponents /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents/{name}/status GET : read status of the specified HostFirmwareComponents PATCH : partially update status of the specified HostFirmwareComponents PUT : replace status of the specified HostFirmwareComponents 7.2.1. /apis/metal3.io/v1alpha1/hostfirmwarecomponents HTTP method GET Description list objects of kind HostFirmwareComponents Table 7.1. 
HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponentsList schema 401 - Unauthorized Empty 7.2.2. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents HTTP method DELETE Description delete collection of HostFirmwareComponents Table 7.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind HostFirmwareComponents Table 7.3. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponentsList schema 401 - Unauthorized Empty HTTP method POST Description create HostFirmwareComponents Table 7.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.5. Body parameters Parameter Type Description body HostFirmwareComponents schema Table 7.6. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 201 - Created HostFirmwareComponents schema 202 - Accepted HostFirmwareComponents schema 401 - Unauthorized Empty 7.2.3. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents/{name} Table 7.7. Global path parameters Parameter Type Description name string name of the HostFirmwareComponents HTTP method DELETE Description delete HostFirmwareComponents Table 7.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 7.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified HostFirmwareComponents Table 7.10. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified HostFirmwareComponents Table 7.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.12. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified HostFirmwareComponents Table 7.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.14. Body parameters Parameter Type Description body HostFirmwareComponents schema Table 7.15. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 201 - Created HostFirmwareComponents schema 401 - Unauthorized Empty 7.2.4. /apis/metal3.io/v1alpha1/namespaces/{namespace}/hostfirmwarecomponents/{name}/status Table 7.16. Global path parameters Parameter Type Description name string name of the HostFirmwareComponents HTTP method GET Description read status of the specified HostFirmwareComponents Table 7.17. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified HostFirmwareComponents Table 7.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.19. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified HostFirmwareComponents Table 7.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.21. Body parameters Parameter Type Description body HostFirmwareComponents schema Table 7.22. HTTP responses HTTP code Reponse body 200 - OK HostFirmwareComponents schema 201 - Created HostFirmwareComponents schema 401 - Unauthorized Empty
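A minimal custom resource for this API sets only the required update fields. The following YAML is a sketch; the resource name, namespace, component name, and firmware URL are illustrative values, not defaults shipped with the product:

apiVersion: metal3.io/v1alpha1
kind: HostFirmwareComponents
metadata:
  name: ostest-worker-0
  namespace: openshift-machine-api
spec:
  updates:
    - component: bmc
      url: http://10.0.0.5/firmware/bmc-2.31.bin

The status block is populated by the controller with the available components, their current versions, and the conditions that track whether the requested updates are valid.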
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/hostfirmwarecomponents-metal3-io-v1alpha1
Preface
Preface The Red Hat Developer Hub is an enterprise-grade, integrated developer platform, extended through plugins, that helps reduce the friction and frustration of developers while boosting their productivity.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/configuring_plugins_in_red_hat_developer_hub/pr01
Appendix A. The Source-to-Image (S2I) build process
Appendix A. The Source-to-Image (S2I) build process Source-to-Image (S2I) is a build tool for generating reproducible Docker-formatted container images from online SCM repositories with application sources. With S2I builds, you can easily deliver the latest version of your application into production with shorter build times, decreased resource and network usage, improved security, and a number of other advantages. OpenShift supports multiple build strategies and input sources . For more information, see the Source-to-Image (S2I) Build chapter of the OpenShift Container Platform documentation. You must provide three elements to the S2I process to assemble the final container image: The application sources hosted in an online SCM repository, such as GitHub. The S2I Builder image, which serves as the foundation for the assembled image and provides the ecosystem in which your application is running. Optionally, you can also provide environment variables and parameters that are used by S2I scripts . The process injects your application source and dependencies into the Builder image according to instructions specified in the S2I script, and generates a Docker-formatted container image that runs the assembled application. For more information, check the S2I build requirements , build options and how builds work sections of the OpenShift Container Platform documentation.
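When the s2i command-line tool is used directly, the three elements map onto the arguments of a single command. The repository URL, builder image, and output image name below are placeholders, and environment variables can be added with the -e option:

s2i build https://github.com/example/vertx-app registry.example.com/vertx-builder my-vertx-app

The same inputs can instead be supplied to an OpenShift BuildConfig, in which case the platform runs the assemble script inside the cluster and pushes the resulting image to the internal registry.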
null
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/eclipse_vert.x_runtime_guide/the-source-to-image-s2i-build-process
Chapter 83. Kubernetes Service Account
Chapter 83. Kubernetes Service Account Since Camel 2.17 Only producer is supported The Kubernetes Service Account component is one of the Kubernetes Components which provides a producer to execute Kubernetes Service Account operations. 83.1. Dependencies When using kubernetes-service-accounts with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 83.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 83.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 83.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 83.3. Component Options The Kubernetes Service Account component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 83.4. 
Endpoint Options The Kubernetes Service Account endpoint is configured using URI syntax: with the following path and query parameters: 83.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 83.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 83.5. Message Headers The Kubernetes Service Account component supports 5 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesServiceAccountsLabels (producer) Constant: KUBERNETES_SERVICE_ACCOUNTS_LABELS The service account labels. Map CamelKubernetesServiceAccountName (producer) Constant: KUBERNETES_SERVICE_ACCOUNT_NAME The service account name. String CamelKubernetesServiceAccount (producer) Constant: KUBERNETES_SERVICE_ACCOUNT A service account object. ServiceAccount 83.6. Supported producer operation listServiceAccounts listServiceAccountsByLabels getServiceAccount createServiceAccount updateServiceAccount deleteServiceAccount 83.7. Kubernetes ServiceAccounts Produce Examples listServiceAccounts: this operation lists the service account on a kubernetes cluster. from("direct:list"). toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts"). to("mock:result"); This operation returns a List of services from your cluster. 
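The returned list can be processed like any other Camel message body. The route below is a sketch that wraps the same listServiceAccounts call in a complete RouteBuilder class; the class name and endpoints are illustrative:

import org.apache.camel.builder.RouteBuilder;

public class ServiceAccountRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Calls the producer operation and logs the size of the returned list of service accounts
        from("direct:list")
            .toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts")
            .log("Found ${body.size()} service accounts")
            .to("mock:result");
    }
}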
listServiceAccountsByLabels: this operation lists the service account by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNTS_LABELS, labels); } }); toF("kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccountsByLabels"). to("mock:result"); This operation returns a List of Services from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 83.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
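For example, a few of the options listed above can be set in a Spring Boot application.properties file. The following is only an illustrative sketch: the property keys come from the tables above, while kubernetesClient is a hypothetical bean name for a client you would register yourself.
# Keep auto configuration on and fail fast when producers cannot start
camel.component.kubernetes-pods.enabled=true
camel.component.kubernetes-pods.lazy-start-producer=false
# Reuse an existing io.fabric8.kubernetes.client.KubernetesClient bean (assumed bean name)
camel.component.kubernetes-pods.kubernetes-client=#kubernetesClient
# Route consumer exceptions to the Camel error handler for the job component
camel.component.kubernetes-job.bridge-error-handler=true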
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-service-accounts:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccounts\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_ACCOUNTS_LABELS, labels); } }); toF(\"kubernetes-service-accounts:///?kubernetesClient=#kubernetesClient&operation=listServiceAccountsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-service-account-component-starter
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To provide feedback, you can highlight the text in a document and add comments. Follow the steps in the procedure to learn about submitting feedback on Red Hat documentation. Prerequisites Log in to the Red Hat Customer Portal. In the Red Hat Customer Portal, view the document in HTML format. Procedure Click the Feedback button to see existing reader comments. Note The feedback feature is enabled only in the HTML format. Highlight the section of the document where you want to provide feedback. In the prompt menu that opens near the text you selected, click Add Feedback . A text box opens in the feedback section on the right side of the page. Enter your feedback in the text box and click Submit . You have created a documentation issue. To view the issue, click the issue tracker link in the feedback view.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/release_notes_for_the_red_hat_build_of_cryostat_2.2/proc-providing-feedback-on-redhat-documentation_cryostat
B.5. Defining Directories Using LDIF
B.5. Defining Directories Using LDIF The contents of an entire directory can be defined using LDIF. Using LDIF is an efficient method of directory creation when there are many entries to add to the directory. To create a directory using LDIF: Create an ASCII file containing the entries to add in LDIF format. Make sure each entry is separated from the next by an empty line. Use just one empty line between entries, and make sure the first line of the file is not blank, or else the ldapmodify utility will exit. For more information, see Section B.4, "Specifying Directory Entries Using LDIF" . Begin each file with the topmost, or root, entry in the database. The root entry must represent the suffix or sub-suffix contained by the database. For example, if the database has the suffix dc=example,dc=com , the first entry in the directory must be dn: dc=example,dc=com . For information on suffixes, see the "Suffix" parameter described in the Red Hat Directory Server Configuration, Command, and File Reference . Make sure that an entry representing a branch point in the LDIF file is placed before the entries to create under that branch. For example, to place an entry in a people and a group subtree, create the branch point for those subtrees before creating entries within those subtrees. Note The LDIF file is read in order, so parent entries must be listed before the child entries. Create the directory from the LDIF file using one of the following methods: Initializing the database through the web console. Use this method if there is a small database to import (less than 10,000 entries). See Section 6.1.3, "Importing Data Using the Web Console" . Warning This method is destructive and will erase any existing data in the suffix. ldif2db or ldif2db.pl command-line utility. Use this method if there is a large database to import (more than 10,000 entries). See Section 6.1.2.2, "Importing Data While the Server is Offline" . ldif2db cannot be used if the server is running. ldif2db.pl can only be used if the server is running. Warning This method is destructive and will erase any existing data in the suffix. ldapmodify command-line utility with the -a parameter. Use this method if a new subtree is being added to an existing database or there is existing data in the suffix which should not be deleted. Unlike the other methods for creating the directory from an LDIF file, Directory Server must be running before a subtree can be added using ldapmodify . See Section 3.1.3, "Adding an Entry" . Example B.1. LDIF File Example This LDIF file contains one domain, two organizational units, and three organizational person entries:
[ "dn: dc=example,dc=com objectclass: top objectclass: domain dc: example description: Fictional example domain dn: ou=People,dc=example,dc=com objectclass: top objectclass: organizationalUnit ou: People description: Fictional example organizational unit tel: 555-5559 dn: cn=June Rossi,ou=People,dc=example,dc=com objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: June Rossi sn: Rossi givenName: June mail: [email protected] userPassword: {sha}KDIE3AL9DK ou: Accounting ou: people telephoneNumber: 2616 roomNumber: 220 dn: cn=Marc Chambers,ou=People,dc=example,dc=com objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Marc Chambers sn: Chambers givenname: Marc mail: [email protected] userPassword: {sha}jdl2alem87dlacz1 telephoneNumber: 2652 ou: Manufacturing ou: People roomNumber: 167 dn: cn=Robert Wong,ou=People,example.com Corp,dc=example,dc=com objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: Robert Wong cn: Bob Wong sn: Wong givenname: Robert givenname: Bob mail: [email protected] userPassword: {sha}nn2msx761 telephoneNumber: 2881 roomNumber: 211 ou: Manufacturing ou: people dn: ou=Groups,dc=example,dc=com objectclass: top objectclass: organizationalUnit ou: groups description: Fictional example organizational unit" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/ldap_data_interchange_format-defining_directories_using_ldif
Chapter 3. Installing the Migration Toolkit for Applications user interface
Chapter 3. Installing the Migration Toolkit for Applications user interface You can install the Migration Toolkit for Applications (MTA) user interface on all Red Hat OpenShift cloud services and Red Hat OpenShift self-managed editions. Important To be able to create MTA instances, you must first install the MTA Operator. The MTA Operator is a structural layer that manages resources deployed on OpenShift, such as database, front end, and back end, to automatically create an MTA instance. 3.1. Persistent volume requirements To successfully deploy, the MTA Operator requires 2 RWO persistent volumes (PVs) used by different components. If the rwx_supported configuration option is set to true , the MTA Operator requires an additional 2 RWX PVs that are used by Maven and the hub file storage. The PVs are described in the following table: Table 3.1. Required persistent volumes Name Default size Access mode Description hub database 10Gi RWO Hub database hub bucket 100Gi RWX Hub file storage; required if the rwx_supported configuration option is set to true keycloak postgresql 1Gi RWO Keycloak back end database cache 100Gi RWX Maven m2 cache; required if the rwx_supported configuration option is set to true 3.2. Installing the Migration Toolkit for Applications Operator and the user interface You can install the Migration Toolkit for Applications (MTA) and the user interface on Red Hat OpenShift versions 4.13-4.15. Prerequisites 4 vCPUs, 8 GB RAM, and 40 GB persistent storage. Any cloud services or self-hosted edition of Red Hat OpenShift on versions 4.13-4.15. You must be logged in as a user with cluster-admin permissions. For more information, see OpenShift Operator Life Cycles . Procedure In the Red Hat OpenShift web console, click Operators OperatorHub . Use the Filter by keyword field to search for MTA . Click the Migration Toolkit for Applications Operator and then click Install . On the Install Operator page, click Install . Click Operators Installed Operators to verify that the MTA Operator appears in the openshift-mta project with the status Succeeded . Click the MTA Operator. Under Provided APIs , locate Tackle , and click Create Instance . The Create Tackle window opens in Form view. Review the custom resource (CR) settings. The default choices should be acceptable, but make sure to check the system requirements for storage, memory, and cores. To work directly with the YAML file, click YAML view and review the CR settings that are listed in the spec section of the YAML file. The most commonly used CR settings are listed in this table: Table 3.2. 
Tackle CR settings Name Default Description cache_data_volume_size 100Gi Size requested for the cache volume; ignored when rwx_supported=false cache_storage_class Default storage class Storage class used for the cache volume; ignored when rwx_supported=false feature_auth_required True Flag to indicate whether keycloak authorization is required (single user/"noauth") feature_isolate_namespace True Flag to indicate whether namespace isolation using network policies is enabled hub_database_volume_size 10Gi Size requested for the Hub database volume hub_bucket_volume_size 100Gi Size requested for the Hub bucket volume hub_bucket_storage_class Default storage class Storage class used for the bucket volume keycloak_database_data_volume_size 1Gi Size requested for the Keycloak database volume pathfinder_database_data_volume_size 1Gi Size requested for the Pathfinder database volume maven_data_volume_size 100Gi Size requested for the Maven m2 cache volume; deprecated in MTA 6.0.1 rwx_storage_class NA Storage class requested for the Tackle RWX volumes; deprecated in MTA 6.0.1 rwx_supported True Flag to indicate whether the cluster storage supports RWX mode rwo_storage_class NA Storage class requested for the Tackle RW0 volumes rhsso_external_access False Flag to indicate whether a dedicated route is created to access the MTA managed RHSSO instance analyzer_container_limits_cpu 1 Maximum number of CPUs the pod is allowed to use analyzer_container_limits_memory 4Gi Maximum amount of memory the pod is allowed to use. You can increase this limit if the pod displays OOMKilled errors. analyzer_container_requests_cpu 1 Minimum number of CPUs the pod needs to run analyzer_container_requests_memory 4Gi Minimum amount of memory the pod needs to run Example YAML file kind: Tackle apiVersion: tackle.konveyor.io/v1alpha1 metadata: name: mta namespace: openshift-mta spec: hub_bucket_volume_size: "25Gi" maven_data_volume_size: "25Gi" rwx_supported: "false" Edit the CR settings if needed, and then click Create . In Administration view, click Workloads Pods to verify that the MTA pods are running. Access the user interface from your browser by using the route exposed by the mta-ui application within OpenShift. Use the following credentials to log in: User name : admin Password : Passw0rd! When prompted, create a new password. 3.2.1. Eviction threshold Each node has a certain amount of memory allocated to it. Some of that memory is reserved for system services. The rest of the memory is intended for running pods. If the pods use more than their allocated amount of memory, an out-of-memory event is triggered and the node is terminated with a OOMKilled error. To prevent out-of-memory events and protect nodes, use the --eviction-hard setting. This setting specifies the threshold of memory availability below which the node evicts pods. The value of the setting can be absolute or a percentage. Example of node memory allocation settings Node capacity: 32Gi --system-reserved setting: 3Gi --eviction-hard setting: 100Mi The amount of memory available for running pods on this node is 28.9 GB. This amount is calculated by subtracting the system-reserved and eviction-hard values from the overall capacity of the node. If the memory usage exceeds this amount, the node starts evicting pods. 3.3. Red Hat Single Sign-On The MTA uses Red Hat Single Sign-On (RHSSO) instance for user authentication and authorization. The MTA operator manages the RHSSO instance and configures a dedicated realm with necessary roles and permissions. 
MTA-managed RHSSO instance allows you to perform advanced RHSSO configurations, such as adding a provider for User Federation or integrating identity providers . To access the RHSSO Admin Console , enter the URL https://<_route_>/auth/admin in your browser by replacing <route> with the MTA web console address. Example: MTA web console: https://mta-openshiftmta.example.com/ RHSSO Admin console: https://mta-openshiftmta.example.com/auth/admin The admin credentials for RHSSO are stored in a secret file named credential-mta-rhsso in the namespace where MTA is installed. To retrieve your admin credentials, run the following command: To create a dedicated route for the RHSSO instance, set the rhsso_external_access parameter to true in the Tackle custom resource (CR) for MTA. No multi-user access restrictions on resources There are no multi-user access restrictions on resources. For example, an analyzer task created by a user can be canceled by any other user. Additional resources Configuring LDAP and Active Directory in RHSSO Red Hat Single Sign-On features and concepts 3.3.1. Roles, Personas, Users, and Permissions MTA makes use of three roles, each of which corresponds to a persona: Table 3.3. Roles and personas Role Persona tackle-admin Administrator tackle-architect Architect tackle-migrator Migrator The roles are already defined in your RHSSO instance. You do not need to create them. If you are an MTA administrator, you can create users in your RHSSO and assign each user one or more roles, one role per persona. 3.3.1.1. Roles, personas, and access to user interface views Although a user can have more than one role, each role corresponds to a specific persona: Administrator: An administrator has all the permissions that architects and migrators have, along with the ability to create some application-wide configuration parameters that other users can consume but cannot change or view. Examples: Git credentials, Maven settings.xml files. Architect: A technical lead for the migration project who can run assessments and can create and modify applications and information related to them. An architect cannot modify or delete sensitive information, but can consume it. Example: Associate an existing credential to the repository of a specific application. Migrator: A user who can analyze applications, but not create, modify, or delete them. As described in User interface views , MTA has two views, Administration and Migration . Only administrators can access Administration view. Architects and migrators have no access to Administration view, they cannot even see it. Administrators can perform all actions supported by Migration view. Architects and migrators can see all elements of Migration view, but their ability to perform actions in Migration view depends on the permissions granted to their role. The ability of administrators, architects, and migrators to access the Administration and Migration views of the MTA user interface is summarized in the table below: Table 3.4. Roles vs. access to MTA views Menu Architect Migrator Admin Administration No No Yes Migration Yes Yes Yes 3.3.1.2. 
Roles and permissions The following table contains the roles and permissions (scopes) that MTA seeds the managed RHSSO instance with: tackle-admin Resource Name Verbs addons delete get post put adoptionplans post get post put applications delete get post put applications.facts delete get post put applications.tags delete get post put applications.bucket delete get post put assessments delete get patch post put businessservices delete get post put dependencies delete get post put identities delete get post put imports delete get post put jobfunctions delete get post put proxies delete get post put reviews delete get post put settings delete get post put stakeholdergroups delete get post put stakeholders delete get post put tags delete get post put tagtypes delete get post put tasks delete get post put tasks.bucket delete get post put tickets delete get post put trackers delete get post put cache delete get files delete get post put rulebundles delete get post put tackle-architect Resource Name Verbs addons delete get post put applications.bucket delete get post put adoptionplans post applications delete get post put applications.facts delete get post put applications.tags delete get post put assessments delete get patch post put businessservices delete get post put dependencies delete get post put identities get imports delete get post put jobfunctions delete get post put proxies get reviews delete get post put settings get stakeholdergroups delete get post put stakeholders delete get post put tags delete get post put tagtypes delete get post put tasks delete get post put tasks.bucket delete get post put trackers get tickets delete get post put cache get files delete get post put rulebundles delete get post put tackle-migrator Resource Name Verbs addons get adoptionplans post applications get applications.facts get applications.tags get applications.bucket get assessments get post businessservices get dependencies delete get post put identities get imports get jobfunctions get proxies get reviews get post put settings get stakeholdergroups get stakeholders get tags get tagtypes get tasks delete get post put tasks.bucket delete get post put tackers get tickets get cache get files get rulebundles get 3.4. Installing and configuring the Migration Toolkit for Applications Operator in a Red Hat OpenShift Local environment Red Hat OpenShift Local provides a quick and easy way to set up a local OpenShift cluster on your desktop or laptop. This local cluster allows you to test your applications and configuration parameters before sending them to production. 3.4.1. Operating system requirements Red Hat OpenShift Local requires the following minimum version of a supported operating system: 3.4.1.1. Red Hat OpenShift Local requirements on Microsoft Windows On Microsoft Windows, Red Hat OpenShift Local requires the Windows 10 Fall Creators Update (version 1709) or later. Red Hat OpenShift Local does not run on earlier versions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported. 3.4.1.2. Red Hat OpenShift Local requirements on macOS On macOS, Red Hat OpenShift Local requires macOS 11 Big Sur or later. Red Hat OpenShift Local does not run on earlier versions of macOS. 3.4.1.3. Red Hat OpenShift Local requirements on Linux On Linux, Red Hat OpenShift Local is supported only on the latest two Red Hat Enterprise Linux 8 and 9 minor releases and on the latest two stable Fedora releases. 
When using Red Hat Enterprise Linux, the machine running Red Hat OpenShift Local must be registered with the Red Hat Customer Portal. Ubuntu 18.04 LTS or later and Debian 10 or later are not supported and might require manual setup of the host machine. 3.4.1.3.1. Required software packages for Linux Red Hat OpenShift Local requires the libvirt and NetworkManager packages to run on Linux: On Fedora and Red Hat Enterprise Linux run: sudo dnf install NetworkManager On Debian/Ubuntu run: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager 3.4.2. Installing the Migration Toolkit for Applications Operator in a Red Hat OpenShift Local environment To install Red Hat OpenShift Local: Download the latest release of Red Hat OpenShift Local for your platform. Download OpenShift Local . Download pull secret . Assuming you saved the archive in the ~/Downloads directory , follow these steps: cd ~/Downloads tar xvf crc-linux-amd64.tar.xz Copy the crc executable to it: cp ~/Downloads/crc-linux-<version-number>-amd64/crc ~/bin/crc Add the ~/bin/crc directory to your USDPATH variable: export PATH=USDPATH:USDHOME/bin/crc echo 'export PATH=USDPATH:USDHOME/bin/crc' >> ~/.bashrc To disable telemetry, run the following command: crc config set consent-telemetry no Note For macOS, download the relevant crc-macos-installer.pkg . Navigate to Downloads using Finder . Double-click on crc-macos-installer.pkg . 3.4.3. Setting up Red Hat OpenShift Local The crc setup command performs operations to set up the environment of your host machine for the Red Hat OpenShift Local instance. The crc setup command creates the ~/.crc directory . Set up your host machine for Red Hat OpenShift Local: crc setup 3.4.4. Starting the Red Hat OpenShift Local instance Red Hat OpenShift Local presets represent a managed container runtime, and the lower bounds of system resources required by the instance to run it. Note On Linux or macOS, ensure that your user account has permission to use the sudo command. On Microsoft Windows, ensure that your user account can elevate to Administrator privileges. The crc start command starts the Red Hat OpenShift Local instance and configured container runtime. It offers the following flags: Flags Type Description Default value -b, --bundle string Bundle path/URI - absolute or local path, HTTP, HTTPS or docker URI, for example, 'https://foo.com/crc_libvirt_4.15.14_amd64.crcbundle', 'docker://quay.io/myorg/crc_libvirt_4.15.14_amd64.crcbundle:2.37.1' default '/home/<user>/.crc/cache/ crc_libvirt_4.15.14_amd64.crcbundle' -c, -cpus int Number of CPU cores to assign to the instance 4 -disable-update-check Do not check for update -d, -disk-size uint Total size in GB of the disk used by the instance 31 -h, -help Help for start -m, -memory int Mi of memory to assign to the instance 10752 -n, -nameserver string IPv4 address of name server to use for the instance -o, -output string Output format in JSON -p, -pull-secret-file string File path of image pull secret (download from https://console.redhat.com/openshift/create/local ) It also offers the following global flags: Flags Type Description Default value -log-level string log level for example: * debug * info * warn * error info The default configuration creates a virtual machine (VM) with 4 virtual CPUs, a disk size of 31 GB, and 10 GB of RAM. However, this default configuration is not sufficent to stably run MTA. 
To increase the number of virtual CPUs to 6, the disk-size to 200 GB, and the memory to 20 GB, run crc config as follows: crc config set cpus 6 crc config set disk-size 200 crc config set memory 20480 To check the configuration, run: crc config view Example Output - consent-telemetry : yes - cpus : 6 - disk-size : 200 - memory : 16384 Note Changes to a configuration property are only applied when the CRC instance is started. If you already have a running CRC instance, for this configuration change to take effect, stop the CRC instance with crc stop and restart it with crc start . 3.4.5. Checking the status of Red Hat OpenShift Local instance To check the status of your Red Hat OpenShift Local instance, run: crc status Example Output CRC VM: Running OpenShift: Starting (v4.15.14) RAM Usage: 9.25GB of 20.97GB Disk Usage: 31.88GB of 212.8GB (Inside the CRC VM) Cache Usage: 26.83GB Cache Directory: /home/<user>/.crc/cache 3.4.6. Configuration of the Migration Toolkit for Applications Operator in a Red Hat OpenShift Local environment The following table shows the recommended minimum configurations of Red Hat OpenShift Local that were tested: Memory (Gi) CPU Disk size (Gi) 20Gi 5 110Gi 20Gi 5 35Gi , with the MTA Operator configurations cache_data_volume_size and hub_bucket_volume_size set to 5Gi . 3.5. Adding minimum requirements for Java analyzer and discovery There is a minimum requirement for the Java analyzer, and also the discovery task, which by default is set to 2 GB. While this minimum requirement can be lowered to 1.5 GB, it is not recommended. You can also increase this minimum requirement to more than 2 GB. kind: Tackle apiVersion: tackle.konveyor.io/v1alpha1 metadata: name: tackle namespace: openshift-mta spec: feature_auth_required: 'true' provider_java_container_limits_memory: 2Gi provider_java_container_requests_memory: 2Gi Note To guarantee scheduling has the correct space, provider_java_container_limits_memory and provider_java_container_requests_memory should be assigned the same amount of space.
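On an existing installation, the same two values can also be raised with a merge patch instead of editing the full CR in the console. This is a sketch only; it assumes the Tackle resource name resolves on your cluster, that the CR is named tackle in the openshift-mta namespace as in the YAML above, and it uses 3Gi purely as an example of a higher setting:
# Raise limit and request together so the scheduling guarantee stays consistent
oc -n openshift-mta patch tackle tackle --type merge -p '{"spec":{"provider_java_container_limits_memory":"3Gi","provider_java_container_requests_memory":"3Gi"}}'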
[ "kind: Tackle apiVersion: tackle.konveyor.io/v1alpha1 metadata: name: mta namespace: openshift-mta spec: hub_bucket_volume_size: \"25Gi\" maven_data_volume_size: \"25Gi\" rwx_supported: \"false\"", "get secret credential-mta-rhsso -o yaml", "sudo dnf install NetworkManager", "sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager", "cd ~/Downloads", "tar xvf crc-linux-amd64.tar.xz", "cp ~/Downloads/crc-linux-<version-number>-amd64/crc ~/bin/crc", "export PATH=USDPATH:USDHOME/bin/crc", "echo 'export PATH=USDPATH:USDHOME/bin/crc' >> ~/.bashrc", "crc config set consent-telemetry no", "crc setup", "crc config set cpus 6", "crc config set disk-size 200", "crc config set memory 20480", "crc config view", "- consent-telemetry : yes - cpus : 6 - disk-size : 200 - memory : 16384", "crc status", "CRC VM: Running OpenShift: Starting (v4.15.14) RAM Usage: 9.25GB of 20.97GB Disk Usage: 31.88GB of 212.8GB (Inside the CRC VM) Cache Usage: 26.83GB Cache Directory: /home/<user>/.crc/cache", "kind: Tackle apiVersion: tackle.konveyor.io/v1alpha1 metadata: name: tackle namespace: openshift-mta spec: feature_auth_required: 'true' provider_java_container_limits_memory: 2Gi provider_java_container_requests_memory: 2Gi" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.1/html/user_interface_guide/mta-7-installing-web-console-on-openshift_user-interface-guide
Chapter 2. Configuring an IBM Cloud account
Chapter 2. Configuring an IBM Cloud account Before you can install OpenShift Container Platform, you must configure an IBM Cloud(R) account. Important IBM Power Virtual Server using installer-provisioned infrastructure is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. Prerequisites You have an IBM Cloud(R) account with a subscription. You cannot install OpenShift Container Platform on a free or on a trial IBM Cloud(R) account. 2.2. Quotas and limits on IBM Power Virtual Server The OpenShift Container Platform cluster uses several IBM Cloud(R) and IBM Power(R) Virtual Server components, and the default quotas and limits affect your ability to install OpenShift Container Platform clusters. If you use certain cluster configurations, deploy your cluster in certain regions, or run multiple clusters from your account, you might need to request additional resources for your IBM Cloud(R) account. For a comprehensive list of the default IBM Cloud(R) quotas and service limits, see the IBM Cloud(R) documentation for Quotas and service limits . Virtual Private Cloud Each OpenShift Container Platform cluster creates its own Virtual Private Cloud (VPC). The default quota of VPCs per region is 10. If you have 10 VPCs created, you will need to increase your quota before attempting an installation. Application load balancer By default, each cluster creates two application load balancers (ALBs): Internal load balancer for the control plane API server External load balancer for the control plane API server You can create additional LoadBalancer service objects to create additional ALBs. The default quota of VPC ALBs are 50 per region. To have more than 50 ALBs, you must increase this quota. VPC ALBs are supported. Classic ALBs are not supported for IBM Power(R) Virtual Server. Cloud connections There is a limit of two cloud connections per IBM Power(R) Virtual Server instance. It is recommended that you have only one cloud connection in your IBM Power(R) Virtual Server instance to serve your cluster. Note Cloud Connections are no longer supported in dal10 . A transit gateway is used instead. Dynamic Host Configuration Protocol Service There is a limit of one Dynamic Host Configuration Protocol (DHCP) service per IBM Power(R) Virtual Server instance. Networking Due to networking limitations, there is a restriction of one OpenShift cluster installed through IPI per zone per account. This is not configurable. Virtual Server Instances By default, a cluster creates server instances with the following resources : 0.5 CPUs 32 GB RAM System Type: s922 Processor Type: uncapped , shared Storage Tier: Tier-3 The following nodes are created: One bootstrap machine, which is removed after the installation is complete Three control plane nodes Three compute nodes For more information, see Creating a Power Systems Virtual Server in the IBM Cloud(R) documentation. 2.3. 
Configuring DNS resolution How you configure DNS resolution depends on the type of OpenShift Container Platform cluster you are installing: If you are installing a public cluster, you use IBM Cloud(R) Internet Services (CIS). If you are installing a private cluster, you use IBM Cloud(R) DNS Services (DNS Services). 2.4. Using IBM Cloud Internet Services for DNS resolution The installation program uses IBM Cloud(R) Internet Services (CIS) to configure cluster DNS resolution and provide name lookup for a public cluster. Note This offering does not support IPv6, so dual stack or IPv6 environments are not possible. You must create a domain zone in CIS in the same account as your cluster. You must also ensure the zone is authoritative for the domain. You can do this using a root domain or subdomain. Prerequisites You have installed the IBM Cloud(R) CLI . You have an existing domain and registrar. For more information, see the IBM(R) documentation . Procedure Create a CIS instance to use with your cluster: Install the CIS plugin: USD ibmcloud plugin install cis Log in to IBM Cloud(R) by using the CLI: USD ibmcloud login Create the CIS instance: USD ibmcloud cis instance-create <instance_name> standard-next 1 1 At a minimum, you require a Standard plan for CIS to manage the cluster subdomain and its DNS records. Note After you have configured your registrar or DNS provider, it can take up to 24 hours for the changes to take effect. Connect an existing domain to your CIS instance: Set the context instance for CIS: USD ibmcloud cis instance-set <instance_CRN> 1 1 The instance CRN (Cloud Resource Name). For example: ibmcloud cis instance-set crn:v1:bluemix:public:power-iaas:osa21:a/65b64c1f1c29460d8c2e4bbfbd893c2c:c09233ac-48a5-4ccb-a051-d1cfb3fc7eb5:: Add the domain for CIS: USD ibmcloud cis domain-add <domain_name> 1 1 The fully qualified domain name. You can use either the root domain or subdomain value as the domain name, depending on which you plan to configure. Note A root domain uses the form openshiftcorp.com . A subdomain uses the form clusters.openshiftcorp.com . Open the CIS web console , navigate to the Overview page, and note your CIS name servers. These name servers will be used in the next step. Configure the name servers for your domains or subdomains at the domain's registrar or DNS provider. For more information, see the IBM Cloud(R) documentation . 2.5. IBM Cloud IAM Policies and API Key To install OpenShift Container Platform into your IBM Cloud(R) account, the installation program requires an IAM API key, which provides authentication and authorization to access IBM Cloud(R) service APIs. You can use an existing IAM API key that contains the required policies or create a new one. For an IBM Cloud(R) IAM overview, see the IBM Cloud(R) documentation . 2.5.1. Pre-requisite permissions Table 2.1. Pre-requisite permissions Role Access Viewer, Operator, Editor, Administrator, Reader, Writer, Manager Internet Services service in <resource_group> resource group Viewer, Operator, Editor, Administrator, User API key creator, Service ID creator IAM Identity Service service Viewer, Operator, Administrator, Editor, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service in <resource_group> resource group Viewer Resource Group: Access to view the resource group itself. The resource type should equal Resource group , with a value of <your_resource_group_name>. 2.5.2. Cluster-creation permissions Table 2.2.
Cluster-creation permissions Role Access Viewer <resource_group> (Resource Group Created for Your Team) Viewer, Operator, Editor, Reader, Writer, Manager All service in Default resource group Viewer, Reader Internet Services service Viewer, Operator, Reader, Writer, Manager, Content Reader, Object Reader, Object Writer, Editor Cloud Object Storage service Viewer Default resource group: The resource type should equal Resource group , with a value of Default . If your account administrator changed your account's default resource group to something other than Default, use that value instead. Viewer, Operator, Editor, Reader, Manager IBM Power(R) Virtual Server service in <resource_group> resource group Viewer, Operator, Editor, Reader, Writer, Manager, Administrator Internet Services service in <resource_group> resource group: CIS functional scope string equals reliability Viewer, Operator, Editor Direct Link service Viewer, Operator, Editor, Administrator, Reader, Writer, Manager, Console Administrator VPC Infrastructure Services service <resource_group> resource group 2.5.3. Access policy assignment In IBM Cloud(R) IAM, access policies can be attached to different subjects: Access group (Recommended) Service ID User The recommended method is to define IAM access policies in an access group . This helps organize all the access required for OpenShift Container Platform and enables you to onboard users and service IDs to this group. You can also assign access to users and service IDs directly, if desired. 2.5.4. Creating an API key You must create a user API key or a service ID API key for your IBM Cloud(R) account. Prerequisites You have assigned the required access policies to your IBM Cloud(R) account. You have attached your IAM access policies to an access group, or other appropriate resource. Procedure Create an API key, depending on how you defined your IAM access policies. For example, if you assigned your access policies to a user, you must create a user API key . If you assigned your access policies to a service ID, you must create a service ID API key . If your access policies are assigned to an access group, you can use either API key type. For more information on IBM Cloud(R) API keys, see Understanding API keys . 2.6. Supported IBM Power Virtual Server regions and zones You can deploy an OpenShift Container Platform cluster to the following regions: dal (Dallas, USA) dal10 dal12 us-east (Washington DC, USA) us-east eu-de (Frankfurt, Germany) eu-de-1 eu-de-2 lon (London, UK) lon04 lon06 osa (Osaka, Japan) osa21 sao (Sao Paulo, Brazil) sao01 syd (Sydney, Australia) syd04 tok (Tokyo, Japan) tok04 tor (Toronto, Canada) tor01 You might optionally specify the IBM Cloud(R) region in which the installer will create any VPC components. Supported regions in IBM Cloud(R) are: us-south eu-de eu-gb jp-osa au-syd br-sao ca-tor jp-tok 2.7. Next steps Creating an IBM Power(R) Virtual Server workspace
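As an illustrative sketch of the API key step in Section 2.5.4, a user API key can be created from the CLI as follows. The key name and output file are hypothetical choices, not required values:
# Log in, then create a user API key and keep the JSON output for the installation program
ibmcloud login --sso
ibmcloud iam api-key-create ocp-powervs-key -d "OpenShift on Power VS installer key" --file ocp-powervs-key.json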
[ "ibmcloud plugin install cis", "ibmcloud login", "ibmcloud cis instance-create <instance_name> standard-next 1", "ibmcloud cis instance-set <instance_CRN> 1", "ibmcloud cis domain-add <domain_name> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power_virtual_server/installing-ibm-cloud-account-power-vs
Chapter 35. Kamelet
Chapter 35. Kamelet Both producer and consumer are supported The Kamelet Component provides support for interacting with the Camel Route Template engine using Endpoint semantic. 35.1. URI format kamelet:templateId/routeId[?options] 35.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 35.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 35.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 35.3. Component Options The Kamelet component supports 9 options, which are listed below. Name Description Default Type location (common) The location(s) of the Kamelets on the file system. Multiple locations can be set separated by comma. classpath:/kamelets String routeProperties (common) Set route local parameters. Map templateProperties (common) Set template local parameters. Map bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean block (producer) If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean routeTemplateLoaderListener (advanced) Autowired To plugin a custom listener for when the Kamelet component is loading Kamelets from external resources. RouteTemplateLoaderListener 35.4. Endpoint Options The Kamelet endpoint is configured using URI syntax: with the following path and query parameters: 35.4.1. Path Parameters (2 parameters) Name Description Default Type templateId (common) Required The Route Template ID. String routeId (common) The Route ID. Default value notice: The ID will be auto-generated if not provided. String 35.4.2. Query Parameters (8 parameters) Name Description Default Type location (common) Location of the Kamelet to use which can be specified as a resource from file system, classpath etc. The location cannot use wildcards, and must refer to a file including extension, for example file:/etc/foo-kamelet.xml. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern block (producer) If sending a message to a direct endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true boolean failIfNoConsumers (producer) Whether the producer should fail by throwing an exception, when sending to a kamelet endpoint with no active consumers. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean timeout (producer) The timeout value to use if block is enabled. 30000 long Note The kamelet endpoint is lenient , which means that the endpoint accepts additional parameters that are passed to the engine and consumed upon route materialization. 35.5. 
Discovery If a Route Template is not found, the kamelet endpoint tries to load the related kamelet definition from the file system (by default classpath:/kamelets ). The default resolution mechanism expects kamelet files to have the extension .kamelet.yaml . 35.6. Samples Kamelets can be used as if they were standard Camel components. For example, suppose that we have created a Route Template as follows: routeTemplate("setMyBody") .templateParameter("bodyValue") .from("kamelet:source") .setBody().constant("{{bodyValue}}"); Note To let the Kamelet component wire the materialized route to the caller processor, we need to be able to identify the input and output endpoints of the route. This is done by using kamelet:source to mark the input endpoint and kamelet:sink for the output endpoint. Then the template can be instantiated and invoked as shown below: from("direct:setMyBody") .to("kamelet:setMyBody?bodyValue=myKamelet"); Behind the scenes, the Kamelet component does the following things: It instantiates a route out of the Route Template identified by the given templateId path parameter (in this case setMyBody ). It will act like the direct component and connect the current route to the materialized one. If you had to do it programmatically, it would have been something like: routeTemplate("setMyBody") .templateParameter("bodyValue") .from("direct:{{foo}}") .setBody().constant("{{bodyValue}}"); TemplatedRouteBuilder.builder(context, "setMyBody") .parameter("foo", "bar") .parameter("bodyValue", "myKamelet") .add(); from("direct:template") .to("direct:bar"); 35.7. Spring Boot Auto-Configuration When using kamelet with Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kamelet-starter</artifactId> </dependency> The component supports 10 options, which are listed below. Name Description Default Type camel.component.kamelet.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kamelet.block If sending a message to a kamelet endpoint which has no active consumer, then we can tell the producer to block and wait for the consumer to become active. true Boolean camel.component.kamelet.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kamelet.enabled Whether to enable auto configuration of the kamelet component. This is enabled by default. Boolean camel.component.kamelet.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started.
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kamelet.location The location(s) of the Kamelets on the file system. Multiple locations can be set separated by comma. classpath:/kamelets String camel.component.kamelet.route-properties Set route local parameters. Map camel.component.kamelet.route-template-loader-listener To plugin a custom listener for when the Kamelet component is loading Kamelets from external resources. The option is a org.apache.camel.spi.RouteTemplateLoaderListener type. RouteTemplateLoaderListener camel.component.kamelet.template-properties Set template local parameters. Map camel.component.kamelet.timeout The timeout value to use if block is enabled. 30000 Long
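As an illustration of the discovery mechanism described in Section 35.5, the following sketch shows where the default location ( classpath:/kamelets ) typically lives in a Maven-based project. The directory layout and the set-my-body name are hypothetical examples, not part of any particular distribution.
$ mkdir -p src/main/resources/kamelets
$ ls src/main/resources/kamelets
set-my-body.kamelet.yaml
# An endpoint such as kamelet:set-my-body is resolved against this file
# (the templateId plus the .kamelet.yaml extension) when no matching Route Template is already registered.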
[ "kamelet:templateId/routeId[?options]", "kamelet:templateId/routeId", "routeTemplate(\"setMyBody\") .templateParameter(\"bodyValue\") .from(\"kamelet:source\") .setBody().constant(\"{{bodyValue}}\");", "from(\"direct:setMyBody\") .to(\"kamelet:setMyBody?bodyValue=myKamelet\");", "routeTemplate(\"setMyBody\") .templateParameter(\"bodyValue\") .from(\"direct:{{foo}}\") .setBody().constant(\"{{bodyValue}}\"); TemplatedRouteBuilder.builder(context, \"setMyBody\") .parameter(\"foo\", \"bar\") .parameter(\"bodyValue\", \"myKamelet\") .add(); from(\"direct:template\") .to(\"direct:bar\");", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kamelet-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-kamelet-component-starter
8.2. Monitoring and Diagnosing Performance Problems
8.2. Monitoring and Diagnosing Performance Problems Red Hat Enterprise Linux 7 provides a number of tools that are useful for monitoring system performance and diagnosing performance problems related to I/O and file systems and their configuration. This section outlines the available tools and gives examples of how to use them to monitor and diagnose I/O and file system related performance issues. 8.2.1. Monitoring System Performance with vmstat Vmstat reports on processes, memory, paging, block I/O, interrupts, and CPU activity across the entire system. It can help administrators determine whether the I/O subsystem is responsible for any performance issues. The information most relevant to I/O performance is in the following columns: si Swap in, or reads from swap space, in KB. so Swap out, or writes to swap space, in KB. bi Block in, or blocks received from a block device (block read operations), in KB. bo Block out, or blocks sent to a block device (block write operations), in KB. wa The percentage of CPU time spent waiting for I/O operations to complete. Swap in and swap out are particularly useful when your swap space and your data are on the same device, and as indicators of memory usage. Additionally, the free, buff, and cache columns can help identify write-back frequency. A sudden drop in cache values and an increase in free values indicates that write-back and page cache invalidation have begun. If analysis with vmstat shows that the I/O subsystem is responsible for reduced performance, administrators can use iostat to determine the responsible I/O device. vmstat is provided by the procps-ng package. For detailed information about using vmstat , see the man page: 8.2.2. Monitoring I/O Performance with iostat Iostat is provided by the sysstat package. It reports on I/O device load in your system. If analysis with vmstat shows that the I/O subsystem is responsible for reduced performance, you can use iostat to determine the I/O device responsible. You can focus the output of iostat reports on a specific device by using the parameters defined in the iostat man page: 8.2.2.1. Detailed I/O Analysis with blktrace Blktrace provides detailed information about how time is spent in the I/O subsystem. The companion utility blkparse reads the raw output from blktrace and produces a human readable summary of input and output operations recorded by blktrace . For more detailed information about this tool, see the blktrace (8) and blkparse (1) man pages: 8.2.2.2. Analyzing blktrace Output with btt The btt utility is provided as part of the blktrace package. It analyzes blktrace output and displays the amount of time that data spends in each area of the I/O stack, making it easier to spot bottlenecks in the I/O subsystem. Some of the important events tracked by the blktrace mechanism and analyzed by btt are: Queuing of the I/O event ( Q ) Dispatch of the I/O to the driver event ( D ) Completion of I/O event ( C ) You can include or exclude factors involved with I/O performance issues by examining combinations of events. To inspect the timing of sub-portions of each I/O device, look at the timing between captured blktrace events for the I/O device. For example, the following command reports the total amount of time spent in the lower part of the kernel I/O stack ( Q2C ), which includes scheduler, driver, and hardware layers, as an average under await time: If the device takes a long time to service a request ( D2C ), the device may be overloaded, or the workload sent to the device may be sub-optimal.
If block I/O is queued for a long time before being dispatched to the storage device ( Q2G ), it may indicate that the storage in use is unable to serve the I/O load. For example, a LUN queue full condition has been reached and is preventing the I/O from being dispatched to the storage device. Looking at the timing across adjacent I/O can provide insight into some types of bottleneck situations. For example, if btt shows that the time between requests being sent to the block layer ( Q2Q ) is larger than the total time that requests spent in the block layer ( Q2C ), this indicates that there is idle time between I/O requests and the I/O subsystem may not be responsible for performance issues. Comparing Q2C values across adjacent I/O can show the amount of variability in storage service time. The values can be either: fairly consistent with a small range, or highly variable in the distribution range, which indicates a possible storage device side congestion issue. For more detailed information about this tool, see the btt (1) man page: 8.2.2.3. Analyzing blktrace Output with iowatcher The iowatcher tool can use blktrace output to graph I/O over time. It focuses on the Logical Block Address (LBA) of disk I/O, throughput in megabytes per second, the number of seeks per second, and I/O operations per second. This can help to identify when you are hitting the operations-per-second limit of a device. For more detailed information about this tool, see the iowatcher (1) man page. 8.2.3. Storage Monitoring with SystemTap The Red Hat Enterprise Linux 7 SystemTap Beginners Guide includes several sample scripts that are useful for profiling and monitoring storage performance. The following SystemTap example scripts relate to storage performance and may be useful in diagnosing storage or file system performance problems. By default they are installed to the /usr/share/doc/systemtap-client/examples/io directory. disktop.stp Checks the status of reading/writing disk every 5 seconds and outputs the top ten entries during that period. iotime.stp Prints the amount of time spent on read and write operations, and the number of bytes read and written. traceio.stp Prints the top ten executables based on cumulative I/O traffic observed, every second. traceio2.stp Prints the executable name and process identifier as reads and writes to the specified device occur. inodewatch.stp Prints the executable name and process identifier each time a read or write occurs to the specified inode on the specified major/minor device. inodewatch2.stp Prints the executable name, process identifier, and attributes each time the attributes are changed on the specified inode on the specified major/minor device.
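The following is a sketch of the capture-and-analyze workflow described in Sections 8.2.2.1 and 8.2.2.2. The device name /dev/sda, the 30-second capture window, and the output file names are assumptions; substitute values that match your system.
$ blktrace -d /dev/sda -w 30 -o sda_trace    # capture block layer events for 30 seconds
$ blkparse -i sda_trace -d sda_trace.bin     # human-readable listing plus a binary dump for btt
$ btt -i sda_trace.bin                       # per-phase timing summary (Q2Q, Q2G, D2C, Q2C)
Comparing the D2C and Q2C averages in the btt output with the await values reported by iostat -x helps confirm whether time is being spent in the device itself or higher in the I/O stack.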
[ "man vmstat", "man iostat", "man blktrace", "man blkparse", "iostat -x [...] Device: await r_await w_await vda 16.75 0.97 162.05 dm-0 30.18 1.13 223.45 dm-1 0.14 0.14 0.00 [...]", "man btt" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Storage_and_File_Systems-Monitoring_and_diagnosing_performance_problems
Chapter 2. Installing and configuring the resource-optimization components
Chapter 2. Installing and configuring the resource-optimization components Installing resource optimization involves installing packages, configuring settings and enabling local services. This can be done manually, or with an Ansible playbook provided by Red Hat. Note Pay as you go (PAYG) customers need to register the Insights client with subscription-manager (RHSM). There are two ways to register with subscription-manager: Using activation keys (recommended) Using your user name and password For more information about how to register the Insights client, refer to Client Configuration Guide for Red Hat Insights . Table 2.1. Compatibility information RHEL Versions Cloud Provider Resource Optimization Compatibility 8.x-9.x AWS Yes (x86_64 and ARM 64-bit) 7.7-7.9 AWS Yes (x86_64 and ARM 64-bit) 7.0-7.6 AWS No 6.x AWS No Prerequisites The following applications and configurations need to be installed or confirmed before the resource optimization service can be used: Cloud marketplace RHEL instance is configured. The Insights client is installed on the system and is operational. If you want to use Ansible to install or uninstall the resource optimization service: The Ansible repository is enabled and the Ansible client is installed on each system. The system administrator can run Ansible Playbooks. 2.1. Installing resource-optimization components There are a few options for installing resource-optimization components. Choose whichever works with your Ansible workflow. 2.1.1. Installing Ansible and running the resource-optimization installation playbook The use of Ansible is recommended to expedite the installation process. This procedure installs the Ansible client and runs the Ansible Playbook on your system. Cloud marketplace images on Amazon Web Services (AWS) are configured to use repositories hosted by the cloud provider. Currently, these repositories do not contain the Ansible client, so you must perform the following steps to enable the Ansible repository on your cloud marketplace - managed RHEL system. Note On RHEL 8.6 and later, and RHEL 9.0, Red Hat recommends using Ansible Core. For more information, see Updates to using Ansible in RHEL 8.6 and 9.0 . Prerequisites On RHEL 8, the Ansible repository is enabled. Procedure on RHEL 8 Install Ansible: Procedure on RHEL 7 Enable the Subscription-Manager repository and register the system Optionally, attach your system to a subscription pool Enable the required Ansible repository. Install Ansible: If you are using RHEL PAYG and want to use RHUI update servers only, disable the Subscription-Manager repository: 2.1.2. Installing resource optimization when Ansible is already installed Once Ansible is installed, proceed to complete the installation of the resource optimization service. Procedure Download the Ansible Playbook with the following command: Set localhost in Ansible inventory by appending the line localhost to /etc/ansible/hosts . Run the Ansible Playbook: The system will show in Insights immediately in a "Waiting for data" state, and data and suggestions will be available the day after registering. Verification step Data files with a timestamp will appear under /var/log/pcp/pmlogger/ros and after a few minutes, you can verify metrics are being collected: 2.1.3. Installing resource optimization without installing or using Ansible Procedure If you choose not to use Ansible for installation, use the following manual installation procedure: . Ensure the latest version of insights-client is installed. 
Set core_collect=True in /etc/insights-client/insights-client.conf Install the Performance Co-Pilot (PCP) toolkit. Create the PCP configuration file /var/lib/pcp/config/pmlogger/config.ros with this content: To configure pmlogger to gather the metrics required by resource optimization, add this line to /etc/pcp/pmlogger/control.d/local : Note In previous versions of this procedure, this line began with LOCALHOSTNAME n y . The procedure now advises that you use LOCALHOSTNAME n n , which disables the usage of pmsocks . For more information about pmsocks , refer to the man page for pmsocks . Start and enable the required PCP services. Re-register insights-client and upload the archive. The system will show in Insights immediately in a "Waiting for data" state, and data and suggestions will be available the day after registering. Verification step Data files with a timestamp will appear under /var/log/pcp/pmlogger/ros and after a few minutes, you can verify metrics are being collected: 2.2. Enabling Kernel Pressure Stall Information (PSI) PSI provides a canonical way to see resource pressure increases as they develop. There are pressure metrics for three major resources: memory, CPU, and input/output (I/O). PSI is available on RHEL 8 and newer versions, and is disabled by default. When PSI is enabled, the resource optimization service can augment its findings and provide more details and better suggestions. Enabling PSI is strongly recommended to identify peaks. Procedure Edit the /etc/default/grub file and append psi=1 at the end of the GRUB_CMDLINE_LINUX line (mind the quotes). Regenerate the grub configuration file. Reboot the system. Note Enabling PSI incurs a slight (<1%) performance hit. Verification step When PSI is enabled, files for CPU, memory and IO appear under /proc/pressure . 2.3. Enabling notifications and integrations in the resource optimization service You can enable the notifications service on Red Hat Hybrid Cloud Console to send notifications whenever the resource optimization service detects an issue and generates a suggestion. Using the notifications service frees you from having to continually check the Red Hat Insights for Red Hat Enterprise Linux dashboard for recommendations. For example, you can configure the notifications service to automatically send an email message whenever the resource optimization service generates a suggestion. Enabling the notifications service requires three main steps: First, an Organization Administrator creates a User access group with the Notifications administrator role, and then adds account members to the group. Next, a Notifications administrator sets up behavior groups for events in the notifications service. Behavior groups specify the delivery method for each notification. For example, a behavior group can specify whether email notifications are sent to all users, or just to Organization administrators. Finally, users who receive email notifications from events must set their user preferences so that they receive individual emails for each event. In addition to sending email messages, you can configure the notifications service to pull event data in other ways: Using an authenticated client to query Red Hat Insights APIs for event data. Using webhooks to send events to third-party applications that accept inbound requests. Integrating notifications with applications such as Splunk to route resource optimization recommendations to the application dashboard.
Additional resources For more information about how to set up notifications for resource optimization recommendations, see Configuring notifications on the Red Hat Hybrid Cloud Console and Integrating the Red Hat Hybrid Cloud Console with third-party applications .
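For convenience, the following sketch collects the verification steps from Sections 2.1 and 2.2 into a single place. It assumes the PCP services and the Insights client have already been set up as described above, and that the system has been rebooted after enabling PSI.
$ systemctl status pmcd pmlogger                                    # both services should be active
$ ls -l /var/log/pcp/pmlogger/ros                                   # timestamped archive files appear here
$ pmlogsummary /var/log/pcp/pmlogger/ros/                           # confirms metrics are being collected
$ cat /proc/pressure/cpu /proc/pressure/memory /proc/pressure/io    # present only when PSI is enabled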
[ "yum install ansible-core -y", "subscription-manager config --rhsm.manage_repos=1 subscription-manager register", "subscription-manager attach --pool xxxxxxxx", "subscription-manager repos --enable=rhel-7-server-ansible-2.9-rpms", "yum install ansible -y", "subscription-manager config --rhsm.manage_repos=0", "curl -O https://raw.githubusercontent.com/RedHatInsights/ros-backend/v2.0/ansible-playbooks/ros_install_and_set_up.yml", "ansible-playbook -c local ros_install_and_set_up.yml", "ls -l /var/log/pcp/pmlogger/ros pmlogsummary /var/log/pcp/pmlogger/ros/", "yum update insights-client", "sudo yum install pcp", "log mandatory on default { hinv.ncpu mem.physmem mem.util.available disk.dev.total kernel.all.cpu.idle kernel.all.pressure.cpu.some.avg kernel.all.pressure.io.full.avg kernel.all.pressure.io.some.avg kernel.all.pressure.memory.full.avg kernel.all.pressure.memory.some.avg } [access] disallow .* : all; disallow :* : all; allow local:* : enquire;", "LOCALHOSTNAME n n PCP_LOG_DIR/pmlogger/ros -r -T24h10m -c config.ros -v 100Mb", "sudo systemctl enable pmcd pmlogger sudo systemctl start pmcd pmlogger", "sudo insights-client --register", "ls -l /var/log/pcp/pmlogger/ros pmlogsummary /var/log/pcp/pmlogger/ros/", "sudo grub2-mkconfig -o /boot/grub2/grub.cfg" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_rhel_resource_optimization_with_insights_for_red_hat_enterprise_linux/assembly-ros-install
Chapter 10. DeploymentConfigRollback [apps.openshift.io/v1]
Chapter 10. DeploymentConfigRollback [apps.openshift.io/v1] Description DeploymentConfigRollback provides the input to rollback generation. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required name spec 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds name string Name of the deployment config that will be rolled back. spec object DeploymentConfigRollbackSpec represents the options for rollback generation. updatedAnnotations object (string) UpdatedAnnotations is a set of new annotations that will be added in the deployment config. 10.1.1. .spec Description DeploymentConfigRollbackSpec represents the options for rollback generation. Type object Required from includeTriggers includeTemplate includeReplicationMeta includeStrategy Property Type Description from ObjectReference From points to a ReplicationController which is a deployment. includeReplicationMeta boolean IncludeReplicationMeta specifies whether to include the replica count and selector. includeStrategy boolean IncludeStrategy specifies whether to include the deployment Strategy. includeTemplate boolean IncludeTemplate specifies whether to include the PodTemplateSpec. includeTriggers boolean IncludeTriggers specifies whether to include config Triggers. revision integer Revision to rollback to. If set to 0, rollback to the last revision. 10.2. API endpoints The following API endpoints are available: /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/rollback POST : create rollback of a DeploymentConfig 10.2.1. /apis/apps.openshift.io/v1/namespaces/{namespace}/deploymentconfigs/{name}/rollback Table 10.1. Global path parameters Parameter Type Description name string name of the DeploymentConfigRollback Table 10.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create rollback of a DeploymentConfig Table 10.3. Body parameters Parameter Type Description body DeploymentConfigRollback schema Table 10.4. HTTP responses HTTP code Response body 200 - OK DeploymentConfigRollback schema 201 - Created DeploymentConfigRollback schema 202 - Accepted DeploymentConfigRollback schema 401 - Unauthorized Empty
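The following is a minimal sketch of calling this endpoint with curl. The API server URL, bearer token, namespace ( demo ), DeploymentConfig name ( frontend ), and ReplicationController name ( frontend-2 ) are hypothetical placeholders; the request body simply populates the required fields documented above.
$ curl -k -X POST \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    https://api.example.com:6443/apis/apps.openshift.io/v1/namespaces/demo/deploymentconfigs/frontend/rollback \
    -d '{
          "kind": "DeploymentConfigRollback",
          "apiVersion": "apps.openshift.io/v1",
          "name": "frontend",
          "spec": {
            "from": { "kind": "ReplicationController", "name": "frontend-2" },
            "includeTriggers": false,
            "includeTemplate": true,
            "includeReplicationMeta": false,
            "includeStrategy": false
          }
        }'
A successful request returns one of the response codes listed in Table 10.4.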
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/workloads_apis/deploymentconfigrollback-apps-openshift-io-v1
Chapter 10. Migrating your applications
Chapter 10. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or from the command line . You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. During migration, MTC preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 10.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure internal registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Internal images If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster. You can manually update an image stream tag in order to use a deprecated OpenShift Container Platform 3 image on an OpenShift Container Platform 4.9 cluster. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 3 cluster: 8443 (API server) 443 (routes) 53 (DNS) You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. Additional resources for migration prerequisites Manually exposing a secure registry for OpenShift Container Platform 3 Updating deprecated internal images 10.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 10.2.1. 
Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 10.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc sa get-token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. 
To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. Require SSL verification : Optional: Select this option to verify SSL connections to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 10.2.3. Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 10.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. 
You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click Next . Select the projects for migration. Optional: Click the edit icon beside a project to change the target namespace. Click Next . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click Next . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click Next . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click Next . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace .
Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources MTC file system copy method MTC snapshot copy method 10.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
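In addition to the web console checks above, you can spot-check the result from the command line. The my-app namespace below is a hypothetical example, and the first command assumes the default openshift-migration namespace used by MTC.
$ oc get migmigration -n openshift-migration    # overall status of the migrations run from the plan
$ oc get pods -n my-app                         # pods should be Running in the migrated namespace
$ oc get pvc -n my-app                          # persistent volume claims should be Bound
$ oc get pv | grep my-app                       # migrated PVs are set to the Retain reclaim policy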
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc sa get-token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/migrating_from_version_3_to_4/migrating-applications-3-4
Chapter 5. AWS 2 Lambda
Chapter 5. AWS 2 Lambda Only producer is supported The AWS2 Lambda component supports create, get, list, delete and invoke AWS Lambda functions. Prerequisites You must have a valid Amazon Web Services developer account, and be signed up to use Amazon Lambda. More information is available at AWS Lambda . When creating a Lambda function, you need to specify a IAM role which has at least the AWSLambdaBasicExecuteRole policy attached. 5.1. URI Format You can append query options to the URI in the following format, options=value&option2=value&... 5.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 5.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 5.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 5.3. Component Options The AWS Lambda component supports 16 options, which are listed below. Name Description Default Type configuration (producer) Component configuration. Lambda2Configuration lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean operation (producer) The operation to perform. It can be listFunctions, getFunction, createFunction, deleteFunction or invokeFunction. Enum values: listFunctions getFunction createAlias deleteAlias getAlias listAliases createFunction deleteFunction invokeFunction updateFunction createEventSourceMapping deleteEventSourceMapping listEventSourceMapping listTags tagResource untagResource publishVersion listVersions invokeFunction Lambda2Operations overrideEndpoint (producer) Set the need for overidding the endpoint. 
This option needs to be used in combination with uriEndpointOverride option. false boolean pojoRequest (producer) If we want to use a POJO request as body or not. false boolean region (producer) The region in which Lambda client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the Lambda client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean awsLambdaClient (advanced) Autowired To use a existing configured AwsLambdaClient as client. LambdaClient proxyHost (proxy) To define a proxy host when instantiating the Lambda client. String proxyPort (proxy) To define a proxy port when instantiating the Lambda client. Integer proxyProtocol (proxy) To define a proxy protocol when instantiating the Lambda client. Enum values: HTTP HTTPS HTTPS Protocol accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String 5.4. Endpoint Options The AWS Lambda endpoint is configured using URI syntax: with the following path and query parameters: 5.4.1. Path Parameters (1 parameters) Name Description Default Type function (producer) Required Name of the Lambda function. String 5.4.2. Query Parameters (14 parameters) Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean operation (producer) The operation to perform. It can be listFunctions, getFunction, createFunction, deleteFunction or invokeFunction. Enum values: listFunctions getFunction createAlias deleteAlias getAlias listAliases createFunction deleteFunction invokeFunction updateFunction createEventSourceMapping deleteEventSourceMapping listEventSourceMapping listTags tagResource untagResource publishVersion listVersions invokeFunction Lambda2Operations overrideEndpoint (producer) Set the need for overidding the endpoint. This option needs to be used in combination with uriEndpointOverride option. false boolean pojoRequest (producer) If we want to use a POJO request as body or not. false boolean region (producer) The region in which Lambda client needs to work. 
When using this parameter, the configuration will expect the lowercase name of the region (for example ap-east-1) You'll need to use the name Region.EU_WEST_1.id(). String trustAllCertificates (producer) If we want to trust all certificates in case of overriding the endpoint. false boolean uriEndpointOverride (producer) Set the overriding uri endpoint. This option needs to be used in combination with overrideEndpoint option. String useDefaultCredentialsProvider (producer) Set whether the Lambda client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false boolean awsLambdaClient (advanced) Autowired To use a existing configured AwsLambdaClient as client. LambdaClient proxyHost (proxy) To define a proxy host when instantiating the Lambda client. String proxyPort (proxy) To define a proxy port when instantiating the Lambda client. Integer proxyProtocol (proxy) To define a proxy protocol when instantiating the Lambda client. Enum values: HTTP HTTPS HTTPS Protocol accessKey (security) Amazon AWS Access Key. String secretKey (security) Amazon AWS Secret Key. String Required Lambda component options You have to provide the awsLambdaClient in the Registry or your accessKey and secretKey to access the Amazon Lambda service.. 5.5. Usage 5.5.1. Static credentials vs Default Credential Provider You have the possibility of avoiding the usage of explicit static credentials, by specifying the useDefaultCredentialsProvider option and set it to true. Java system properties - aws.accessKeyId and aws.secretKey Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. Web Identity Token from AWS STS. The shared credentials and config files. Amazon ECS container credentials - loaded from the Amazon ECS if the environment variable AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set. Amazon EC2 Instance profile credentials. For more information about this you can look at AWS credentials documentation 5.5.2. Message headers evaluated by the Lambda producer Operation Header Type Description Required All CamelAwsLambdaOperation String The operation we want to perform. Override operation passed as query parameter Yes createFunction CamelAwsLambdaS3Bucket String Amazon S3 bucket name where the .zip file containing your deployment package is stored. This bucket must reside in the same AWS region where you are creating the Lambda function. No createFunction CamelAwsLambdaS3Key String The Amazon S3 object (the deployment package) key name you want to upload. No createFunction CamelAwsLambdaS3ObjectVersion String The Amazon S3 object (the deployment package) version you want to upload. No createFunction CamelAwsLambdaZipFile String The local path of the zip file (the deployment package). Content of zip file can also be put in Message body. No createFunction CamelAwsLambdaRole String The Amazon Resource Name (ARN) of the IAM role that Lambda assumes when it executes your function to access any other Amazon Web Services (AWS) resources. Yes createFunction CamelAwsLambdaRuntime String The runtime environment for the Lambda function you are uploading. (nodejs, nodejs4.3, nodejs6.10, java8, python2.7, python3.6, dotnetcore1.0, odejs4.3-edge) Yes createFunction CamelAwsLambdaHandler String The function within your code that Lambda calls to begin execution. For Node.js, it is the module-name.export value in your function. For Java, it can be package.class-name::handler or package.class-name. 
Yes createFunction CamelAwsLambdaDescription String The user-provided description. No createFunction CamelAwsLambdaTargetArn String The parent object that contains the target ARN (Amazon Resource Name) of an Amazon SQS queue or Amazon SNS topic. No createFunction CamelAwsLambdaMemorySize Integer The memory size, in MB, you configured for the function. Must be a multiple of 64 MB. No createFunction CamelAwsLambdaKMSKeyArn String The Amazon Resource Name (ARN) of the KMS key used to encrypt your function's environment variables. If not provided, AWS Lambda will use a default service key. No createFunction CamelAwsLambdaPublish Boolean This boolean parameter can be used to request AWS Lambda to create the Lambda function and publish a version as an atomic operation. No createFunction CamelAwsLambdaTimeout Integer The function execution time at which Lambda should terminate the function. The default is 3 seconds. No createFunction CamelAwsLambdaTracingConfig String Your function's tracing settings (Active or PassThrough). No createFunction CamelAwsLambdaEnvironmentVariables Map<String, String> The key-value pairs that represent your environment's configuration settings. No createFunction CamelAwsLambdaEnvironmentTags Map<String, String> The list of tags (key-value pairs) assigned to the new function. No createFunction CamelAwsLambdaSecurityGroupIds List<String> If your Lambda function accesses resources in a VPC, a list of one or more security groups IDs in your VPC. No createFunction CamelAwsLambdaSubnetIds List<String> If your Lambda function accesses resources in a VPC, a list of one or more subnet IDs in your VPC. No createAlias CamelAwsLambdaFunctionVersion String The function version to set in the alias Yes createAlias CamelAwsLambdaAliasFunctionName String The function name to set in the alias Yes createAlias CamelAwsLambdaAliasFunctionDescription String The function description to set in the alias No deleteAlias CamelAwsLambdaAliasFunctionName String The function name of the alias Yes getAlias CamelAwsLambdaAliasFunctionName String The function name of the alias Yes listAliases CamelAwsLambdaFunctionVersion String The function version to set in the alias No 5.6. List of Avalaible Operations listFunctions getFunction createFunction deleteFunction invokeFunction updateFunction createEventSourceMapping deleteEventSourceMapping listEventSourceMapping listTags tagResource untagResource publishVersion listVersions createAlias deleteAlias getAlias listAliases 5.7. Examples 5.7.1. Producer Example To have a full understanding of how the component works, you may have a look at these integration tests . 5.7.2. 
Producer Examples CreateFunction: this operation creates a function in AWS Lambda from("direct:createFunction").to("aws2-lambda://GetHelloWithName?operation=createFunction").to("mock:result"); and then by sending template.send("direct:createFunction", ExchangePattern.InOut, new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(Lambda2Constants.RUNTIME, "nodejs6.10"); exchange.getIn().setHeader(Lambda2Constants.HANDLER, "GetHelloWithName.handler"); exchange.getIn().setHeader(Lambda2Constants.DESCRIPTION, "Hello with node.js on Lambda"); exchange.getIn().setHeader(Lambda2Constants.ROLE, "arn:aws:iam::643534317684:role/lambda-execution-role"); ClassLoader classLoader = getClass().getClassLoader(); File file = new File( classLoader .getResource("org/apache/camel/component/aws2/lambda/function/node/GetHelloWithName.zip") .getFile()); FileInputStream inputStream = new FileInputStream(file); exchange.getIn().setBody(inputStream); } }); 5.8. Using a POJO as body Building an AWS request can sometimes be complex because of the number of options, so you can use a POJO as the message body instead. AWS Lambda supports multiple operations; for example, for a Get Function request you can do the following: from("direct:getFunction") .setBody(GetFunctionRequest.builder().functionName("test").build()) .to("aws2-lambda://GetHelloWithName?awsLambdaClient=#awsLambdaClient&operation=getFunction&pojoRequest=true") In this way, you pass the request directly without having to set headers and options specific to this operation. 5.9. Dependencies Maven users need to add the following dependency to their pom.xml. pom.xml <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-lambda</artifactId> <version>USD{camel-version}</version> </dependency> where USD{camel-version} must be replaced by the actual version of Camel. 5.10. Spring Boot Auto-Configuration When using aws2-lambda with Spring Boot, make sure to use the following Maven dependency to have support for auto-configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-lambda-starter</artifactId> </dependency> The component supports 17 options, which are listed below. Name Description Default Type camel.component.aws2-lambda.access-key Amazon AWS Access Key. String camel.component.aws2-lambda.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatically configuring JDBC data sources, JMS connection factories, AWS clients, and so on. true Boolean camel.component.aws2-lambda.aws-lambda-client To use an existing configured AwsLambdaClient as client. The option is a software.amazon.awssdk.services.lambda.LambdaClient type. LambdaClient camel.component.aws2-lambda.configuration Component configuration. The option is an org.apache.camel.component.aws2.lambda.Lambda2Configuration type. Lambda2Configuration camel.component.aws2-lambda.enabled Whether to enable auto configuration of the aws2-lambda component. This is enabled by default. Boolean camel.component.aws2-lambda.lazy-start-producer Whether the producer should be started lazily (on the first message).
Starting lazily allows CamelContext and routes to start up in situations where a producer may otherwise fail during startup and cause the route to fail to start. By deferring this startup, the startup failure can be handled while routing messages via Camel's routing error handlers. Note that when the first message is processed, creating and starting the producer may take a little time and prolong the total processing time. false Boolean camel.component.aws2-lambda.operation The operation to perform. It can be listFunctions, getFunction, createFunction, deleteFunction or invokeFunction. Lambda2Operations camel.component.aws2-lambda.override-endpoint Set the need for overriding the endpoint. This option needs to be used in combination with the uriEndpointOverride option. false Boolean camel.component.aws2-lambda.pojo-request If we want to use a POJO request as body or not. false Boolean camel.component.aws2-lambda.proxy-host To define a proxy host when instantiating the Lambda client. String camel.component.aws2-lambda.proxy-port To define a proxy port when instantiating the Lambda client. Integer camel.component.aws2-lambda.proxy-protocol To define a proxy protocol when instantiating the Lambda client. Protocol camel.component.aws2-lambda.region The region in which the Lambda client needs to work. When using this parameter, the configuration will expect the lowercase name of the region (for example, ap-east-1). You'll need to use the name Region.EU_WEST_1.id(). String camel.component.aws2-lambda.secret-key Amazon AWS Secret Key. String camel.component.aws2-lambda.trust-all-certificates If we want to trust all certificates in case of overriding the endpoint. false Boolean camel.component.aws2-lambda.uri-endpoint-override Set the overriding URI endpoint. This option needs to be used in combination with the overrideEndpoint option. String camel.component.aws2-lambda.use-default-credentials-provider Set whether the Lambda client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. false Boolean
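As a rough illustration of the default credentials provider behaviour described above (the credential values, the endpoint URI, and the launch command below are placeholders and assumptions, not values taken from this guide), the standard AWS environment variables can be exported before starting the application:
# Placeholder credentials for illustration only; the default credentials
# provider chain reads these environment variables when the Lambda client is created.
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=exampleSecretKey
# A route using an endpoint such as
#   aws2-lambda://GetHelloWithName?operation=getFunction&useDefaultCredentialsProvider=true
# can then authenticate without the accessKey/secretKey URI options.
mvn spring-boot:run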
[ "aws2-lambda://functionName[?options]", "aws2-lambda:function", "from(\"direct:createFunction\").to(\"aws2-lambda://GetHelloWithName?operation=createFunction\").to(\"mock:result\");", "template.send(\"direct:createFunction\", ExchangePattern.InOut, new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(Lambda2Constants.RUNTIME, \"nodejs6.10\"); exchange.getIn().setHeader(Lambda2Constants.HANDLER, \"GetHelloWithName.handler\"); exchange.getIn().setHeader(Lambda2Constants.DESCRIPTION, \"Hello with node.js on Lambda\"); exchange.getIn().setHeader(Lambda2Constants.ROLE, \"arn:aws:iam::643534317684:role/lambda-execution-role\"); ClassLoader classLoader = getClass().getClassLoader(); File file = new File( classLoader .getResource(\"org/apache/camel/component/aws2/lambda/function/node/GetHelloWithName.zip\") .getFile()); FileInputStream inputStream = new FileInputStream(file); exchange.getIn().setBody(inputStream); } });", "from(\"direct:getFunction\") .setBody(GetFunctionRequest.builder().functionName(\"test\").build()) .to(\"aws2-lambda://GetHelloWithName?awsLambdaClient=#awsLambdaClient&operation=getFunction&pojoRequest=true\")", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws2-lambda</artifactId> <version>USD{camel-version}</version> </dependency>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-aws2-lambda-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-aws2-lambda-component-starter
Preface
Preface Red Hat Quay container registry platform provides secure storage, distribution, and governance of containers and cloud-native artifacts on any infrastructure. It is available as a standalone component or as an Operator on OpenShift Container Platform. Red Hat Quay includes the following features and benefits: Granular security management Fast and robust at any scale High velocity CI/CD Automated installation and updates Enterprise authentication and team-based access control OpenShift Container Platform integration Red Hat Quay is regularly released, containing new features, bug fixes, and software updates. To upgrade Red Hat Quay for both standalone and OpenShift Container Platform deployments, see Upgrade Red Hat Quay . Important Red Hat Quay only supports rolling back, or downgrading, to z-stream versions, for example, 3.7.2 to 3.7.1. Rolling back to y-stream versions (3.7.0 to 3.6.0) is not supported. This is because Red Hat Quay updates might contain database schema upgrades that are applied when upgrading to a new version of Red Hat Quay. Database schema upgrades are not considered backwards compatible. Downgrading to z-streams is neither recommended nor supported by either Operator-based deployments or virtual machine-based deployments. Downgrading should only be done in extreme circumstances. The decision to roll back your Red Hat Quay deployment must be made in conjunction with the Red Hat Quay support and development teams. For more information, contact Red Hat Quay support. Documentation for Red Hat Quay is versioned with each release. The latest Red Hat Quay documentation is available from the Red Hat Quay Documentation page. Currently, version 3 is the latest major version. Note Prior to version 2.9.2, Red Hat Quay was called Quay Enterprise. Documentation for 2.9.2 and prior versions is archived on the Product Documentation for Red Hat Quay 2.9 page.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_release_notes/pr01
Chapter 23. Virtualization
Chapter 23. Virtualization USB 3.0 host adapter (xHCI) emulation, see the section called "USB 3.0 Support for KVM Guests" Open Virtual Machine Firmware (OVMF), see the section called "Open Virtual Machine Firmware" LPAR Watchdog for IBM System z, see the section called "LPAR Watchdog for IBM System z"
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-tp-virtualization
Chapter 40. Controlling the SCSI Command Timer and Device Status
Chapter 40. Controlling the SCSI Command Timer and Device Status The Linux SCSI layer sets a timer on each command. When this timer expires, the SCSI layer will quiesce the host bus adapter (HBA) and wait for all outstanding commands to either time out or complete. Afterwards, the SCSI layer will activate the driver's error handler. When the error handler is triggered, it attempts the following operations in order (until one successfully executes): Abort the command. Reset the device. Reset the bus. Reset the host. If all of these operations fail, the device will be set to the offline state. When this occurs, all I/O to that device fails until the problem is corrected and the user sets the device to running . The process is different, however, if a device uses the fibre channel protocol and the rport is blocked. In such cases, the drivers wait for several seconds for the rport to become online again before activating the error handler. This prevents devices from being set offline because of temporary transport problems. Device States To display the state of a device, use: To set a device to running state, use: Command Timer To control the command timer, you can write to /sys/block/ device-name /device/timeout . To do so, run: echo value > /sys/block/ device-name /device/timeout Here, value is the timeout value (in seconds) you want to implement.
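As a brief illustration of the interfaces described above (the device name sda and the 60-second value are examples only, not recommendations), the state and timer can be inspected and changed as follows:
# Show the current state and command timeout of the device, then adjust them.
cat /sys/block/sda/device/state       # prints running or offline
cat /sys/block/sda/device/timeout     # prints the current timeout in seconds
echo 60 > /sys/block/sda/device/timeout     # example: set a 60-second command timer
echo running > /sys/block/sda/device/state  # return an offline device to service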
[ "cat /sys/block/ device-name /device/state", "echo running > /sys/block/ device-name /device/state" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/scsi-command-timer-device-status
Chapter 7. Configuring the hostname (v2)
Chapter 7. Configuring the hostname (v2) 7.1. The importance of setting the hostname option By default, Red Hat build of Keycloak mandates the configuration of the hostname option and does not dynamically resolve URLs. This is a security measure. Red Hat build of Keycloak freely discloses its own URLs, for instance through the OIDC Discovery endpoint, or as part of the password reset link in an email. If the hostname was dynamically interpreted from a hostname header, it could provide a potential attacker with an opportunity to manipulate a URL in the email, redirect a user to the attacker's fake domain, and steal sensitive data such as action tokens, passwords, etc. By explicitly setting the hostname option, we avoid a situation where tokens could be issued by a fraudulent issuer. The server can be started with an explicit hostname using the following command: bin/kc.[sh|bat] start --hostname my.keycloak.org Note The examples start the Red Hat build of Keycloak instance in production mode, which requires a public certificate and private key in order to secure communications. For more information, refer to the Configuring Red Hat build of Keycloak for production . 7.2. Defining specific parts of the hostname option As demonstrated in the example, the scheme and port are not explicitly required. In such cases, Red Hat build of Keycloak automatically handles these aspects. For instance, the server would be accessible at https://my.keycloak.org:8443 in the given example. However, a reverse proxy will typically expose Red Hat build of Keycloak at the default ports, e.g. 443 . In that case it's desirable to specify the full URL in the hostname option rather than keeping the parts of the URL dynamic. The server can then be started with: bin/kc.[sh|bat] start --hostname https://my.keycloak.org Similarly, your reverse proxy might expose Red Hat build of Keycloak at a different context path. It is possible to configure Red Hat build of Keycloak to reflect that via the hostname and hostname-admin options. See the following example: bin/kc.[sh|bat] start --hostname https://my.keycloak.org:123/auth 7.3. Utilizing an internal URL for communication among clients Red Hat build of Keycloak has the capability to offer a separate URL for backchannel requests, enabling internal communication while maintaining the use of a public URL for frontchannel requests. Moreover, the backchannel is dynamically resolved based on incoming headers. Consider the following example: bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true In this manner, your applications, referred to as clients, can connect with Red Hat build of Keycloak through your local network, while the server remains publicly accessible at https://my.keycloak.org . 7.4. Using edge TLS termination As you can observe, the HTTPS protocol is the default choice, adhering to Red Hat build of Keycloak's commitment to security best practices. However, Red Hat build of Keycloak also provides the flexibility for users to opt for HTTP if necessary. This can be achieved simply by specifying the HTTP listener, consult the Configuring TLS for details. With an edge TLS-termination proxy you can start the server as follows: bin/kc.[sh|bat] start --hostname https://my.keycloak.org --http-enabled true The result of this configuration is that you can continue to access Red Hat build of Keycloak at https://my.keycloak.org via HTTPS, while the proxy interacts with the instance using HTTP and port 8080 . 7.5. 
Using a reverse proxy When a proxy is forwarding http or reencrypted TLS requests, the proxy-headers option should be set. Depending on the hostname settings, some or all of the URL, may be dynamically determined. Warning If either forwarded or xforwarded is selected, make sure your reverse proxy properly sets and overwrites the Forwarded or X-Forwarded-* headers respectively. To set these headers, consult the documentation for your reverse proxy. Misconfiguration will leave Red Hat build of Keycloak exposed to security vulnerabilities. 7.5.1. Fully dynamic URLs. For example if your reverse proxy correctly sets the Forwarded header, and you don't want to hardcode the hostname, Red Hat build of Keycloak can accommodate this. You simply need to initiate the server as follows: bin/kc.[sh|bat] start --hostname-strict false --proxy-headers forwarded With this configuration, the server respects the value set by the Forwarded header. This also implies that all endpoints are dynamically resolved. 7.5.2. Partially dynamic URLs The proxy-headers option can be also used to resolve the URL partially dynamically when the hostname option is not specified as a full URL. For example: bin/kc.[sh|bat] start --hostname my.keycloak.org --proxy-headers xforwarded In this case, scheme, and port are resolved dynamically from X-Forwarded-* headers, while hostname is statically defined as my.keycloak.org . 7.5.3. Fixed URLs The proxy-headers is still relevant even when the hostname is set to a full URL as the headers are used to determine the origin of the request. For example: bin/kc.[sh|bat] start --hostname https://my.keycloak.org --proxy-headers xforwarded In this case, while nothing is dynamically resolved from the X-Forwarded-* headers, the X-Forwarded-* headers are used to determine the correct origin of the request. 7.6. Exposing the Administration Console on a separate hostname If you wish to expose the Admin Console on a different host, you can do so with the following command: bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443 This allows you to access Red Hat build of Keycloak at https://my.keycloak.org and the Admin Console at https://admin.my.keycloak.org:8443 , while the backend continues to use https://my.keycloak.org . Note Keep in mind that hostname and proxy options do not change the ports on which the server listens. Instead it changes only the ports of static resources like JavaScript and CSS links, OIDC well-known endpoints, redirect URIs, etc. that will be used in front of the proxy. You need to use HTTP configuration options to change the actual ports the server is listening on. Refer to the All configuration for details. Warning Using the hostname-admin option does not prevent accessing the Administration REST API endpoints via the frontend URL specified by the hostname option. If you want to restrict access to the Administration REST API, you need to do it on the reverse proxy level. Administration Console implicitly accesses the API using the URL as specified by the hostname-admin option. 7.7. Background - server endpoints Red Hat build of Keycloak exposes several endpoints, each with a different purpose. They are typically used for communication among applications or for managing the server. We recognize 3 main endpoint groups: Frontend Backend Administration If you want to work with either of these endpoints, you need to set the base URL. The base URL consists of a several parts: a scheme (e.g. https protocol) a hostname (e.g. 
example.keycloak.org) a port (e.g. 8443) a path (e.g. /auth) The base URL for each group has an important impact on how tokens are issued and validated, on how links are created for actions that require the user to be redirected to Red Hat build of Keycloak (for example, when resetting password through email links), and, most importantly, how applications will discover these endpoints when fetching the OpenID Connect Discovery Document from realms/{realm-name}/.well-known/openid-configuration . 7.7.1. Frontend Users and applications use the frontend URL to access Red Hat build of Keycloak through a front channel. The front channel is a publicly accessible communication channel. For example browser-based flows (accessing the login page, clicking on the link to reset a password or binding the tokens) can be considered as frontchannel requests. In order to make Red Hat build of Keycloak accessible via the frontend URL, you need to set the hostname option: bin/kc.[sh|bat] start --hostname my.keycloak.org 7.7.2. Backend The backend endpoints are those accessible through a public domain or through a private network. They're related to direct backend communication between Red Hat build of Keycloak and a client (an application secured by Red Hat build of Keycloak). Such communication might be over a local network, avoiding a reverse proxy. Examples of the endpoints that belong to this group are the authorization endpoint, token and token introspection endpoint, userinfo endpoint, JWKS URI endpoint, etc. The default value of hostname-backchannel-dynamic option is false , which means that the backchannel URLs are same as the frontchannel URLs. Dynamic resolution of backchannel URLs from incoming request headers can be enabled by setting the following options: bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true Note that hostname option must be set to a URL. For more information, refer to the Section 7.9, "Validations" section below. 7.7.3. Administration Similarly to the base frontend URL, you can also set the base URL for resources and endpoints of the administration console. The server exposes the administration console and static resources using a specific URL. This URL is used for redirect URLs, loading resources (CSS, JS), Administration REST API etc. It can be done by setting the hostname-admin option: bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443 Again, the hostname option must be set to a URL. For more information, refer to the Section 7.9, "Validations" section below. 7.8. Sources for resolving the URL As indicated in the sections, URLs can be resolved in several ways: they can be dynamically generated, hardcoded, or a combination of both: Dynamic from an incoming request: Host header, scheme, server port, context path Proxy-set headers: Forwarded and X-Forwarded-* Hardcoded: Server-wide config (e.g hostname , hostname-admin , etc.) Realm configuration for frontend URL 7.9. Validations hostname URL and hostname-admin URL are verified that full URL is used, incl. scheme and hostname. Port is validated only if present, otherwise default port for given protocol is assumed (80 or 443). In production profile ( kc.sh|bat start ), either --hostname or --hostname-strict false must be explicitly configured. This does not apply for dev profile ( kc.sh|bat start-dev ) where --hostname-strict false is the default value. 
If --hostname is not configured: hostname-backchannel-dynamic must be set to false. hostname-strict must be set to false. If hostname-admin is configured, hostname must be set to a URL (not just a hostname). Otherwise, Red Hat build of Keycloak would not know the correct frontend URL (including port and so on) when accessing the Admin Console. If hostname-backchannel-dynamic is set to true, hostname must be set to a URL (not just a hostname). Otherwise, Red Hat build of Keycloak would not know the correct frontend URL (including port and so on) when being accessed via the dynamically resolved backchannel. Additionally, if hostname is configured, then hostname-strict is ignored. 7.10. Troubleshooting To troubleshoot the hostname configuration, you can use a dedicated debug tool, which can be enabled as follows: Red Hat build of Keycloak configuration: bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true After Red Hat build of Keycloak starts properly, open your browser and go to: http://mykeycloak:8080/realms/<your-realm>/hostname-debug By default, this endpoint is disabled (--hostname-debug=false). 7.11. Relevant options Table 7.1. Relevant options Value hostname Address at which the server is exposed. Can be a full URL, or just a hostname. When only hostname is provided, scheme, port and context path are resolved from the request. CLI: --hostname Env: KC_HOSTNAME Available only when hostname:v2 feature is enabled hostname-admin Address for accessing the administration console. Use this option if you are exposing the administration console using a reverse proxy on a different address than specified in the hostname option. CLI: --hostname-admin Env: KC_HOSTNAME_ADMIN Available only when hostname:v2 feature is enabled hostname-backchannel-dynamic Enables dynamic resolving of backchannel URLs, including hostname, scheme, port and context path. Set to true if your application accesses Keycloak via a private network. If set to true, the hostname option needs to be specified as a full URL. CLI: --hostname-backchannel-dynamic Env: KC_HOSTNAME_BACKCHANNEL_DYNAMIC Available only when hostname:v2 feature is enabled true , false (default) hostname-debug Toggles the hostname debug page that is accessible at /realms/master/hostname-debug. CLI: --hostname-debug Env: KC_HOSTNAME_DEBUG Available only when hostname:v2 feature is enabled true , false (default) hostname-strict Disables dynamically resolving the hostname from request headers. Should always be set to true in production, unless your reverse proxy overwrites the Host header. If enabled, the hostname option needs to be specified. CLI: --hostname-strict Env: KC_HOSTNAME_STRICT Available only when hostname:v2 feature is enabled true (default), false
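As a quick sanity check of the resolved URLs (assuming curl is available on a client machine and the server was started with --hostname https://my.keycloak.org as in the earlier examples), the OIDC discovery document can be inspected:
# The issuer and endpoint URLs in the output should reflect the configured hostname.
curl -s https://my.keycloak.org/realms/master/.well-known/openid-configuration | grep '"issuer"'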
[ "bin/kc.[sh|bat] start --hostname my.keycloak.org", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org:123/auth", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --http-enabled true", "bin/kc.[sh|bat] start --hostname-strict false --proxy-headers forwarded", "bin/kc.[sh|bat] start --hostname my.keycloak.org --proxy-headers xforwarded", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --proxy-headers xforwarded", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443", "bin/kc.[sh|bat] start --hostname my.keycloak.org", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-backchannel-dynamic true", "bin/kc.[sh|bat] start --hostname https://my.keycloak.org --hostname-admin https://admin.my.keycloak.org:8443", "bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_configuration_guide/hostname-
Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE
Chapter 4. Installing a cluster with RHEL KVM on IBM Z and IBM(R) LinuxONE In OpenShift Container Platform version 4.13, you can install a cluster on IBM Z or IBM(R) LinuxONE infrastructure that you provision. Note While this document refers only to IBM Z, all information in it also applies to IBM(R) LinuxONE. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. You provisioned a RHEL Kernel Virtual Machine (KVM) system that is hosted on the logical partition (LPAR) and based on RHEL 8.6 or later. See Red Hat Enterprise Linux 8 and 9 Life Cycle . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Machine requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. One or more KVM host machines based on RHEL 8.6 or later. Each RHEL KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each RHEL KVM host machine. 4.3.1. Required machines The smallest OpenShift Container Platform clusters require the following hosts: Table 4.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. 
At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To improve high availability of your cluster, distribute the control plane machines over different RHEL instances on at least two physical machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. See Red Hat Enterprise Linux technology capabilities and limits . 4.3.2. Network connectivity requirements The OpenShift Container Platform installer creates the Ignition files, which are necessary for all the Red Hat Enterprise Linux CoreOS (RHCOS) virtual machines. The automated installation of OpenShift Container Platform is performed by the bootstrap machine. It starts the installation of OpenShift Container Platform on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 4.3.3. IBM Z network connectivity requirements To install on IBM Z under RHEL KVM, you need: A RHEL KVM host configured with an OSA or RoCE network adapter. Either a RHEL KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. See Types of virtual network connections . 4.3.4. Host machine resource requirements The RHEL KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the OpenShift Container Platform environment. See Getting started with virtualization . You can install OpenShift Container Platform version 4.13 on the following IBM hardware: IBM z16 (all models), IBM z15 (all models), IBM z14 (all models) IBM(R) LinuxONE 4 (all models), IBM(R) LinuxONE III (all models), IBM(R) LinuxONE Emperor II, IBM(R) LinuxONE Rockhopper II 4.3.5. Minimum IBM Z system environment Hardware requirements The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. At least one network connection to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Note You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every OpenShift Container Platform cluster. Important Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the OpenShift Container Platform clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. Operating system requirements One LPAR running on RHEL 8.6 or later with KVM, which is managed by libvirt On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines Two guest virtual machines for OpenShift Container Platform compute machines One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine 4.3.6. 
Minimum resource requirements Each cluster virtual machine must meet the following minimum requirements: Virtual Machine Operating System vCPU [1] Virtual RAM Storage IOPS Bootstrap RHCOS 4 16 GB 100 GB N/A Control plane RHCOS 4 16 GB 100 GB N/A Compute RHCOS 2 8 GB 100 GB N/A One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 4.3.7. Preferred IBM Z system environment Hardware requirements Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. Two network connections to both connect to the LoadBalancer service and to serve data for traffic outside the cluster. Operating system requirements For high availability, two or three LPARs running on RHEL 8.6 or later with KVM, which are managed by libvirt. On your RHEL KVM host, set up: Three guest virtual machines for OpenShift Container Platform control plane machines, distributed across the RHEL KVM host machines. At least six guest virtual machines for OpenShift Container Platform compute machines, distributed across the RHEL KVM host machines. One guest virtual machine for the temporary OpenShift Container Platform bootstrap machine. To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using cpu_shares . Do the same for infrastructure nodes, if they exist. See schedinfo in IBM Documentation. 4.3.8. Preferred resource requirements The preferred requirements for each cluster virtual machine are: Virtual Machine Operating System vCPU Virtual RAM Storage Bootstrap RHCOS 4 16 GB 120 GB Control plane RHCOS 8 16 GB 120 GB Compute RHCOS 6 8 GB 120 GB 4.3.9. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. Additional resources Recommended host practices for IBM Z & IBM(R) LinuxONE environments 4.3.10. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. 
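As an illustration of such a reservation (assuming an ISC dhcpd server; the MAC address is a placeholder, and the IP address and hostname reuse the example values from the DNS section below), a host entry might be added as follows:
# Hypothetical host reservation for one control plane node; adjust all values for your environment.
cat >> /etc/dhcp/dhcpd.conf <<'EOF'
host control-plane0 {
  hardware ethernet 52:54:00:aa:bb:cc;                  # MAC address of the node's network interface
  fixed-address 192.168.1.97;                           # persistent IP address
  option host-name "control-plane0.ocp4.example.com";   # hostname handed to the node
}
EOF
systemctl restart dhcpd   # apply the new reservation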
Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 4.3.10.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 4.3.10.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Note The RHEL KVM host must be configured to use bridged networking in libvirt or MacVTap to connect the network to the virtual machines. The virtual machines must have access to the network, which is attached to the RHEL KVM host. Virtual Networks, for example network address translation (NAT), within KVM are not a supported configuration. Table 4.2. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 4.3. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 4.4. 
Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 4.3.11. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 4.5. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. 
These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 4.3.11.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 4.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 4.2. 
Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 4.3.12. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 4.6. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. 
Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 4.7. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 4.3.12.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 4.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 4.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
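If the load balancer runs on a RHEL host with firewalld, the ports listed in the load balancer tables above could be opened as in the following sketch (firewalld and its default zone are assumptions; adapt the commands to your own firewall):
# Hypothetical firewalld commands for the API and application ingress load balancer host.
firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API server
firewall-cmd --permanent --add-port=22623/tcp   # machine config server
firewall-cmd --permanent --add-port=443/tcp     # HTTPS application traffic
firewall-cmd --permanent --add-port=80/tcp      # HTTP application traffic
firewall-cmd --reload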
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Choose to perform either a fast track installation of Red Hat Enterprise Linux CoreOS (RHCOS) or a full installation of Red Hat Enterprise Linux CoreOS (RHCOS). For the full installation, you must set up an HTTP or HTTPS server to provide Ignition files and install images to the cluster nodes. For the fast track installation an HTTP or HTTPS server is not required, however, a DHCP server is required. See sections "Fast-track installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines" and "Full installation: Creating Red Hat Enterprise Linux CoreOS (RHCOS) machines". Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. 
From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 4.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. 
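For example, a lookup against one of the control plane node records might resemble the following. The master0 hostname and the address in the example output are illustrative assumptions based on the earlier load balancer example, not values from your environment:
USD dig +noall +answer @<nameserver_ip> master0.<cluster_name>.<base_domain>
Example output
master0.ocp4.example.com. 604800 IN A 192.168.1.97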
Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 4.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following command to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
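For example, you can first list the identities that are already loaded, if an agent is running. This optional check is not part of the official procedure:
USD ssh-add -l
If no agent is reachable, the command reports an error similar to Could not open a connection to your authentication agent , in which case start one as shown next.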
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on your provisioning machine. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 4.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 4.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Z 4.9.1. Sample install-config.yaml file for IBM Z You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster.
All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not available on your OpenShift Container Platform nodes, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether on your OpenShift Container Platform nodes or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Z infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. 
This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 4.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. 
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a minimal three node cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. Note The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. 
When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 4.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.8. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.9. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. 
openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 4.10. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.11. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. 
gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 4.12. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.13. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 4.14. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . 
Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program runs on s390x only. This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. 
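For example, you can confirm the value from a shell before continuing. This quick check is only a convenience sketch and is not part of the official procedure:
USD grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml
Example output
mastersSchedulable: false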
To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 4.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Z infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) as Red Hat Enterprise Linux (RHEL) guest virtual machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. You can perform a fast-track installation of RHCOS that uses a prepackaged QEMU copy-on-write (QCOW2) disk image. Alternatively, you can perform a full installation on a new QCOW2 disk image. To add further security to your system, you can optionally install RHCOS using IBM Secure Execution before proceeding to the fast-track installation. 4.12.1. Installing RHCOS using IBM Secure Execution Before you install RHCOS using IBM Secure Execution, you must prepare the underlying infrastructure. Prerequisites IBM z15 or later, or IBM(R) LinuxONE III or later. Red Hat Enterprise Linux (RHEL) 8 or later. You have a bootstrap Ignition file. The file is not protected, enabling others to view and edit it. You have verified that the boot image has not been altered after installation. You must run all your nodes as IBM Secure Execution guests. Procedure Prepare your RHEL KVM host to support IBM Secure Execution. By default, KVM hosts do not support guests in IBM Secure Execution mode. To support guests in IBM Secure Execution mode, KVM hosts must boot in LPAR mode with the kernel parameter specification prot_virt=1 . To enable prot_virt=1 on RHEL 8, follow these steps: Navigate to /boot/loader/entries/ to modify your bootloader configuration file *.conf . Add the kernel command line parameter prot_virt=1 . Run the zipl command and reboot your system. KVM hosts that successfully start with support for IBM Secure Execution for Linux issue the following kernel message: prot_virt: Reserving <amount>MB as ultravisor base storage. To verify that the KVM host now supports IBM Secure Execution, run the following command: # cat /sys/firmware/uv/prot_virt_host Example output 1 The value of this attribute is 1 for Linux instances that detect their environment as consistent with that of a secure host. For other instances, the value is 0. Add your host keys to the KVM guest via Ignition. During the first boot, RHCOS looks for your host keys to re-encrypt itself with them. RHCOS searches for files starting with ibm-z-hostkey- in the /etc/se-hostkeys directory. All host keys, for each machine the cluster is running on, must be loaded into the directory by the administrator. After first boot, you cannot run the VM on any other machines. Note You need to prepare your Ignition file on a safe system. For example, another IBM Secure Execution guest. 
For example: { "ignition": { "version": "3.0.0" }, "storage": { "files": [ { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 }, { "path": "/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt", "contents": { "source": "data:;base64,<base64 encoded hostkey document>" }, "mode": 420 } ] } } Note You can add as many host keys as required if you want your node to be able to run on multiple IBM Z machines. To generate the Base64 encoded string, run the following command: base64 <your-hostkey>.crt Compared to guests not running IBM Secure Execution, the first boot of the machine is longer because the entire image is encrypted with a randomly generated LUKS passphrase before the Ignition phase. Add Ignition protection To protect the secrets that are stored in the Ignition config file from being read or even modified, you must encrypt the Ignition config file. Note To achieve the desired security, Ignition logging and local login are disabled by default when running IBM Secure Execution. Fetch the public GPG key for the secex-qemu.qcow2 image and encrypt the Ignition config with the key by running the following command: gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign Note Before starting the VM, replace serial=ignition with serial=ignition_crypted when mounting the Ignition file. When Ignition runs on the first boot, and the decryption is successful, you will see an output like the following example: Example output [ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup... [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor. If the decryption fails, you will see an output like the following example: Example output Starting coreos-ignition-s...reOS Ignition User Config Setup... [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key "Secure Execution (secex) 38.20230323.dev.0" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key Follow the fast-track installation procedure to install nodes using the IBM Secure Execution QCOW image. Additional resources Introducing IBM Secure Execution for Linux Linux as an IBM Secure Execution host or guest 4.12.2. Configuring NBDE with static IP in an IBM Z or IBM(R) LinuxONE environment Enabling NBDE disk encryption in an IBM Z or IBM(R) LinuxONE environment requires additional steps, which are described in detail in this section. Prerequisites You have set up the External Tang Server. See Network-bound disk encryption for instructions. You have installed the butane utility. You have reviewed the instructions for how to create machine configs with Butane. Procedure Create Butane configuration files for the control plane and compute nodes.
The following example of a Butane configuration for a control plane node creates a file named master-storage.bu for disk encryption: variant: openshift version: 4.13.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2 1 The cipher option is only required if FIPS mode is enabled. Omit the entry if FIPS is disabled. 2 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 has not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . Create a customized initramfs file to boot the machine, by running the following command: USD coreos-installer pxe customize \ /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img \ --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append \ ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none \ --dest-karg-append nameserver=<nameserver-ip> \ --dest-karg-append rd.neednet=1 -o \ /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img Note Before first boot, you must customize the initramfs for each node in the cluster, and add PXE kernel parameters. Create a parameter file that includes ignition.platform.id=metal and ignition.firstboot . Example kernel parameter file for the control plane machine: rd.neednet=1 \ console=ttysclp0 \ ignition.firstboot ignition.platform.id=metal \ coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img \ coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign \ ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 \ zfcp.allow_lun_scan=0 \ rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 \ rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000 Note Write all options in the parameter file as a single line and make sure you have no newline characters. Additional resources Creating machine configs with Butane 4.12.3. Fast-track installation by using a prepackaged QCOW2 disk image Complete the following steps to create the machines in a fast-track installation of Red Hat Enterprise Linux CoreOS (RHCOS), importing a prepackaged Red Hat Enterprise Linux CoreOS (RHCOS) QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. A DHCP server that provides IP addresses. Procedure Obtain the RHEL QEMU copy-on-write (QCOW2) disk image file from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. 
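If the installation program is available on your workstation, one way to locate the RHCOS image version that matches it is to inspect the CoreOS stream metadata that is embedded in the installer. The grep filter shown here is only an illustrative sketch:
USD ./openshift-install coreos print-stream-json | grep qcow2
The output includes the download locations of the QCOW2 artifacts for each supported architecture, including s390x.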
Download the QCOW2 disk image and Ignition files to a common directory on the RHEL KVM host. For example: /var/lib/libvirt/images Note The Ignition files are generated by the OpenShift Container Platform installer. Create a new disk image with the QCOW2 disk image backing file for each KVM guest node. USD qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size} Create the new KVM guest nodes using the Ignition file and the new disk image. USD virt-install --noautoconsole \ --connect qemu:///system \ --name {vn_name} \ --memory {memory} \ --vcpus {vcpus} \ --disk {disk} \ --import \ --network network={network},mac={mac} \ --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 1 1 If IBM Secure Execution is enabled, replace serial=ignition with serial=ignition_crypted . 4.12.4. Full installation on a new QCOW2 disk image Complete the following steps to create the machines in a full installation on a new QEMU copy-on-write (QCOW2) disk image. Prerequisites At least one LPAR running on RHEL 8.6 or later with KVM, referred to as RHEL KVM host in this procedure. The KVM/QEMU hypervisor is installed on the RHEL KVM host. A domain name server (DNS) that can perform hostname and reverse lookup for the nodes. An HTTP or HTTPS server is set up. Procedure Obtain the RHEL kernel, initramfs, and rootfs files from the Product Downloads page on the Red Hat Customer Portal or from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate RHCOS QCOW2 image described in the following procedure. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel: rhcos-<version>-live-kernel-<architecture> initramfs: rhcos-<version>-live-initramfs.<architecture>.img rootfs: rhcos-<version>-live-rootfs.<architecture>.img Move the downloaded RHEL live kernel, initramfs, and rootfs as well as the Ignition files to an HTTP or HTTPS server before you launch virt-install . Note The Ignition files are generated by the OpenShift Container Platform installer. Create the new KVM guest nodes using the RHEL kernel, initramfs, and Ignition files, the new disk image, and adjusted parm line arguments. For --location , specify the location of the kernel/initrd on the HTTP or HTTPS server. For coreos.inst.ignition_url= , specify the Ignition file for the machine role. Use bootstrap.ign , master.ign , or worker.ign . Only HTTP and HTTPS protocols are supported. For coreos.live.rootfs_url= , specify the matching rootfs artifact for the kernel and initramfs you are booting. Only HTTP and HTTPS protocols are supported. USD virt-install \ --connect qemu:///system \ --name {vn_name} \ --vcpus {vcpus} \ --memory {memory_mb} \ --disk {vn_name}.qcow2,size={image_size| default(10,true)} \ --network network={virt_network_parm} \ --boot hd \ --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} \ --extra-args "rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}" \ --noautoconsole \ --wait 4.12.5. 
Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 4.12.5.1. Networking options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking on your RHCOS nodes for ISO installations. The examples describe how to use the ip= and nameserver= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= and nameserver= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page. The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. 
If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 4.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 4.14. 
Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. 
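For example, a minimal polling loop built only from the commands shown in this procedure might look like the following sketch. It is illustrative only and is not a supported tool: it approves every pending CSR without confirming the requestor service account or the node identity, so you must add those checks before using anything like it outside a test environment.
# Sketch: approve all pending CSRs once a minute; assumes oc is already logged in with cluster-admin privileges
while true; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done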
To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 4.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
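As a quick, informal check that the control plane has initialized before you begin (this is not part of the documented procedure), you can query the ClusterVersion resource and list the nodes: USD oc get clusterversion USD oc get nodes If the API answers and the control plane nodes report Ready, you can proceed to watch the cluster Operators.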
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 4.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 4.16.1.1. Configuring registry storage for IBM Z As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Z. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
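Optionally, before you edit the registry configuration, you can confirm that the persistent storage you provisioned is visible to the cluster. This is an informal check, not a required step: USD oc get storageclass USD oc get pvc -n openshift-image-registry If no suitable storage class or claim exists, provision storage, for example with Red Hat OpenShift Data Foundation, before you continue.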
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 4.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 4.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when all of the cluster components are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 4.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service How to generate SOSREPORT within OpenShift4 nodes without SSH . 4.19. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
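As an optional final check after the installation completes (not a documented step), you can print the web console URL and log in with the kubeadmin password stored in the <installation_directory>/auth directory: USD oc whoami --show-console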
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
β”œβ”€β”€ auth β”‚ β”œβ”€β”€ kubeadmin-password β”‚ └── kubeconfig β”œβ”€β”€ bootstrap.ign β”œβ”€β”€ master.ign β”œβ”€β”€ metadata.json └── worker.ign", "prot_virt: Reserving <amount>MB as ultravisor base storage.", "cat /sys/firmware/uv/prot_virt_host", "1", "{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```", "base64 <your-hostkey>.crt", "gpg --recipient-file /path/to/ignition.gpg.pub --yes --output /path/to/config.ign.gpg --verbose --armor --encrypt /path/to/config.ign", "[ 2.801433] systemd[1]: Starting coreos-ignition-setup-user.service - CoreOS Ignition User Config Setup [ 2.803959] coreos-secex-ignition-decrypt[731]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.808874] coreos-secex-ignition-decrypt[740]: gpg: encrypted with rsa4096 key, ID <key_name>, created <yyyy-mm-dd> [ OK ] Finished coreos-secex-igni...S Secex Ignition Config Decryptor.", "Starting coreos-ignition-s...reOS Ignition User Config Setup [ 2.863675] coreos-secex-ignition-decrypt[729]: gpg: key <key_name>: public key \"Secure Execution (secex) 38.20230323.dev.0\" imported [ 2.869178] coreos-secex-ignition-decrypt[738]: gpg: encrypted with RSA key, ID <key_name> [ 2.870347] coreos-secex-ignition-decrypt[738]: gpg: public key decryption failed: No secret key [ 2.870371] coreos-secex-ignition-decrypt[738]: gpg: decryption failed: No secret key", "variant: openshift version: 4.13.0 metadata: name: master-storage labels: machineconfiguration.openshift.io/role: master storage: luks: - clevis: tang: - thumbprint: QcPr_NHFJammnRCA3fFMVdNBwjs url: http://clevis.example.com:7500 options: 1 - --cipher - aes-cbc-essiv:sha256 device: /dev/disk/by-partlabel/root label: luks-root name: root wipe_volume: true filesystems: - device: /dev/mapper/root format: xfs label: root wipe_filesystem: true openshift: fips: true 2", "coreos-installer pxe customize /root/rhcos-bootfiles/rhcos-<release>-live-initramfs.s390x.img --dest-device /dev/disk/by-id/scsi-<serial-number> --dest-karg-append ip=<ip-address>::<gateway-ip>:<subnet-mask>::<network-device>:none --dest-karg-append nameserver=<nameserver-ip> --dest-karg-append rd.neednet=1 -o /root/rhcos-bootfiles/<Node-name>-initramfs.s390x.img", "rd.neednet=1 console=ttysclp0 ignition.firstboot ignition.platform.id=metal coreos.live.rootfs_url=http://10.19.17.25/redhat/ocp/rhcos-413.86.202302201445-0/rhcos-413.86.202302201445-0-live-rootfs.s390x.img coreos.inst.ignition_url=http://bastion.ocp-cluster1.example.com:8080/ignition/master.ign ip=10.19.17.2::10.19.17.1:255.255.255.0::enbdd0:none nameserver=10.19.17.1 zfcp.allow_lun_scan=0 rd.znet=qeth,0.0.bdd0,0.0.bdd1,0.0.bdd2,layer2=1 rd.zfcp=0.0.5677,0x600606680g7f0056,0x034F000000000000", "qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}", "virt-install --noautoconsole --connect qemu:///system --name {vn_name} --memory {memory} --vcpus {vcpus} --disk {disk} --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional 1", "virt-install --connect qemu:///system --name {vn_name} --vcpus {vcpus} --memory 
{memory_mb} --disk {vn_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst.install_dev=/dev/vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m 
machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_z_and_ibm_linuxone/installing-ibm-z-kvm