Dataset fields: title (string), content (string), commands (list of strings, or null), url (string).
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_google_cloud/providing-feedback-on-red-hat-documentation_gcp
Chapter 2. Requirements
Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Red Hat certified hardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core x86_64 CPU. A quad core x86_64 CPU or multiple dual core x86_64 CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You can install the QXLDOD graphical driver in the guest operating system to improve the functionality of SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels. Client Operating System SPICE Support Supported QXLDOD drivers are available on Red Hat Enterprise Linux 7.2 and later, and Windows 10. Note SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified nor tested. 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 8.6. 
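Before moving on, it can help to sanity-check the machine against the minimums in Table 2.1. The following shell sketch is only illustrative (it is not an official validation step), and it assumes the Manager's data lives under /var:

nproc                                          # CPU cores: at least 2, ideally 4 or more
free -g | awk '/^Mem:/ {print $2 " GiB RAM"}'  # at least 4 GB, 16 GB recommended
df -h /var                                     # 25 GB minimum, 50 GB recommended of locally writable space
cat /etc/redhat-release                        # must report Red Hat Enterprise Linux 8.6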
Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Find a certified solution . For more information on the requirements and limitations that apply to guests see Red Hat Enterprise Linux Technology Capabilities and Limits and Supported Limits for Red Hat Virtualization . 2.2.1. CPU Requirements All CPUs must have support for the Intel(R) 64 or AMD64 CPU extensions, and the AMD-VTM or Intel VT(R) hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere SandyBridge IvyBridge Haswell Broadwell Skylake Client Skylake Server Cascadelake Server IBM POWER8 POWER9 For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example: Intel Cascadelake Server Family Secure Intel Cascadelake Server Family The Secure CPU type contains the latest updates. For details, see BZ# 1731395 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. Procedure At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. For cluster levels 4.2 to 4.5, the maximum supported RAM per VM in Red Hat Virtualization Host is 6 TB. For cluster levels 4.6 to 4.7, the maximum supported RAM per VM in Red Hat Virtualization Host is 16 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. 
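For convenience, the processor-flag check described in the procedure above looks like this:

grep -E 'svm|vmx' /proc/cpuinfo | grep nx   # vmx = Intel VT, svm = AMD-V; nx = No eXecute flag
# Any output means the required extensions are present and enabled; no output can also
# mean they are disabled in the system BIOS/UEFI, so check the firmware settings.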
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity restores. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 5 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB /var/tmp - 10 GB swap - 1 GB. See What is the recommended swap size for Red Hat platforms? for details. Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 64 GiB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 10 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 8 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. Check vendor specification and datasheets to confirm that your hardware meets these requirements. 
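When planning device assignment, a quick host-side check can confirm that the IOMMU is actually enabled and show how devices are grouped. This is only a sketch, and the PCI address used with lspci is a placeholder:

dmesg | grep -i -e DMAR -e IOMMU               # confirm the kernel detected an enabled VT-d/AMD-Vi IOMMU
find /sys/kernel/iommu_groups/ -type l | sort  # devices sharing an IOMMU group can only be assigned together
lspci -v -s 01:00.0                            # inspect a candidate device (placeholder address)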
The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Administration Portal Host Devices tab of the virtual machine. vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking requirements 2.3.1. General requirements Red Hat Virtualization requires IPv6 to remain enabled on the physical or virtual machine running the Manager. Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Network range for self-hosted engine deployment The self-hosted engine deployment process temporarily uses a /24 network address under 192.168 . It defaults to 192.168.222.0/24 , and if this address is in use, it tries other /24 addresses under 192.168 until it finds one that is not in use. If it does not find an unused network address in this range, deployment fails. When installing the self-hosted engine using the command line, you can set the deployment script to use an alternate /24 network range with the option --ansible-extra-vars=he_ipv4_subnet_prefix= PREFIX , where PREFIX is the prefix for the default range. For example: # hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=192.168.222 Note You can only set another range by installing Red Hat Virtualization as a self-hosted engine using the command line. 2.3.3. Firewall Requirements for DNS, NTP, and IPMI Fencing The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... 
) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. 2.3.4. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.3. Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager ( ovirt-imageio service) Required for communication with the ovirt-imageo service. Yes M8 6642 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . 
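engine-setup normally opens these ports for you. If you maintain the firewalld configuration manually, the core Manager ports from Table 2.3 could be opened with something like the following sketch (review it against your own deployment before using it):

firewall-cmd --permanent --add-port={80/tcp,443/tcp,2222/tcp,6100/tcp,54323/tcp}   # bash brace expansion: one --add-port per port
firewall-cmd --permanent --add-port=6642/tcp   # OVN southbound database, if you use OVN
firewall-cmd --reload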
Because they both run on the same host, their communication is not visible to the network. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration. To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see RHV: How to customize the Host's firewall rules? . Note A diagram of these firewall requirements is available at Red Hat Virtualization: Firewall Requirements Diagram . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. 
H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. Yes H11 54322 TCP Red Hat Virtualization Manager ovirt-imageio service Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ovirt-imageo service. Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. No H14 123 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NTP Server NTP requests from ports above 1023 to port 123, and responses. This port is required and open by default. H15 4500 TCP, UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H16 500 UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H17 - AH, ESP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.6. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled . 2.3.7. Maximum Transmission Unit Requirements The recommended Maximum Transmission Units (MTU) setting for Hosts during deployment is 1500. It is possible to update this setting after the environment is set up to a different MTU. For more information on changing the MTU setting, see How to change the Hosted Engine VM network MTU .
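To confirm that a non-default MTU actually passes between two hosts, a common check is a do-not-fragment ping sized to the MTU minus the 28-byte IP/ICMP header overhead (the host name below is a placeholder):

ping -M do -s 8972 -c 3 otherhost.example.com   # for an MTU of 9000
ping -M do -s 1472 -c 3 otherhost.example.com   # for the default MTU of 1500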
[ "grep -E 'svm|vmx' /proc/cpuinfo | grep nx", "hosted-engine --deploy --ansible-extra-vars=he_ipv4_subnet_prefix=192.168.222" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/planning_and_prerequisites_guide/rhv_requirements
13.2.10. SSSD and Identity Providers (Domains)
13.2.10. SSSD and Identity Providers (Domains) SSSD recognizes domains , which are entries within the SSSD configuration file associated with different, external data sources. Domains are a combination of an identity provider (for user information) and, optionally, other providers such as authentication (for authentication requests) and for other operations, such as password changes. (The identity provider can also be used for all operations, if all operations are performed within a single domain or server.) SSSD works with different LDAP identity providers (including OpenLDAP, Red Hat Directory Server, and Microsoft Active Directory) and can use native LDAP authentication, Kerberos authentication, or provider-specific authentication protocols (such as Active Directory). A domain configuration defines the identity provider , the authentication provider , and any specific configuration to access the information in those providers. There are several types of identity and authentication providers: LDAP, for general LDAP servers Active Directory (an extension of the LDAP provider type) Identity Management (an extension of the LDAP provider type) Local, for the local SSSD database Proxy Kerberos (authentication provider only) The identity and authentication providers can be configured in different combinations in the domain entry. The possible combinations are listed in Table 13.6, "Identity Store and Authentication Type Combinations" . Table 13.6. Identity Store and Authentication Type Combinations Identification Provider Authentication Provider Identity Management (LDAP) Identity Management (LDAP) Active Directory (LDAP) Active Directory (LDAP) Active Directory (LDAP) Kerberos LDAP LDAP LDAP Kerberos proxy LDAP proxy Kerberos proxy proxy Along with the domain entry itself, the domain name must be added to the list of domains that SSSD will query. For example: global attributes are available to any type of domain, such as cache and time out settings. Each identity and authentication provider has its own set of required and optional configuration parameters. Table 13.7. General [domain] Configuration Parameters Parameter Value Format Description id_provider string Specifies the data back end to use for this domain. The supported identity back ends are: ldap ipa (Identity Management in Red Hat Enterprise Linux) ad (Microsoft Active Directory) proxy, for a legacy NSS provider, such as nss_nis . Using a proxy ID provider also requires specifying the legacy NSS library to load to start successfully, set in the proxy_lib_name option. local, the SSSD internal local provider auth_provider string Sets the authentication provider used for the domain. The default value for this option is the value of id_provider . The supported authentication providers are ldap, ipa, ad, krb5 (Kerberos), proxy, and none. min_id,max_id integer Optional. Specifies the UID and GID range for the domain. If a domain contains entries that are outside that range, they are ignored. The default value for min_id is 1 ; the default value for max_id is 0 , which is unlimited. Important The default min_id value is the same for all types of identity provider. If LDAP directories are using UID numbers that start at one, it could cause conflicts with users in the local /etc/passwd file. To avoid these conflicts, set min_id to 1000 or higher as possible. cache_credentials Boolean Optional. Specifies whether to store user credentials in the local SSSD domain database cache. The default value for this parameter is false . 
Set this value to true for domains other than the LOCAL domain to enable offline authentication. entry_cache_timeout integer Optional. Specifies how long, in seconds, SSSD should cache positive cache hits. A positive cache hit is a successful query. use_fully_qualified_names Boolean Optional. Specifies whether requests to this domain require fully qualified domain names. If set to true , all requests to this domain must use fully qualified domain names. It also means that the output from the request displays the fully-qualified name. Restricting requests to fully qualified user names allows SSSD to differentiate between domains with users with conflicting user names. If use_fully_qualified_names is set to false , it is possible to use the fully-qualified name in the requests, but only the simplified version is displayed in the output. SSSD can only parse names based on the domain name, not the realm name. The same name can be used for both domains and realms, however.
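As a concrete illustration of an LDAP identity provider paired with Kerberos authentication (one of the combinations in Table 13.6), a minimal /etc/sssd/sssd.conf might look like the sketch below. The server names, search base, and realm are placeholders; substitute your own values.

cat > /etc/sssd/sssd.conf << 'EOF'
[sssd]
services = nss, pam
domains = example.com

[domain/example.com]
id_provider = ldap
auth_provider = krb5
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
krb5_server = kdc.example.com
krb5_realm = EXAMPLE.COM
min_id = 1000
cache_credentials = true
EOF
chmod 0600 /etc/sssd/sssd.conf   # SSSD requires the file to be owned by root with 0600 permissions
service sssd restart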
[ "[sssd] domains = LOCAL, Name [domain/ Name ] id_provider = type auth_provider = type provider_specific = value global = value" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/Configuring_Domains
Chapter 8. jaas
Chapter 8. jaas 8.1. jaas:cancel 8.1.1. Description Cancel the modification of a JAAS realm 8.1.2. Syntax jaas:cancel [options] 8.1.3. Options Name Description --help Display this help message 8.2. jaas:group-add 8.2.1. Description Make a user part of a group 8.2.2. Syntax jaas:group-add [options] username group 8.2.3. Arguments Name Description username Username group Group 8.2.4. Options Name Description --help Display this help message 8.3. jaas:group-create 8.3.1. Description create a group in a realm 8.3.2. Syntax jaas:group-create [options] group 8.3.3. Arguments Name Description group Group 8.3.4. Options Name Description --help Display this help message 8.4. jaas:group-delete 8.4.1. Description Remove a user from a group 8.4.2. Syntax jaas:group-delete [options] username group 8.4.3. Arguments Name Description username Username group Group 8.4.4. Options Name Description --help Display this help message 8.5. jaas:group-list 8.5.1. Description List groups in a realm 8.5.2. Syntax jaas:group-list [options] 8.5.3. Options Name Description --help Display this help message 8.6. jaas:group-role-add 8.6.1. Description Add a role to a group 8.6.2. Syntax jaas:group-role-add [options] group role 8.6.3. Arguments Name Description group Group role Role 8.6.4. Options Name Description --help Display this help message 8.7. jaas:group-role-delete 8.7.1. Description Remove a role from a group 8.7.2. Syntax jaas:group-role-delete [options] group role 8.7.3. Arguments Name Description group Group role Role 8.7.4. Options Name Description --help Display this help message 8.8. jaas:pending-list 8.8.1. Description List the pending modification on the active JAAS Realm/Login Module 8.8.2. Syntax jaas:pending-list [options] 8.8.3. Options Name Description --help Display this help message 8.9. jaas:realm-list 8.9.1. Description List JAAS realms 8.9.2. Syntax jaas:realm-list [options] 8.9.3. Options Name Description --help Display this help message -h, --hidden Show hidden realms --no-format Disable table rendered output 8.10. jaas:realm-manage 8.10.1. Description Manage users and roles of a JAAS Realm 8.10.2. Syntax jaas:realm-manage [options] 8.10.3. Options Name Description -h, --hidden Manage hidden realms --help Display this help message --realm JAAS Realm -f, --force Force the management of this realm, even if another one was under management --index Realm Index --module JAAS Login Module Class Name 8.11. jaas:role-add 8.11.1. Description Add a role to a user 8.11.2. Syntax jaas:role-add [options] username role 8.11.3. Arguments Name Description username User Name role Role 8.11.4. Options Name Description --help Display this help message 8.12. jaas:role-delete 8.12.1. Description Delete a role from a user 8.12.2. Syntax jaas:role-delete [options] username role 8.12.3. Arguments Name Description username User Name role Role 8.12.4. Options Name Description --help Display this help message 8.13. jaas:su 8.13.1. Description Substitute user identity 8.13.2. Syntax jaas:su [options] [user] 8.13.3. Arguments Name Description user Name of the user to substitute (defaults to karaf) 8.13.4. Options Name Description --help Display this help message --realm 8.14. jaas:sudo 8.14.1. Description Execute a command as another user 8.14.2. Syntax jaas:sudo [options] [command] 8.14.3. Arguments Name Description command 8.14.4. Options Name Description --help Display this help message --realm --user 8.15. jaas:update 8.15.1. Description Apply pending modification on the edited JAAS Realm 8.15.2. 
Syntax jaas:update [options] 8.15.3. Options Name Description --help Display this help message 8.16. jaas:user-add 8.16.1. Description Add a user 8.16.2. Syntax jaas:user-add [options] username password 8.16.3. Arguments Name Description username User Name password Password 8.16.4. Options Name Description --help Display this help message 8.17. jaas:user-delete 8.17.1. Description Delete a user 8.17.2. Syntax jaas:user-delete [options] username 8.17.3. Arguments Name Description username User Name 8.17.4. Options Name Description --help Display this help message 8.18. jaas:user-list 8.18.1. Description List the users of the selected JAAS realm/login module 8.18.2. Syntax jaas:user-list [options] 8.18.3. Options Name Description --help Display this help message --no-format Disable table rendered output 8.19. jaas:whoami 8.19.1. Description List currently active principals according to JAAS. 8.19.2. Syntax jaas:whoami [options] 8.19.3. Options Name Description --help Display this help message --no-format Disable table rendered output. -g, --groups Show groups instead of user. -a, --all Show all JAAS principals regardless of type. -r, --roles Show roles instead of user.
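Tying several of these commands together, a typical realm-editing session selects a realm, queues changes, reviews them, and then applies them. The realm name karaf and the credentials below are only illustrative:

karaf@root()> jaas:realm-list
karaf@root()> jaas:realm-manage --realm karaf
karaf@root()> jaas:user-add alice secret
karaf@root()> jaas:role-add alice admin
karaf@root()> jaas:pending-list
karaf@root()> jaas:update
karaf@root()> jaas:user-list

Until jaas:update is run, the queued modifications are not written to the realm; jaas:cancel discards them instead.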
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/jaas
Chapter 7. Memory
Chapter 7. Memory This chapter outlines the memory management capabilities of Red Hat Enterprise Linux 7. Section 7.1, "Considerations" discusses memory related factors that affect performance. Section 7.2, "Monitoring and Diagnosing Performance Problems" teaches you how to use Red Hat Enterprise Linux 7 tools to diagnose performance problems related to memory utilization or configuration details. Section 7.5, "Configuring System Memory Capacity" and Section 7.3, "Configuring HugeTLB Huge Pages" discuss the tools and strategies you can use to solve memory related performance problems in Red Hat Enterprise Linux 7. 7.1. Considerations By default, Red Hat Enterprise Linux 7 is optimized for moderate workloads. If your application or workload requires a large amount of memory, changing the way your system manages virtual memory may improve the performance of your application. 7.1.1. Larger Page Size Physical memory is managed in chunks called pages. On most architectures supported by Red Hat Enterprise Linux 7, the default size of a memory page is 4 KB. This default page size has proved to be suitable for general-purpose operating systems, such as Red Hat Enterprise Linux 7, which support many different kinds of workloads. However, specific applications can benefit from using larger page sizes in certain cases. For example, an application that works with a large and relatively fixed data set of hundreds of megabytes or even dozens of gigabytes can have performance issues when using 4 KB pages. Such data sets can require hundreds of thousands of 4 KB pages, which can lead to overhead in the operating system and the CPU. Red Hat Enterprise Linux 7 enables the use of larger page sizes for applications working with big data sets. Using larger page sizes can improve the performance of such applications. Two different large page features are available in Red Hat Enterprise Linux 7: the HugeTLB feature, also called static huge pages in this guide, and the Transparent Huge Page feature. 7.1.2. Translation Lookaside Buffer Size Reading address mappings from the page table is time-consuming and resource-expensive, so CPUs are built with a cache for recently-used addresses: the Translation Lookaside Buffer (TLB). However, the default TLB can only cache a certain number of address mappings. If a requested address mapping is not in the TLB (that is, the TLB is missed ), the system still needs to read the page table to determine the physical to virtual address mapping. Because of the relationship between application memory requirements and the size of pages used to cache address mappings, applications with large memory requirements are more likely to suffer performance degradation from TLB misses than applications with minimal memory requirements. It is therefore important to avoid TLB misses wherever possible. Both HugeTLB and Transparent Huge Page features allow applications to use pages larger than 4 KB. This allows addresses stored in the TLB to reference more memory, which reduces TLB misses and improves application performance.
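To see how huge pages are currently configured on a Red Hat Enterprise Linux 7 system before tuning anything, the following read-only checks can be used (the values reported will differ per system):

grep -i huge /proc/meminfo                        # HugeTLB page size and how many pages are reserved and free
cat /sys/kernel/mm/transparent_hugepage/enabled   # Transparent Huge Page mode: always, madvise, or never
cat /proc/sys/vm/nr_hugepages                     # number of statically reserved huge pages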
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/chap-Red_Hat_Enterprise_Linux-Performance_Tuning_Guide-Memory
Chapter 2. Installation
Chapter 2. Installation This chapter describes in detail how to get access to the content set, install Red Hat Software Collections 3.8 on the system, and rebuild Red Hat Software Collections. 2.1. Getting Access to Red Hat Software Collections The Red Hat Software Collections content set is available to customers with Red Hat Enterprise Linux subscriptions listed in the Knowledgebase article How to use Red Hat Software Collections (RHSCL) or Red Hat Developer Toolset (DTS)? . For information on how to register your system with Red Hat Subscription Management (RHSM), see Using Red Hat Subscription Management . For detailed instructions on how to enable Red Hat Software Collections using RHSM, see Section 2.1.1, "Using Red Hat Subscription Management" . Since Red Hat Software Collections 2.2, the Red Hat Software Collections and Red Hat Developer Toolset content is available also in the ISO format at https://access.redhat.com/downloads , specifically for Server and Workstation . Note that packages that require the Optional repository, which are listed in Section 2.1.2, "Packages from the Optional Repository" , cannot be installed from the ISO image. Note Packages that require the Optional repository cannot be installed from the ISO image. A list of packages that require enabling of the Optional repository is provided in Section 2.1.2, "Packages from the Optional Repository" . Beta content is unavailable in the ISO format. 2.1.1. Using Red Hat Subscription Management If your system is registered with Red Hat Subscription Management, complete the following steps to attach the subscription that provides access to the repository for Red Hat Software Collections and enable the repository: Display a list of all subscriptions that are available for your system and determine the pool ID of a subscription that provides Red Hat Software Collections. To do so, type the following at a shell prompt as root : subscription-manager list --available For each available subscription, this command displays its name, unique identifier, expiration date, and other details related to it. The pool ID is listed on a line beginning with Pool Id . Attach the appropriate subscription to your system by running the following command as root : subscription-manager attach --pool= pool_id Replace pool_id with the pool ID you determined in the step. To verify the list of subscriptions your system has currently attached, type as root : subscription-manager list --consumed Display the list of available Yum list repositories to retrieve repository metadata and determine the exact name of the Red Hat Software Collections repositories. As root , type: subscription-manager repos --list Or alternatively, run yum repolist all for a brief list. The repository names depend on the specific version of Red Hat Enterprise Linux you are using and are in the following format: Replace variant with the Red Hat Enterprise Linux system variant, that is, server or workstation . Note that Red Hat Software Collections is supported neither on the Client nor on the ComputeNode variant. Enable the appropriate repository by running the following command as root : subscription-manager repos --enable repository Once the subscription is attached to the system, you can install Red Hat Software Collections as described in Section 2.2, "Installing Red Hat Software Collections" . For more information on how to register your system using Red Hat Subscription Management and associate it with subscriptions, see Using Red Hat Subscription Management . 
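Condensed, the subscription workflow looks like the following sketch, run as root. The pool ID is a placeholder for the value reported by the first command, and the repository name shown assumes a Red Hat Enterprise Linux 7 Server variant:

subscription-manager list --available
subscription-manager attach --pool=POOL_ID
subscription-manager repos --list
subscription-manager repos --enable rhel-server-rhscl-7-rpms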
Note Subscription through RHN is no longer available. 2.1.2. Packages from the Optional Repository Some of the Red Hat Software Collections packages require the Optional repository to be enabled in order to complete the full installation of these packages. For detailed instructions on how to subscribe your system to this repository, see the relevant Knowledgebase article How to access Optional and Supplementary channels, and -devel packages using Red Hat Subscription Management (RHSM)? . Packages from Software Collections for Red Hat Enterprise Linux that require the Optional repository to be enabled are listed in the tables below. Note that packages from the Optional repository are unsupported. For details, see the Knowledgebase article Support policy of the optional and supplementary channels in Red Hat Enterprise Linux . Table 2.1. Packages That Require Enabling of the Optional Repository in Red Hat Enterprise Linux 7 Package from a Software Collection Required Package from the Optional Repository devtoolset-12-build scl-utils-build devtoolset-12-dyninst-testsuite glibc-static devtoolset-12-elfutils-debuginfod bsdtar devtoolset-12-gcc-plugin-devel libmpc-devel devtoolset-12-gdb source-highlight devtoolset-11-build scl-utils-build devtoolset-11-dyninst-testsuite glibc-static devtoolset-11-elfutils-debuginfod bsdtar devtoolset-11-gcc-plugin-devel libmpc-devel devtoolset-11-gdb source-highlight devtoolset-10-build scl-utils-build devtoolset-10-dyninst-testsuite glibc-static devtoolset-10-elfutils-debuginfod bsdtar devtoolset-10-gcc-plugin-devel libmpc-devel devtoolset-10-gdb source-highlight httpd24-mod_ldap apr-util-ldap rh-git227-git-cvs cvsps rh-git227-git-svn perl-Git-SVN rh-git227-perl-Git-SVN subversion-perl rh-php73-php-devel pcre2-devel rh-php73-php-pspell aspell rh-python38-python-devel scl-utils-build 2.2. Installing Red Hat Software Collections Red Hat Software Collections is distributed as a collection of RPM packages that can be installed, updated, and uninstalled by using the standard package management tools included in Red Hat Enterprise Linux. Note that a valid subscription is required to install Red Hat Software Collections on your system. For detailed instructions on how to associate your system with an appropriate subscription and get access to Red Hat Software Collections, see Section 2.1, "Getting Access to Red Hat Software Collections" . Use of Red Hat Software Collections 3.8 requires the removal of any earlier pre-release versions, including Beta releases. If you have installed any version of Red Hat Software Collections 3.8, uninstall it from your system and install the new version as described in the Section 2.3, "Uninstalling Red Hat Software Collections" and Section 2.2.1, "Installing Individual Software Collections" sections.> The in-place upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 is not supported by Red Hat Software Collections. As a consequence, the installed Software Collections might not work correctly after the upgrade. If you want to upgrade from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7, it is strongly recommended to remove all Red Hat Software Collections packages, perform the in-place upgrade, update the Red Hat Software Collections repository, and install the Software Collections packages again. It is advisable to back up all data before upgrading. 2.2.1. 
Installing Individual Software Collections To install any of the Software Collections that are listed in Table 1.1, "Red Hat Software Collections Components" , install the corresponding meta package by typing the following at a shell prompt as root : yum install software_collection ... Replace software_collection with a space-separated list of Software Collections you want to install. For example, to install rh-php73 and rh-mariadb105 , type as root : This installs the main meta package for the selected Software Collection and a set of required packages as its dependencies. For information on how to install additional packages such as additional modules, see Section 2.2.2, "Installing Optional Packages" . 2.2.2. Installing Optional Packages Each component of Red Hat Software Collections is distributed with a number of optional packages that are not installed by default. To list all packages that are part of a certain Software Collection but are not installed on your system, type the following at a shell prompt: yum list available software_collection -\* To install any of these optional packages, type as root : yum install package_name ... Replace package_name with a space-separated list of packages that you want to install. For example, to install the rh-perl530-perl-CPAN and rh-perl530-perl-Archive-Tar , type: 2.2.3. Installing Debugging Information To install debugging information for any of the Red Hat Software Collections packages, make sure that the yum-utils package is installed and type the following command as root : debuginfo-install package_name For example, to install debugging information for the rh-ruby27-ruby package, type: Note that you need to have access to the repository with these packages. If your system is registered with Red Hat Subscription Management, enable the rhel- variant -rhscl-6-debug-rpms or rhel- variant -rhscl-7-debug-rpms repository as described in Section 2.1.1, "Using Red Hat Subscription Management" . For more information on how to get access to debuginfo packages, see How can I download or install debuginfo packages for RHEL systems? . 2.3. Uninstalling Red Hat Software Collections To uninstall any of the Software Collections components, type the following at a shell prompt as root : yum remove software_collection \* Replace software_collection with the Software Collection component you want to uninstall. Note that uninstallation of the packages provided by Red Hat Software Collections does not affect the Red Hat Enterprise Linux system versions of these tools. 2.4. Rebuilding Red Hat Software Collections <collection>-build packages are not provided by default. If you wish to rebuild a collection and do not want or cannot use the rpmbuild --define 'scl foo' command, you first need to rebuild the metapackage, which provides the <collection>-build package. Note that existing collections should not be rebuilt with different content. To add new packages into an existing collection, you need to create a new collection containing the new packages and make it dependent on packages from the original collection. The original collection has to be used without changes. For detailed information on building Software Collections, refer to the Red Hat Software Collections Packaging Guide .
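As a worked example of installing and then actually using a collection (the collection chosen here, rh-python38, is only an example):

yum install rh-python38                       # as root: installs the meta package and its dependencies
scl enable rh-python38 'python3 --version'    # run a single command with the collection enabled
scl enable rh-python38 bash                   # or start a shell in which the collection is enabled

The scl utility comes from the scl-utils package, which is normally pulled in as a dependency of the collection's meta package.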
[ "rhel- variant -rhscl-7-rpms rhel- variant -rhscl-7-debug-rpms rhel- variant -rhscl-7-source-rpms", "~]# yum install rh-php73 rh-mariadb105", "~]# yum install rh-perl530-perl-CPAN rh-perl530-perl-Archive-Tar", "~]# debuginfo-install rh-ruby27-ruby" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/chap-Installation
Chapter 1. Introduction
Chapter 1. Introduction An Ansible Playbook is a blueprint for automation tasks, which are actions executed with limited manual effort across an inventory of solutions. Playbooks tell Ansible what to do on which devices. Instead of manually applying the same action to hundreds or thousands of similar technologies across IT environments, executing a playbook automatically completes the same action for the specified type of inventory-such as a set of routers. Playbooks are regularly used to automate IT infrastructure-such as operating systems and Kubernetes platforms-networks, security systems, and code repositories like GitHub. You can use playbooks to program applications, services, server nodes, and other devices, without the manual overhead of creating everything from scratch. Playbooks, and the conditions, variables, and tasks within them, can be saved, shared, or reused indefinitely. This makes it easier for you to codify operational knowledge and ensure that the same actions are performed consistently. 1.1. How do Ansible Playbooks work? Ansible Playbooks are lists of tasks that automatically execute for your specified inventory or groups of hosts. One or more Ansible tasks can be combined to make a play, that is, an ordered grouping of tasks mapped to specific hosts. Tasks are executed in the order in which they are written. A playbook can include one or more plays. A playbook is composed of one or more plays in an ordered list. The terms playbook and play are sports analogies. Each play executes part of the overall goal of the playbook, running one or more tasks. Each task calls an Ansible module. Playbook A list of plays that define the order in which Ansible performs operations, from top to bottom, to achieve an overall goal. Play An ordered list of tasks that maps to managed nodes in an inventory. Task A reference to a single module that defines the operations that Ansible performs. Roles Roles are a way to make code in playbooks reusable by putting the functionality into "libraries" that can then be used in any playbook as needed. Module A unit of code or binary that Ansible runs on managed nodes. Ansible modules are grouped in collections with a Fully Qualified Collection Name (FQCN) for each module. Tasks are executed by modules, each of which performs a specific task in a playbook. A module contains metadata that determines when and where a task is executed, as well as which user executes it. There are thousands of Ansible modules that perform all kinds of IT tasks, such as: Cloud management User management Networking Security Configuration management Communication 1.2. How do I use Ansible Playbooks? Ansible uses the YAML syntax. YAML is a human-readable language that enables you to create playbooks without having to learn a complicated coding language. For more information on YAML, see YAML Syntax and consider installing an add-on for your text editor, see Other Tools and Programs to help you write clean YAML syntax in your playbooks. There are two ways of using Ansible Playbooks: From the command line interface (CLI) Using Red Hat Ansible Automation Platform's push-button deployments. 1.2.1. From the CLI After installing the open source Ansible project or Red Hat Ansible Automation Platform by using USD sudo dnf install ansible in the Red Hat Enterprise Linux CLI, you can use the ansible-playbook command to run Ansible Playbooks. 1.2.2. 
From within the platform The Red Hat Ansible Automation Platform user interface offers push-button Ansible Playbook deployments that can be used as part of larger jobs or job templates. These deployments come with additional safeguards that are particularly helpful to users who are newer to IT automation, or those without as much experience working in the CLI. 1.3. Starting automation with Ansible Get started with Ansible by creating an automation project, building an inventory, and creating a Hello World playbook. Prerequisites The Ansible package must be installed. Procedure Create a project folder on your filesystem. mkdir ansible_quickstart cd ansible_quickstart Using a single directory structure makes it easier to add to source control, and reuse and share automation content. 1.4. Building an inventory Inventories organize managed nodes in centralized files that provide Ansible with system information and network locations. Using an inventory file, Ansible can manage a large number of hosts with a single command. To complete the following steps, you need the IP address or fully qualified domain name (FQDN) of at least one host system. For demonstration purposes, the host could be running locally in a container or a virtual machine. You must also ensure that your public SSH key is added to the authorized_keys file on each host. Use the following procedure to build an inventory. Procedure Create a file named inventory.ini in the ansible_quickstart directory that you created. Add a new [myhosts] group to the inventory.ini file and specify the IP address or fully qualified domain name (FQDN) of each host system. [myhosts] 192.0.2.50 192.0.2.51 192.0.2.52 Verify your inventory, using: ansible-inventory -i inventory.ini --list Ping the myhosts group in your inventory, using: ansible myhosts -m ping -i inventory.ini Pass the -u option with the Ansible command if the username is different on the control node and the managed node(s). 192.0.2.50 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } 192.0.2.51 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } 192.0.2.52 | SUCCESS => { "ansible_facts": { "discovered_interpreter_python": "/usr/bin/python3" }, "changed": false, "ping": "pong" } You have successfully built an inventory. 1.4.1. Inventories in INI or YAML format You can create inventories in either INI files or in YAML. In most cases, such as the preceding example, INI files are straightforward and easy to read for a small number of managed nodes. Creating an inventory in YAML format becomes a sensible option as the number of managed nodes increases. For example, the following is an equivalent of the inventory.ini that declares unique names for managed nodes and uses the ansible_host field: myhosts: hosts: my_host_01: ansible_host: 192.0.2.50 my_host_02: ansible_host: 192.0.2.51 my_host_03: ansible_host: 192.0.2.52 1.4.2. Tips for building inventories Ensure that group names are meaningful and unique. Group names are also case sensitive. Do not use spaces, hyphens, or preceding numbers (use floor_19, not 19th_floor) in group names. Group hosts in your inventory logically according to their What, Where, and When: What: Group hosts according to the topology, for example: db, web, leaf, spine. Where: Group hosts by geographic location, for example: datacenter, region, floor, building. 
When: Group hosts by stage, for example: development, test, staging, production. 1.4.3. Use metagroups Create a metagroup that organizes multiple groups in your inventory with the following syntax: metagroupname: children: The following inventory illustrates a basic structure for a data center. This example inventory contains a network metagroup that includes all network devices and a datacenter metagroup that includes the network group and all webservers. leafs: hosts: leaf01: ansible_host: 192.0.2.100 leaf02: ansible_host: 192.0.2.110 spines: hosts: spine01: ansible_host: 192.0.2.120 spine02: ansible_host: 192.0.2.130 network: children: leafs: spines: webservers: hosts: webserver01: ansible_host: 192.0.2.140 webserver02: ansible_host: 192.0.2.150 datacenter: children: network: webservers: 1.5. Create variables Variables set values for managed nodes, such as the IP address, FQDN, operating system, and SSH user, so you do not need to pass them when running Ansible commands. Variables can apply to specific hosts. webservers: hosts: webserver01: ansible_host: 192.0.2.140 http_port: 80 webserver02: ansible_host: 192.0.2.150 http_port: 443 Variables can also apply to all hosts in a group. webservers: hosts: webserver01: ansible_host: 192.0.2.140 http_port: 80 webserver02: ansible_host: 192.0.2.150 http_port: 443 vars: ansible_user: my_server_user For more information about inventories and Ansible inventory variables, see About the Installer Inventory file and Inventory file variables . 1.6. Creating your first playbook Use the following procedure to create a playbook that pings your hosts and prints a "Hello world" message. Procedure Create a file named playbook.yaml in your ansible_quickstart directory, with the following content: - name: My first play hosts: myhosts tasks: - name: Ping my hosts ansible.builtin.ping: - name: Print message ansible.builtin.debug: msg: Hello world Run your playbook, using the following command: ansible-playbook -i inventory.ini playbook.yaml Ansible returns the following output: PLAY [My first play] **************************************************************************** TASK [Gathering Facts] ************************************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Ping my hosts] **************************************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Print message] **************************************************************************** ok: [192.0.2.50] => { "msg": "Hello world" } ok: [192.0.2.51] => { "msg": "Hello world" } ok: [192.0.2.52] => { "msg": "Hello world" } PLAY RECAP ************************************************************************************** 192.0.2.50: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.51: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.52: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 In this output you can see: The names that you give the play and each task. Always use descriptive names that make it easy to verify and troubleshoot playbooks. The Gather Facts task runs implicitly. By default Ansible gathers information about your inventory that it can use in the playbook. The status of each task. Each task has a status of ok which means it ran successfully. The play recap that summarizes results of all tasks in the playbook per host. 
In this example, there are three tasks, so ok=3 indicates that each task ran successfully.
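Before running a playbook against many hosts, it can help to validate it and scope the run. The following is a minimal sketch using standard ansible-playbook options; the inventory and playbook file names are the ones created in this quickstart, and the host address is an example only.
# Check playbook syntax without contacting any hosts
ansible-playbook --syntax-check -i inventory.ini playbook.yaml
# Preview the changes a run would make (dry run)
ansible-playbook -i inventory.ini playbook.yaml --check
# Limit the run to a single host from the inventory and increase verbosity
ansible-playbook -i inventory.ini playbook.yaml --limit 192.0.2.50 -v
The --limit option is useful when you want to test a change on one managed node before rolling it out to the whole myhosts group.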
[ "sudo dnf install ansible", "mkdir ansible_quickstart cd ansible_quickstart", "[myhosts] 192.0.2.50 192.0.2.51 192.0.2.52", "ansible-inventory -i inventory.ini --list", "ansible myhosts -m ping -i inventory.ini", "192.0.2.50 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python3\" }, \"changed\": false, \"ping\": \"pong\" } 192.0.2.51 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python3\" }, \"changed\": false, \"ping\": \"pong\" } 192.0.2.52 | SUCCESS => { \"ansible_facts\": { \"discovered_interpreter_python\": \"/usr/bin/python3\" }, \"changed\": false, \"ping\": \"pong\" }", "myhosts: hosts: my_host_01: ansible_host: 192.0.2.50 my_host_02: ansible_host: 192.0.2.51 my_host_03: ansible_host: 192.0.2.52", "metagroupname: children:", "leafs: hosts: leaf01: ansible_host: 192.0.2.100 leaf02: ansible_host: 192.0.2.110 spines: hosts: spine01: ansible_host: 192.0.2.120 spine02: ansible_host: 192.0.2.130 network: children: leafs: spines: webservers: hosts: webserver01: ansible_host: 192.0.2.140 webserver02: ansible_host: 192.0.2.150 datacenter: children: network: webservers:", "webservers: hosts: webserver01: ansible_host: 192.0.2.140 http_port: 80 webserver02: ansible_host: 192.0.2.150 http_port: 443", "webservers: hosts: webserver01: ansible_host: 192.0.2.140 http_port: 80 webserver02: ansible_host: 192.0.2.150 http_port: 443 vars: ansible_user: my_server_user", "- name: My first play hosts: myhosts tasks: - name: Ping my hosts ansible.builtin.ping: - name: Print message ansible.builtin.debug: msg: Hello world", "ansible-playbook -i inventory.ini playbook.yaml", "PLAY [My first play] **************************************************************************** TASK [Gathering Facts] ************************************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Ping my hosts] **************************************************************************** ok: [192.0.2.50] ok: [192.0.2.51] ok: [192.0.2.52] TASK [Print message] **************************************************************************** ok: [192.0.2.50] => { \"msg\": \"Hello world\" } ok: [192.0.2.51] => { \"msg\": \"Hello world\" } ok: [192.0.2.52] => { \"msg\": \"Hello world\" } PLAY RECAP ************************************************************************************** 192.0.2.50: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.51: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 192.0.2.52: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_ansible_playbooks/assembly-intro-to-playbooks
Chapter 40. OProfile
Chapter 40. OProfile OProfile is a low-overhead, system-wide performance monitoring tool. It uses the performance monitoring hardware on the processor to retrieve information about the kernel and executables on the system, such as when memory is referenced, the number of L2 cache requests, and the number of hardware interrupts received. On a Red Hat Enterprise Linux system, the oprofile RPM package must be installed to use this tool. Many processors include dedicated performance monitoring hardware. This hardware makes it possible to detect when certain events happen (such as the requested data not being in cache). The hardware normally takes the form of one or more counters that are incremented each time an event takes place. When the counter value essentially rolls over, an interrupt is generated, making it possible to control the amount of detail (and therefore, overhead) produced by performance monitoring. OProfile uses this hardware (or a timer-based substitute in cases where performance monitoring hardware is not present) to collect samples of performance-related data each time a counter generates an interrupt. These samples are periodically written out to disk; later, the data contained in these samples can then be used to generate reports on system-level and application-level performance. OProfile is a useful tool, but be aware of some limitations when using it: Use of shared libraries - Samples for code in shared libraries are not attributed to the particular application unless the --separate=library option is used. Performance monitoring samples are inexact - When a performance monitoring register triggers a sample, the interrupt handling is not as precise as a divide-by-zero exception. Due to the out-of-order execution of instructions by the processor, the sample may be recorded on a nearby instruction. opreport does not associate samples for inline functions properly - opreport uses a simple address range mechanism to determine which function an address is in. Inline function samples are not attributed to the inline function but rather to the function the inline function was inserted into. OProfile accumulates data from multiple runs - OProfile is a system-wide profiler and expects processes to start up and shut down multiple times. Thus, samples from multiple runs accumulate. Use the command opcontrol --reset to clear out the samples from previous runs. Non-CPU-limited performance problems - OProfile is oriented to finding problems with CPU-limited processes. OProfile does not identify processes that are asleep because they are waiting on locks or for some other event to occur (for example an I/O device to finish an operation). 40.1. Overview of Tools Table 40.1, "OProfile Commands" provides a brief overview of the tools provided with the oprofile package. Table 40.1. OProfile Commands Command Description op_help Displays available events for the system's processor along with a brief description of each. op_import Converts sample database files from a foreign binary format to the native format for the system. Only use this option when analyzing a sample database from a different architecture. opannotate Creates annotated source for an executable if the application was compiled with debugging symbols. Refer to Section 40.5.3, "Using opannotate " for details. opcontrol Configures what data is collected. Refer to Section 40.2, "Configuring OProfile" for details. opreport Retrieves profile data. Refer to Section 40.5.1, "Using opreport " for details. 
oprofiled Runs as a daemon to periodically write sample data to disk.
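The tools above are typically combined in a short profiling session. The following is a minimal sketch of such a session; the profiled program name is an example only, and exact option behavior can vary between OProfile versions.
opcontrol --no-vmlinux        # configure profiling without kernel symbols
opcontrol --reset             # clear samples accumulated from previous runs
opcontrol --start             # start the daemon and begin collecting samples
./my_workload                 # run the program to be profiled (example name)
opcontrol --shutdown          # stop profiling and flush samples to disk
opreport --long-filenames     # summarize samples per binary image
opannotate --source ./my_workload   # annotated source (requires debugging symbols)
If you want kernel samples attributed to kernel symbols, replace --no-vmlinux with --vmlinux pointing at an uncompressed vmlinux image for the running kernel.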
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/OProfile
Chapter 7. Red Hat Certificate System 10.0 on Red Hat Enterprise Linux 8.2
Chapter 7. Red Hat Certificate System 10.0 on Red Hat Enterprise Linux 8.2 This section describes significant changes in Red Hat Certificate System 10.0 on RHEL 8.2, such as highlighted updates and new features, important bug fixes, and current known issues users should be aware of. 7.1. Updates and new features in CS 10.0 This section documents new features and important updates in Red Hat Certificate System 10.0: Certificate System packages rebased to version 10.8.3 The pki-core , redhat-pki , redhat-pki-theme , and pki-console packages have been upgraded to upstream version 10.8.3, which provides a number of bug fixes and enhancements over the version. Updates and new features in the pki-core package: Checking the overall health of your public key infrastructure is now available as a Technology Preview The pki-healthcheck tool provides several checks that help you find and report error conditions that may impact the health of your public key infrastructure (PKI) environment. Note Note that this feature is offered as a technology preview, provides early access to upcoming product functionality, and is not yet fully supported under subscription agreements. The pki subsystem-cert-find and pki subsystem-cert-show commands now show the serial number of certificates With this enhancement, the pki subsystem-cert-find and pki subsystem-cert-show commands in Certificate System show the serial number of certificates in their output. The serial number is an important piece of information and often required by multiple other commands. As a result, identifying the serial number of a certificate is now easier. The pki user and pki group commands have been deprecated in Certificate System With this update, the new pki <subsystem>-user and pki <subsystem>-group commands replace the pki user and pki group commands in Certificate System. The replaced commands still work, but they display a message that the command is deprecated and refer to the new commands. Certificate System now supports offline renewal of system certificates With this enhancement, administrators can use the offline renewal feature to renew system certificates configured in Certificate System. When a system certificate expires, Certificate System fails to start. As a result of the enhancement, administrators no longer need workarounds to replace an expired system certificate. Certificate System can now create CSRs with SKI extension for external CA signing With this enhancement, Certificate System supports creating a certificate signing request (CSR) with the Subject Key Identifier (SKI) extension for external certificate authority (CA) signing. Certain CAs require this extension either with a particular value or derived from the CA public key. As a result, administrators can now use the pki_req_ski parameter in the configuration file passed to the pkispawn utility to create a CSR with SKI extension. 7.2. Technology Previews ACME support in RHCS available as Technology Preview Server certificate issuance via an Automated Certificate Management Environment (ACME) responder is available for Red Hat Certificate System (RHCS). The ACME responder supports the ACME v2 protocol (RFC 8555). Previously, users had to use the Certificate Authority (CA)'s proprietary certificate signing request (CSR) submission routines. The routines sometimes required certificate authority (CA) agents to manually review the requests and issue the certificates. 
The RHCS ACME responder now provides a standard mechanism for automatic server certificate issuance and life cycle management without involving CA agents. The feature allows the RHCS CA to integrate with existing certificate issuance infrastructure to target public CAs for deployment and internal CAs for development. Note that this Technology Preview only includes an ACME server support. No ACME client is shipped as part of this release. Additionally, this ACME preview does not retain issuance data or handle user registration. Be aware that future Red Hat Enterprise Linux updates can potentially break ACME installations. For more information, see the IETF definition of ACME . Note Note that this feature is offered as a technology preview, provides early access to upcoming product functionality, and is not yet fully supported under subscription agreements. 7.3. Bug fixes in CS 10.0 This part describes bugs fixed in Red Hat Certificate System 10.0 that have a significant impact on users. Bug fixes in the pki-core package: The pkidestroy utility now picks the correct instance Previously, the pkidestroy --force command executed on a half-removed instance picked the pki-tomcat instance by default, regardless of the instance name specified with the -i instance option. As a consequence, this removed the pki-tomcat instance instead of the intended instance, and the --remove-logs option did not remove the intended instance's logs. pkidestroy now applies the right instance name, removing only the intended instance's leftovers. The Nuxwdog service no longer fails to start the PKI server in HSM environments Previously, due to bugs, the keyutils package was not installed as a dependency of the pki-core package. Additionally, the Nuxwdog watchdog service failed to start the public key infrastructure (PKI) server in environments that use a hardware security module (HSM). These problems have been fixed. As a result, the required keyutils package is now installed automatically as a dependency, and Nuxwdog starts the PKI server as expected in environments with HSM. Certificate System no longer logs SetAllPropertiesRule operation warnings when the service starts Previously, Certificate System logged warnings on the SetAllPropertiesRule operation in the /var/log/messages log file when the service started. The problem has been fixed, and the mentioned warnings are no longer logged. Certificate System now supports rotating debug logs Previously, Certificate System used a custom logging framework, which did not support log rotation. As a consequence, debug logs such as /var/log/pki/ instance_name /ca/debug grew indefinitely. With this update, Certificate System uses the java.logging.util framework, which supports log rotation. As a result, you can configure log rotation in the /var/lib/pki/ instance_name /conf/logging.properties file. The Certificate System KRA client parses Key Request responses correctly Certificate System switched to a new JSON library. As a consequence, serialization for certain objects differed, and the Python key recovery authority (KRA) client failed to parse Key Request responses. The client has been modified to support responses using both the old and the new JSON library. As a result, the Python KRA client parses Key Request responses correctly. 7.4. Known issues in CS 10.0 This part describes known problems users should be aware of in Red Hat Certificate System 10.0, and, if applicable, workarounds. 
TPS requires adding anonymous bind ACI access In previous versions, the anonymous bind ACI was allowed by default, but it is now disabled in LDAP. Consequently, this prevents enrolling or formatting TPS smart cards. To work around this problem until a fix is available, you need to add the anonymous bind ACI in Directory Server manually: Known issues in the pki-core package: Using the cert-fix utility with the --agent-uid pkidbuser option breaks Certificate System Using the cert-fix utility with the --agent-uid pkidbuser option corrupts the LDAP configuration of Certificate System. As a consequence, Certificate System might become unstable and manual steps are required to recover the system.
[ "ldapmodify -D \"cn=Directory Manager\" -W -x -p 3389 -h hostname -x <<EOF dn: dc=example,dc=org changetype: modify add: aci aci: (targetattr!=\"userPassword || aci\")(version 3.0; acl \"Enable anonymous access\"; allow (read, search, compare) userdn=\"ldap:///anyone\";) EOF" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/release_notes/assembly_red-hat-certificate-system-10-0_assembly_red-hat-certificate-system-10-1
Chapter 4. Debezium connector for JDBC (Developer Preview)
Chapter 4. Debezium connector for JDBC (Developer Preview) The Debezium JDBC connector is a Kafka Connect sink connector implementation that can consume events from multiple source topics, and then write those events to a relational database by using a JDBC driver. This connector supports a wide variety of database dialects, including Db2, MySQL, Oracle, PostgreSQL, and SQL Server. Important The Debezium JDBC connector is Developer Preview software only. Developer Preview software is not supported by Red Hat in any way and is not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software is subject to change or removal at any time, and has received limited testing. For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope . 4.1. How the Debezium JDBC connector works The Debezium JDBC connector is a Kafka Connect sink connector, and therefore requires the Kafka Connect runtime. The connector periodically polls the Kafka topics that it subscribes to, consumes events from those topics, and then writes the events to the configured relational database. The connector supports idempotent write operations by using upsert semantics and basic schema evolution. The Debezium JDBC connector provides the following features: Section 4.1.1, "Description of how the Debezium JDBC connector consumes complex change events" Section 4.1.2, "Description of Debezium JDBC connector at-least-once delivery" Section 4.1.3, "Description of Debezium JDBC use of multiple tasks" Section 4.1.4, "Description of Debezium JDBC connector data and column type mappings" Section 4.1.5, "Description of how the Debezium JDBC connector handles primary keys in source events" Section 4.1.6, "Configuring the Debezium JDBC connector to delete rows when consuming DELETE or tombstone events" Section 4.1.7, "Enabling the connector to perform idempotent writes" Section 4.1.8, "Schema evolution modes for the Debezium JDBC connector" Section 4.1.9, "Specifying options to define the letter case of destination table and column names" 4.1.1. Description of how the Debezium JDBC connector consumes complex change events By default, Debezium source connectors produce complex, hierarchical change events. When Debezium connectors are used with other JDBC sink connector implementations, you might need to apply the ExtractNewRecordState single message transformation (SMT) to flatten the payload of change events, so that they can be consumed by the sink implementation. If you run the Debezium JDBC sink connector, it's not necessary to deploy the SMT, because the Debezium sink connector can consume native Debezium change events directly, without the use of a transformation. When the JDBC sink connector consumes a complex change event from a Debezium source connector, it extracts the values from the after section of the original insert or update event. When a delete event is consumed by the sink connector, no part of the event's payload is consulted. Important The Debezium JDBC sink connector is not designed to read from schema change topics. 
If your source connector is configured to capture schema changes, in the JDBC connector configuration, set the topics or topics.regex properties so that the connector does not consume from schema change topics. 4.1.2. Description of Debezium JDBC connector at-least-once delivery The Debezium JDBC sink connector guarantees that events that is consumes from Kafka topics are processed at least once. 4.1.3. Description of Debezium JDBC use of multiple tasks You can run the Debezium JDBC sink connector across multiple Kafka Connect tasks. To run the connector across multiple tasks, set the tasks.max configuration property to the number of tasks that you want the connector to use. The Kafka Connect runtime starts the specified number of tasks, and runs one instance of the connector per task. Multiple tasks can improve performance by reading and processing changes from multiple source topics in parallel. 4.1.4. Description of Debezium JDBC connector data and column type mappings To enable the Debezium JDBC sink connector to correctly map the data type from an inbound message field to an outbound message field, the connector requires information about the data type of each field that is present in the source event. The connector supports a wide range of column type mappings across different database dialects. To correctly convert the destination column type from the type metadata in an event field, the connector applies the data type mappings that are defined for the source database. You can enhance the way that the connector resolves data types for a column by setting the column.propagate.source.type or datatype.propagate.source.type options in the source connector configuration. When you enable these options, Debezium includes extra parameter metadata, which assists the JDBC sink connector in more accurately resolving the data type of destination columns. For the Debezium JDBC sink connector to process events from a Kafka topic, the Kafka topic message key, when present, must be a primitive data type or a Struct . In addition, the payload of the source message must be a Struct that has either a flattened structure with no nested struct types, or a nested struct layout that conforms to Debezium's complex, hierarchical structure. If the structure of the events in the Kafka topic do not adhere to these rules, you must implement a custom single message transformation to convert the structure of the source events into a usable format. 4.1.5. Description of how the Debezium JDBC connector handles primary keys in source events By default, the Debezium JDBC sink connector does not transform any of the fields in the source event into the primary key for the event. Unfortunately, the lack of a stable primary key can complicate event processing, depending on your business requirements, or when the sink connector uses upsert semantics. To define a consistent primary key, you can configure the connector to use one of the primary key modes described in the following table: Mode Description none No primary key fields are specified when creating the table. kafka The primary key consists of the following three columns: __connect_topic __connect_partition __connect_offset The values for these columns are sourced from the coordinates of the Kafka event. record_key The primary key is composed of the Kafka event's key. If the primary key is a primitive type, specify the name of the column to be used by setting the primary.key.fields property. 
If the primary key is a struct type, the fields in the struct are mapped as columns of the primary key. You can use the primary.key.fields property to restrict the primary key to a subset of columns. record_value The primary key is composed of the Kafka event's value. Because the value of a Kafka event is always a Struct , by default, all of the fields in the value become columns of the primary key. To use a subset of fields in the primary key, set the primary.key.fields property to specify a comma-separated list of fields in the value from which you want to derive the primary key columns. Important Some database dialects might throw an exception if you set the primary.key.mode to kafka and set schema.evolution to basic . This exception occurs when a dialect maps a STRING data type mapping to a variable length string data type such as TEXT or CLOB , and the dialect does not allow primary key columns to have unbounded lengths. To avoid this problem, apply the following settings in your environment: Do not set schema.evolution to basic . Create the database table and primary key mappings in advance. 4.1.6. Configuring the Debezium JDBC connector to delete rows when consuming DELETE or tombstone events The Debezium JDBC sink connector can delete rows in the destination database when a DELETE or tombstone event is consumed. By default, the JDBC sink connector does not enable delete mode. If you want the connector to remove rows, you must explicitly set delete.enabled=true in the connector configuration. To use this mode you must also set primary.key.mode to a value other than none . The preceding configuration is necessary, because deletes are executed based on the primary key mapping, so if a destination table has no primary key mapping, the connector is unable to delete rows. 4.1.7. Enabling the connector to perform idempotent writes The Debezium JDBC sink connector can perform idempotent writes, enabling it to replay the same records repeatedly and not change the final database state. To enable the connector to perform idempotent writes, you must explicitly set the insert.mode for the connector to upsert . An upsert operation is applied as either an update or an insert , depending on whether the specified primary key already exists. If the primary key value already exists, the operation updates values in the row. If the specified primary key value doesn't exist, an insert adds a new row. Each database dialect handles idempotent writes differently, because there is no SQL standard for upsert operations. The following table shows the upsert DML syntax for the database dialects that Debezium supports: Dialect Upsert Syntax Db2 MERGE ... MySQL INSERT ... ON DUPLICATE KEY UPDATE ... Oracle MERGE ... PostgreSQL INSERT ... ON CONFLICT ... DO UPDATE SET ... SQL Server MERGE ... 4.1.8. Schema evolution modes for the Debezium JDBC connector You can use the following schema evolution modes with the Debezium JDBC sink connector: Mode Description none The connector does not perform any DDL schema evolution. basic The connector automatically detects fields that are in the event payload but that do not exist in the destination table. The connector alters the destination table to add the new fields. When schema.evolution is set to basic , the connector automatically creates or alters the destination database table according to the structure of the incoming event. 
When an event is received from a topic for the first time, and the destination table does not yet exist, the Debezium JDBC sink connector uses the event's key, or the schema structure of the record to resolve the column structure of the table. If schema evolution is enabled, the connector prepares and executes a CREATE TABLE SQL statement before it applies the DML event to the destination table. When the Debezium JDBC connector receives an event from a topic, if the schema structure of the record differs from the schema structure of the destination table, the connector uses either the event's key or its schema structure to identify which columns are new, and must be added to the database table. If schema evolution is enabled, the connector prepares and executes an ALTER TABLE SQL statement before it applies the DML event to the destination table. Because changing column data types, dropping columns, and adjusting primary keys can be considered dangerous operations, the connector is prohibited from performing these operations. The schema of each field determines whether a column is NULL or NOT NULL . The schema also defines the default values for each column. If the connector attempts to create a table with a nullability setting or a default value that don't want, you must either create the table manually, ahead of time, or adjust the schema of the associated field before the sink connector processes the event. To adjust nullability settings or default values, you can introduce a custom single message transformation that applies changes in the pipeline, or modifies the column state defined in the source database. A field's data type is resolved based on a predefined set of mappings. For more information, see Section 4.2, "How the Debezium JDBC connector maps data types" . Important When you introduce new fields to the event structure of tables that already exist in the destination database, you must define the new fields as optional, or the fields must have a default value specified in the database schema. If you want a field to be removed from the destination table, use one of the following options: Remove the field manually. Drop the column. Assign a default value to the field. Define the field a nullable. 4.1.9. Specifying options to define the letter case of destination table and column names The Debezium JDBC sink connector consumes Kafka messages by constructing either DDL (schema changes) or DML (data changes) SQL statements that are executed on the destination database. By default, the connector uses the names of the source topic and the event fields as the basis for the table and column names in the destination table. The constructed SQL does not automatically delimit identifiers with quotes to preserve the case of the original strings. As a result, by default, the text case of table or column names in the destination database depends entirely on how the database handles name strings when the case is not specified. For example, if the destination database dialect is Oracle and the event's topic is orders , the destination table will be created as ORDERS because Oracle defaults to upper-case names when the name is not quoted. Similarly, if the destination database dialect is PostgreSQL and the event's topic is ORDERS , the destination table will be created as orders because PostgreSQL defaults to lower-case names when the name is not quoted. 
To explicitly preserve the case of the table and field names that are present in a Kafka event, in the connector configuration, set the value of the quote.identifiers property to true . When this option is set, when an incoming event is for a topic called orders , and the destination database dialect is Oracle, the connector creates a table with the name orders , because the constructed SQL defines the name of the table as "orders" . Enabling quoting results in the same behavior when the connector creates column names. 4.2. How the Debezium JDBC connector maps data types The Debezium JDBC sink connector resolves a column's data type by using a logical or primitive type-mapping system. Primitive types include values such as integers, floating points, Booleans, strings, and bytes. Typically, these types are represented with a specific Kafka Connect Schema type code only. Logical data types are more often complex types, including values such as Struct -based types that have a fixed set of field names and schema, or values that are represented with a specific encoding, such as number of days since epoch. The following examples show representative structures of primitive and logical data types: Primitive field schema { "schema": { "type": "INT64" } } Logical field schema { "schema": { "type": "INT64", "name": "org.apache.kafka.connect.data.Date" } } Kafka Connect is not the only source for these complex, logical types. In fact, Debezium source connectors generate change events that have fields with similar logical types to represent a variety of different data types, including but not limited to, timestamps, dates, and even JSON data. The Debezium JDBC sink connector uses these primitive and logical types to resolve a column's type to a JDBC SQL code, which represents a column's type. These JDBC SQL codes are then used by the underlying Hibernate persistence framework to resolve the column's type to a logical data type for the dialect in use. The following tables illustrate the primitive and logical mappings between Kafka Connect and JDBC SQL types, and between Debezium and JDBC SQL types. The actual final column type varies for each database type. Table 4.1, "Mappings between Kafka Connect Primitives and Column Data Types" Table 4.2, "Mappings between Kafka Connect Logical Types and Column Data Types" Table 4.3, "Mappings between Debezium Logical Types and Column Data Types" Table 4.4, "Mappings between Debezium dialect-specific Logical Types and Column Data Types" Table 4.1. Mappings between Kafka Connect Primitives and Column Data Types Primitive Type JDBC SQL Type INT8 Types.TINYINT INT16 Types.SMALLINT INT32 Types.INTEGER INT64 Types.BIGINT FLOAT32 Types.FLOAT FLOAT64 Types.DOUBLE BOOLEAN Types.BOOLEAN STRING Types.CHAR, Types.NCHAR, Types.VARCHAR, Types.NVARCHAR BYTES Types.VARBINARY Table 4.2. Mappings between Kafka Connect Logical Types and Column Data Types Logical Type JDBC SQL Type org.apache.kafka.connect.data.Decimal Types.DECIMAL org.apache.kafka.connect.data.Date Types.DATE org.apache.kafka.connect.data.Time Types.TIMESTAMP org.apache.kafka.connect.data.Timestamp Types.TIMESTAMP Table 4.3. 
Mappings between Debezium Logical Types and Column Data Types Logical Type JDBC SQL Type io.debezium.time.Date Types.DATE io.debezium.time.Time Types.TIMESTAMP io.debezium.time.MicroTime Types.TIMESTAMP io.debezium.time.NanoTime Types.TIMESTAMP io.debezium.time.ZonedTime Types.TIME_WITH_TIMEZONE io.debezium.time.Timestamp Types.TIMESTAMP io.debezium.time.MicroTimestamp Types.TIMESTAMP io.debezium.time.NanoTimestamp Types.TIMESTAMP io.debezium.time.ZonedTimestamp Types.TIMESTAMP_WITH_TIMEZONE io.debezium.data.VariableScaleDecimal Types.DOUBLE Important If the database does not support time or timestamps with time zones, the mapping resolves to its equivalent without timezones. Table 4.4. Mappings between Debezium dialect-specific Logical Types and Column Data Types Logical Type MySQL SQL Type PostgreSQL SQL Type SQL Server SQL Type io.debezium.data.Bits bit(n) bit(n) or bit varying varbinary(n) io.debezium.data.Enum enum Types.VARCHAR n/a io.debezium.data.Json json json n/a io.debezium.data.EnumSet set n/a n/a io.debezium.time.Year year(n) n/a n/a io.debezium.time.MicroDuration n/a interval n/a io.debezium.data.Ltree n/a ltree n/a io.debezium.data.Uuid n/a uuid n/a io.debezium.data.Xml n/a xml xml In addition to the primitive and logical mappings above, if the source of the change events is a Debezium source connector, the resolution of the column type, along with its length, precision, and scale, can be further influenced by enabling column or data type propagation. To enforce propagation, one of the following properties must be set in the source connector configuration: column.propagate.source.type datatype.propagate.source.type The Debezium JDBC sink connector applies the values with the higher precedence. For example, let's say the following field schema is included in a change event: Debezium change event field schema with column or data type propagation enabled { "schema": { "type": "INT8", "parameters": { "__debezium.source.column.type": "TINYINT", "__debezium.source.column.length": "1" } } } In the preceding example, if no schema parameters are set, the Debezium JDBC sink connector maps this field to a column type of Types.SMALLINT . Types.SMALLINT can have different logical database types, depending on the database dialect. For MySQL, the column type in the example converts to a TINYINT column type with no specified length. If column or data type propagation is enabled for the source connector, the Debezium JDBC sink connector uses the mapping information to refine the data type mapping process and create a column with the type TINYINT(1) . Note Typically, the effect of using column or data type propagation is much greater when the same type of database is used for both the source and sink database. 4.3. Deployment of Debezium JDBC connectors To deploy a Debezium JDBC connector, you install the Debezium JDBC connector archive, configure the connector, and start the connector by adding its configuration to Kafka Connect. Prerequisites Apache ZooKeeper , Apache Kafka , and Kafka Connect are installed. A destination database is installed and configured to accept JDBC connections. Procedure Download the Debezium JDBC connector plug-in archive . Extract the files into your Kafka Connect environment. Optionally download the JDBC driver from Maven Central and extract the downloaded driver file to the directory that contains the JDBC sink connector JAR file. Note Drivers for Oracle and Db2 are not included with the JDBC sink connector. 
You must download the drivers and install them manually. Add the driver JAR files to the path where the JDBC sink connector has been installed. Make sure that the path where you install the JDBC sink connector is part of the Kafka Connect plugin.path . Restart the Kafka Connect process to pick up the new JAR files. 4.3.1. Debezium JDBC connector configuration Typically, you register a Debezium JDBC connector by submitting a JSON request that specifies the configuration properties for the connector. The following example shows a JSON request for registering an instance of the Debezium JDBC sink connector that consumes events from a topic called orders with the most common configuration settings: Example: Debezium JDBC connector configuration { "name": "jdbc-connector", 1 "config": { "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector", 2 "tasks.max": "1", 3 "connection.url": "jdbc:postgresql://localhost/db", 4 "connection.username": "pguser", 5 "connection.password": "pgpassword", 6 "insert.mode": "upsert", 7 "delete.enabled": "true", 8 "primary.key.mode": "record_key", 9 "schema.evolution": "basic", 10 "database.time_zone": "UTC" 11 } } 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 The name that is assigned to the connector when you register it with Kafka Connect service. 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 The name of the JDBC sink connector class. 3 3 3 3 3 3 3 3 3 3 3 3 The maximum number of tasks to create for this connector. 4 4 4 4 4 4 4 4 4 4 4 4 The JDBC URL that the connector uses to connect to the sink database that it writes to. 5 5 5 5 5 5 5 5 5 5 5 The name of the database user used for authentication. 6 6 6 6 6 6 6 The password of the database user used for authentication. 7 7 7 7 7 7 The insert.mode that the connector uses. 8 8 8 8 8 8 Enables the deletion of records in the database. For more information, see the delete.enabled configuration property. 9 9 9 9 9 9 Specifies the method used to resolve primary key columns. For more information, see the primary.key.mode configuration property. 10 10 10 10 10 10 Enables the connector to evolve the destination database's schema. For more information, see the schema.evolution configuration property. 11 11 11 11 Specifies the timezone used when writing temporal field types. For a complete list of configuration properties that you can set for the Debezium JDBC connector, see JDBC connector properties . You can send this configuration with a POST command to a running Kafka Connect service. The service records the configuration and starts a sink connector task(s) that performs the following operations: Connects to the database. Consumes events from subscribed Kafka topics. Writes the events to the configured database. 4.4. Descriptions of Debezium JDBC connector configuration properties The Debezium JDBC sink connector has several configuration properties that you can use to achieve the connector behavior that meets your needs. Many properties have default values. Information about the properties is organized as follows: JDBC connector generic properties JDBC connector connection properties JDBC connector runtime properties JDBC connector extendable properties Table 4.5. Generic properties Property Default Description name No default Unique name for the connector. A failure results if you attempt to reuse this name when registering a connector. This property is required by all Kafka Connect connectors. connector.class No default The name of the Java class for the connector. 
For the Debezium JDBC connector, specify the value io.debezium.connector.jdbc.JdbcSinkConnector . tasks.max 1 Maximum number of tasks to use for this connector. topics No default List of topics to consume, separated by commas. Do not use this property in combination with the topics.regex property. topics.regex No default A regular expression that specifies the topics to consume. Internally, the regular expression is compiled to a java.util.regex.Pattern . Do not use this property in combination with the topics property. Table 4.6. JDBC connector connection properties Property Default Description connection.url No default The JDBC connection URL used to connect to the database. connection.username No default The name of the database user account that the connector uses to connect to the database. connection.password No default The password that the connector uses to connect to the database. connection.pool.min_size 5 Specifies the minimum number of connections in the pool. connection.pool.min_size 32 Specifies the maximum number of concurrent connections that the pool maintains. connection.pool.acquire_increment 32 Specifies the number of connections that the connector attempts to acquire if the connection pool exceeds its maximum size. connection.pool.timeout 1800 Specifies the number of seconds that an unused connection is kept before it is discarded. Table 4.7. JDBC connector runtime properties Property Default Description database.time_zone UTC Specifies the timezone used when inserting JDBC temporal values. delete.enabled false Specifies whether the connector processes DELETE or tombstone events and removes the corresponding row from the database. Use of this option requires that you set the primary.key.mode to record.key . insert.mode insert Specifies the strategy used to insert events into the database. The following options are available: insert Specifies that all events should construct INSERT -based SQL statements. Use this option only when no primary key is used, or when you can be certain that no updates can occur to rows with existing primary key values. update Specifies that all events should construct UPDATE -based SQL statements. Use this option only when you can be certain that the connector receives only events that apply to existing rows. upsert Specifies that the connector adds events to the table using upsert semantics. That is, if the primary key does not exist, the connector performs an INSERT operation, and if the key does exist, the connector performs an UPDATE operation. When idempotent writes are required, the connector should be configured to use this option. primary.key.mode none Specifies how the connector resolves the primary key columns from the event. none Specifies that no primary key columns are created. kafka Specifies that the connector uses Kafka coordinates as the primary key columns. The key coordinates are defined from the topic name, partition, and offset of the event, and are mapped to columns with the following names: __connect_topic __connect_partition __connect_offset record_key Specifies that the primary key columns are sourced from the event's record key. If the record key is a primitive type, the primary.key.fields property is required to specify the name of the primary key column. If the record key is a struct type, the primary.key.fields property is optional, and can be used to specify a subset of columns from the event's key as the table's primary key. record_value Specifies that the primary key columns is sourced from the event's value. 
You can set the primary.key.fields property to define the primary key as a subset of fields from the event's value; otherwise all fields are used by default. primary.key.fields No default Either the name of the primary key column or a comma-separated list of fields to derive the primary key from. When primary.key.mode is set to record_key and the event's key is a primitive type, it is expected that this property specifies the column name to be used for the key. When the primary.key.mode is set to record_key with a non-primitive key, or record_value , it is expected that this property specifies a comma-separated list of field names from either the key or value. If the primary.key.mode is set to record_key with a non-primitive key, or record_value , and this property is not specifies, the connector derives the primary key from all fields of either the record key or record value, depending on the specified mode. quote.identifiers false Specifies whether generated SQL statements use quotation marks to delimit table and column names. See the Section 4.1.9, "Specifying options to define the letter case of destination table and column names" section for more details. schema.evolution none Specifies how the connector evolves the destination table schemas. For more information, see Section 4.1.8, "Schema evolution modes for the Debezium JDBC connector" . The following options are available: none Specifies that the connector does not evolve the destination schema. basic Specifies that basic evolution occurs. The connector adds missing columns to the table by comparing the incoming event's record schema to the database table structure. table.name.format USD{topic} Specifies a string that determines how the destination table name is formatted, based on the topic name of the event. The placeholder, USD{topic} , is replaced by the topic name. Table 4.8. JDBC connector extendable properties Property Default Description column.naming.strategy i.d.c.j.n.DefaultColumnNamingStrategy Specifies the fully-qualified class name of a ColumnNamingStrategy implementation that the connector uses to resolve column names from event field names. By default, the connector uses the field name as the column name. table.naming.strategy i.d.c.j.n.DefaultTableNamingStrategy Specifies the fully-qualified class name of a TableNamingStrategy implementation that the connector uses to resolve table names from incoming event topic names. The default behavior is to: Replace the USD{topic} placeholder in the table.name.format configuration property with the event's topic. Sanitize the table name by replacing dots ( . ) with underscores ( _ ). 4.5. JDBC connector frequently asked questions Is the ExtractNewRecordState single message transformation required? No, that is actually one of the differentiating factors of the Debezium JDBC connector from other implementations. While the connector is capable of ingesting flattened events like its competitors, it can also ingest Debezium's complex change event structure natively, without requiring any specific type of transformation. If a column's type is changed, or if a column is renamed or dropped, is this handled by schema evolution? No, the Debezium JDBC connector does not make any changes to existing columns. The schema evolution supported by the connector is quite basic. It simply compares the fields in the event structure to the table's column list, and then adds any fields that are not yet defined as columns in the table. 
If a column's type or default value change, the connector does not adjust them in the destination database. If a column is renamed, the old column is left as-is, and the connector appends a column with the new name to the table; however existing rows with data in the old column remain unchanged. These types of schema changes should be handled manually. If a column's type does not resolve to the type that I want, how can I enforce mapping to a different data type? The Debezium JDBC connector uses a sophisticated type system to resolve a column's data type. For details about how this type system resolves a specific field's schema definition to a JDBC type, see the Section 4.1.4, "Description of Debezium JDBC connector data and column type mappings" section. If you want to apply a different data type mapping, define the table manually to explicitly obtain the preferred column type. How do you specify a prefix or a suffix to the table name without changing the Kafka topic name? In order to add a prefix or a suffix to the destination table name, adjust the table.name.format connector configuration property to apply the prefix or suffix that you want. For example, to prefix all table names with jdbc_ , specify the table.name.format configuration property with a value of jdbc_USD{topic} . If the connector is subscribed to a topic called orders , the resulting table is created as jdbc_orders . Why are some columns automatically quoted, even though identifier quoting is not enabled? In some situations, specific column or table names might be explicitly quoted, even when quote.identifiers is not enabled. This is often necessary when the column or table name starts with or uses a specific convention that would otherwise be considered illegal syntax. For example, when the primary.key.mode is set to kafka , some databases only permit column names to begin with an underscore if the column's name is quoted. Quoting behavior is dialect-specific, and varies among different types of database.
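As described in the configuration section, you register the connector by sending its JSON configuration in a POST request to the Kafka Connect REST API. The following is a minimal sketch; the file name jdbc-sink.json is an assumption that holds the example configuration shown earlier, and localhost:8083 is the default Kafka Connect REST listener, which may differ in your environment.
curl -i -X POST \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  --data @jdbc-sink.json \
  http://localhost:8083/connectors
# Verify that the connector and its tasks are running
# (jdbc-connector is the name used in the earlier example configuration)
curl -s http://localhost:8083/connectors/jdbc-connector/status
If the status response reports a FAILED task, the trace field in the same response usually identifies the cause, such as an unreachable database or a missing JDBC driver.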
[ "{ \"schema\": { \"type\": \"INT64\" } }", "[ \"schema\": { \"type\": \"INT64\", \"name\": \"org.apache.kafka.connect.data.Date\" } ]", "{ \"schema\": { \"type\": \"INT8\", \"parameters\": { \"__debezium.source.column.type\": \"TINYINT\", \"__debezium.source.column.length\": \"1\" } } }", "{ \"name\": \"jdbc-connector\", 1 \"config\": { \"connector.class\": \"io.debezium.connector.jdbc.JdbcSinkConnector\", 2 \"tasks.max\": \"1\", 3 \"connection.url\": \"jdbc:postgresql://localhost/db\", 4 \"connection.username\": \"pguser\", 5 \"connection.password\": \"pgpassword\", 6 \"insert.mode\": \"upsert\", 7 \"delete.enabled\": \"true\", 8 \"primary.key.mode\": \"record_key\", 9 \"schema.evolution\": \"basic\", 10 \"database.time_zone\": \"UTC\" 11 } }" ]
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/debezium_user_guide/debezium-connector-for-jdbc
Chapter 12. Provisioning virtual machines on OpenShift Virtualization
Chapter 12. Provisioning virtual machines on OpenShift Virtualization OpenShift Virtualization addresses the needs of development teams that have adopted or want to adopt Red Hat OpenShift Container Platform but possess existing virtual machine (VM) workloads that cannot be easily containerized. This technology provides a unified development platform where developers can build, modify, and deploy applications residing in application containers and VMs in a shared environment. These capabilities support rapid application modernization across the open hybrid cloud. With Satellite, you can create a compute resource for OpenShift Virtualization so that you can provision and manage virtual machines in OpenShift Container Platform using Satellite. Note that template provisioning is not supported for this release. Important The OpenShift Virtualization compute resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . Prerequisites You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . You must have cluster-admin permissions for the OpenShift Container Platform cluster. A Capsule Server managing a network on the OpenShift Container Platform cluster. Ensure that no other DHCP services run on this network to avoid conflicts with Capsule Server. For more information about network service configuration for Capsule Servers, see Configuring Networking in Provisioning hosts . Additional resources For a list of permissions a non-admin user requires to provision hosts, see Appendix E, Permissions required to provision hosts . 12.1. Adding an OpenShift Virtualization connection to Satellite Server Use this procedure to add OpenShift Virtualization as a compute resource in Satellite. Procedure Enter the following satellite-installer command to enable the OpenShift Virtualization plugin for Satellite: Obtain a token to use for HTTP and HTTPs authentication: Log in to the OpenShift Container Platform cluster and list the secrets that contain tokens: Obtain the token for your secret: Record the token to use later in this procedure. In the Satellite web UI, navigate to Infrastructure > Compute Resources , and click Create Compute Resource . In the Name field, enter a name for the new compute resource. From the Provider list, select OpenShift Virtualization . In the Description field, enter a description for the compute resource. In the Hostname field, enter the FQDN, hostname, or IP address of the OpenShift Container Platform cluster. In the API Port field, enter the port number that you want to use for provisioning requests from Satellite to OpenShift Virtualization. In the Namespace field, enter the user name of the OpenShift Container Platform cluster. In the Token field, enter the bearer token for HTTP and HTTPs authentication. 
Optional: In the X509 Certification Authorities field, enter a certificate to enable client certificate authentication for API server calls.
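To confirm that the token you recorded grants access to the OpenShift Virtualization API before Satellite uses it, you can query the KubeVirt API directly. The following is a minimal sketch; the API hostname, port (6443 is the default for OpenShift Container Platform), namespace, and token value are placeholders for your environment.
TOKEN=<token_recorded_earlier>
curl -k -H "Authorization: Bearer $TOKEN" \
  https://ocp.example.com:6443/apis/kubevirt.io/v1/namespaces/my-namespace/virtualmachines
A successful response returns a (possibly empty) VirtualMachineList object; an HTTP 401 or 403 response indicates that the token or namespace entered in the compute resource form needs to be corrected.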
[ "satellite-installer --enable-foreman-plugin-kubevirt", "oc get secrets", "oc get secrets MY_SECRET -o jsonpath='{.data.token}' | base64 -d | xargs" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/provisioning_virtual_machines_kubevirt_kubevirt-provisioning
Chapter 13. Lifecycle bucket configuration in Multicloud Object Gateway
Chapter 13. Lifecycle bucket configuration in Multicloud Object Gateway Multicloud Object Gateway (MCG) lifecycle management provides a way to reduce storage costs caused by accumulated data objects. Deleting expired objects is a simple way to handle unused data. Data expiration is part of Amazon Web Services (AWS) lifecycle management and sets an expiration date for automatic deletion. The minimal time resolution of lifecycle expiration is one day. For more information, see Expiring objects . The AWS S3 API is used to configure lifecycle rules on buckets in MCG. For information about the data bucket APIs and their support level, see Support of Multicloud Object Gateway data bucket APIs . There are a few limitations with the expiration rule API for MCG in comparison with AWS: ExpiredObjectDeleteMarker is accepted but it is not processed. There is no option to define expiration conditions for specific non-current versions.
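The following is a minimal sketch of applying an expiration rule with the AWS CLI against an MCG S3 endpoint; the bucket name, endpoint URL, and 30-day period are assumptions for illustration and must be replaced with values from your deployment.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --endpoint-url https://s3-openshift-storage.apps.example.com \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "expire-after-30-days",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Expiration": { "Days": 30 }
      }
    ]
  }'
# Confirm that the rule was applied to the bucket
aws s3api get-bucket-lifecycle-configuration --bucket my-bucket \
  --endpoint-url https://s3-openshift-storage.apps.example.com
Because the minimal time resolution is one day, objects matching the rule are removed by the next daily expiration pass after the configured period elapses.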
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_hybrid_and_multicloud_resources/con_lifecycle-bucket-configuration-in-multicloud-object-gateway_rhodf
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/evaluating_amq_streams_on_openshift/making-open-source-more-inclusive
Chapter 6. Managing projects
Chapter 6. Managing projects As a cloud administrator, you can create and manage projects. A project is a pool of shared virtual resources, to which you can assign OpenStack users and groups. You can configure the quota of shared virtual resources in each project. You can create multiple projects with Red Hat OpenStack Platform that will not interfere with each other's permissions and resources. Users can be associated with more than one project. Each user must have a role assigned for each project to which they are assigned. 6.1. Creating a project Create a project, add members to the project and set resource limits for the project. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . Click Create Project . On the Project Information tab, enter a name and description for the project. The Enabled check box is selected by default. On the Project Members tab, add members to the project from the All Users list. On the Quotas tab, specify resource limits for the project. Click Create Project . 6.2. Editing a project You can edit a project to change its name or description, enable or temporarily disable it, or update the members in the project. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Edit Project . In the Edit Project window, you can update a project to change its name or description, and enable or temporarily disable the project. On the Project Members tab, add members to the project, or remove them as needed. Click Save . Note The Enabled check box is selected by default. To temporarily disable the project, clear the Enabled check box. To enable a disabled project, select the Enabled check box. 6.3. Deleting a project Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . Select the project that you want to delete. Click Delete Projects . The Confirm Delete Projects window is displayed. Click Delete Projects to confirm the action. The project is deleted and any user pairing is disassociated. 6.4. Updating project quotas Quotas are operational limits that you set for each project to optimize cloud resources. You can set quotas to prevent project resources from being exhausted without notification. You can enforce quotas at both the project and the project-user level. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Modify Quotas . In the Quota tab, modify project quotas as needed. Click Save . Note At present, nested quotas are not yet supported. As such, you must manage quotas individually against projects and subprojects. 6.5. Changing the active project Set a project as the active project so that you can use the dashboard to interact with objects in the project. To set a project as the active project, you must be a member of the project. It is also necessary for the user to be a member of more than one project to have the Set as Active Project option be enabled. You cannot set a disabled project as active, unless it is re-enabled. Log in to the Dashboard as a user with administrative privileges. Select Identity > Projects . In the project Actions column, click the arrow, and click Set as Active Project . Alternatively, as a non-admin user, in the project Actions column, click Set as Active Project which becomes the default action in the column. 6.6. 
Project hierarchies You can nest projects using multitenancy in the Identity service (keystone). Multitenancy allows subprojects to inherit role assignments from a parent project. 6.6.1. Creating hierarchical projects and sub-projects You can implement Hierarchical Multitenancy (HMT) using keystone domains and projects. First create a new domain and then create a project within that domain. You can then add subprojects to that project. You can also promote a user to administrator of a subproject by adding the user to the admin role for that subproject. Note The HMT structure used by keystone is not currently represented in the dashboard. Procedure Create a new keystone domain called corp : Create the parent project ( private-cloud ) within the corp domain: Create a subproject ( dev ) within the private-cloud parent project, while also specifying the corp domain: Create another subproject called qa : Note You can use the Identity API to view the project hierarchy. For more information, see https://developer.openstack.org/api-ref/identity/v3/index.html?expanded=show-project-details-detail 6.6.2. Configuring access to hierarchical projects By default, a newly-created project has no assigned roles. When you assign role permissions to the parent project, you can include the --inherited flag to instruct the subprojects to inherit the assigned permissions from the parent project. For example, a user with admin role access to the parent project also has admin access to the subprojects. Granting access to users View the existing permissions assigned to a project: View the existing roles: Grant the user account user1 access to the private-cloud project: Re-run this command using the --inherited flag. As a result, user1 also has access to the private-cloud subprojects, which have inherited the role assignment: Review the result of the permissions update: The user1 user has inherited access to the qa and dev projects. In addition, because the --inherited flag was applied to the parent project, user1 also receives access to any subprojects that are created later. Removing access from users Explicit and inherited permissions must be separately removed. Remove a user from an explicitly assigned role: Review the result of the change. Notice that the inherited permissions are still present: Remove the inherited permissions: Review the result of the change. The inherited permissions have been removed, and the resulting output is now empty: 6.6.3. Reseller project overview The goal of the Reseller project is to have a hierarchy of domains; these domains will eventually allow you to resell portions of the cloud, with a subdomain representing a fully-enabled cloud. This work has been split into phases, with phase 1 described below: Phase 1 of reseller Reseller (phase 1) is an extension of Hierarchical Multitenancy (HMT), described here: Creating hierarchical projects and sub-projects . Keystone domains were originally intended to be containers that stored users and projects, with their own table in the database back-end. With this change, domains are no longer stored in their own table and have been merged into the project table: A domain is now a type of project, distinguished by the is_domain flag.
A domain represents a top-level project in the project hierarchy: domains are roots in the project hierarchy APIs have been updated to create and retrieve domains using the projects subpath: Create a new domain by creating a project with the is_domain flag set to true List projects that are domains: get projects including the is_domain query parameter. 6.7. Project security management Security groups are sets of IP filter rules that can be assigned to project instances, and which define networking access to the instance. Security groups are project specific; project members can edit the default rules for their security group and add new rule sets. All projects have a default security group that is applied to any instance that has no other defined security group. Unless you change the default values, this security group denies all incoming traffic and allows only outgoing traffic from your instance. You can apply a security group directly to an instance during instance creation, or to a port on the running instance. Note You cannot apply a role-based access control (RBAC)-shared security group directly to an instance during instance creation. To apply an RBAC-shared security group to an instance you must first create the port, apply the shared security group to that port, and then assign that port to the instance. See Adding a security group to a port . Do not delete the default security group without creating groups that allow required egress. For example, if your instances use DHCP and metadata, your instance requires security group rules that allow egress to the DHCP server and metadata agent. 6.7.1. Creating a security group Create a security group so that you can configure security rules. For example, you can enable ICMP traffic, or disable HTTP requests. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Create Security Group . Enter a name and description for the group, and click Create Security Group . 6.7.2. Adding a security group rule By default, rules for a new group only provide outgoing access. You must add new rules to provide additional access. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Manage Rules for the security group that you want to edit. Click Add Rule to add a new rule. Specify the rule values, and click Add . The following rule fields are required: Rule Rule type. If you specify a rule template (for example, 'SSH'), its fields are automatically filled in: TCP: Typically used to exchange data between systems, and for end-user communication. UDP: Typically used to exchange data between systems, particularly at the application level. ICMP: Typically used by network devices, such as routers, to send error or monitoring messages. Direction Ingress (inbound) or Egress (outbound). Open Port For TCP or UDP rules, the Port or Port Range (single port or range of ports) to open: For a range of ports, enter port values in the From Port and To Port fields. For a single port, enter the port value in the Port field. Type The type for ICMP rules; must be in the range '-1:255'. Code The code for ICMP rules; must be in the range '-1:255'. Remote The traffic source for this rule: CIDR (Classless Inter-Domain Routing): IP address block, which limits access to IPs within the block. Enter the CIDR in the Source field. Security Group: Source group that enables any instance in the group to access any other group instance. 6.7.3. 
Deleting a security group rule Delete security group rules that you no longer require. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, click Manage Rules for the security group. Select the security group rule, and click Delete Rule . Click Delete Rule again. Note You cannot undo the delete action. 6.7.4. Deleting a security group Delete security groups that you no longer require. Procedure In the dashboard, select Project > Compute > Access & Security . On the Security Groups tab, select the group, and click Delete Security Groups . Click Delete Security Groups . Note You cannot undo the delete action.
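The dashboard steps in the security group sections above also have openstack CLI equivalents. The following is a minimal sketch, not part of the original procedure; the group name, port, and CIDR are illustrative placeholders:

# create the group, then add an SSH ingress rule restricted to one CIDR
openstack security group create web-servers --description "Allow SSH from the admin network"
openstack security group rule create web-servers --protocol tcp --dst-port 22 --remote-ip 203.0.113.0/24 --ingress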
[ "openstack domain create corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 69436408fdcb44ab9e111691f8e9216d | | name | corp | +-------------+----------------------------------+", "openstack project create private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | c50d5cf4fe2e4929b98af5abdec3fd64 | | is_domain | False | | name | private-cloud | | parent_id | 69436408fdcb44ab9e111691f8e9216d | +-------------+----------------------------------+", "openstack project create dev --parent private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | 11fccd8369824baa9fc87cf01023fd87 | | is_domain | False | | name | dev | | parent_id | c50d5cf4fe2e4929b98af5abdec3fd64 | +-------------+----------------------------------+", "openstack project create qa --parent private-cloud --domain corp +-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | domain_id | 69436408fdcb44ab9e111691f8e9216d | | enabled | True | | id | b4f1d6f59ddf413fa040f062a0234871 | | is_domain | False | | name | qa | | parent_id | c50d5cf4fe2e4929b98af5abdec3fd64 | +-------------+----------------------------------+", "openstack role assignment list --project private-cloud", "openstack role list +----------------------------------+-----------------+ | ID | Name | +----------------------------------+-----------------+ | 01d92614cd224a589bdf3b171afc5488 | admin | | 034e4620ed3d45969dfe8992af001514 | member | | 0aa377a807df4149b0a8c69b9560b106 | ResellerAdmin | | 9369f2bf754443f199c6d6b96479b1fa | heat_stack_user | | cfea5760d9c948e7b362abc1d06e557f | reader | | d5cb454559e44b47aaa8821df4e11af1 | swiftoperator | | ef3d3f510a474d6c860b4098ad658a29 | service | +----------------------------------+-----------------+", "openstack role add --user user1 --user-domain corp --project private-cloud member", "openstack role add --user user1 --user-domain corp --project private-cloud member --inherited", "openstack role assignment list --effective --user user1 --user-domain corp +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | c50d5cf4fe2e4929b98af5abdec3fd64 | | False | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | 11fccd8369824baa9fc87cf01023fd87 | | True | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | b4f1d6f59ddf413fa040f062a0234871 | | True | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role remove --user user1 --project private-cloud member", "openstack role assignment list --effective --user user1 --user-domain corp 
+----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | Role | User | Group | Project | Domain | Inherited | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+ | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | 11fccd8369824baa9fc87cf01023fd87 | | True | | 034e4620ed3d45969dfe8992af001514 | 10b5b34df21d485ca044433818d134be | | b4f1d6f59ddf413fa040f062a0234871 | | True | +----------------------------------+----------------------------------+-------+----------------------------------+--------+-----------+", "openstack role remove --user user1 --project private-cloud member --inherited", "openstack role assignment list --effective --user user1 --user-domain corp" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/users_and_identity_management_guide/assembly_managing-projects
3.3.2. Option: CPU Configuration
3.3.2. Option: CPU Configuration Use this option to select the CPU configuration type, based on the desired CPU model. Expand the list to see available options, or click the Copy host CPU configuration button to detect and apply the physical host's CPU model and configuration. Once you select a CPU configuration, its available CPU features/instructions are displayed and can be individually enabled/disabled in the CPU Features list. Refer to the following diagram, which shows these options: Figure 3.5. CPU Configuration Options Note Copying the host CPU configuration is recommended over manual configuration. Note Alternatively, run the virsh capabilities command on your host machine to view the virtualization capabilities of your system, including CPU types and NUMA capabilities.
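If you want to check from a shell which CPU model the Copy host CPU configuration button would apply, the following sketch extracts it from the virsh capabilities output. It is illustrative only and assumes that the libvirt client tools and xmllint are installed:

# dump host capabilities, then pull out the host CPU model element
virsh capabilities > caps.xml
xmllint --xpath '/capabilities/host/cpu/model/text()' caps.xml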
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sec-virt-manager-tuning-cpu-config
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.23/proc-providing-feedback-on-redhat-documentation
probe::netdev.unregister
probe::netdev.unregister Name probe::netdev.unregister - Called when the device is being unregistered Synopsis netdev.unregister Values dev_name The device that is going to be unregistered
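A minimal SystemTap script that uses this probe point might look like the following sketch; the message text is illustrative:

probe netdev.unregister {
  # dev_name is the value provided by this probe point
  printf("device %s is being unregistered\n", dev_name)
}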
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netdev-unregister
Part IV. Deprecated Functionality
Part IV. Deprecated Functionality This part provides an overview of functionality that has been deprecated in all minor releases of Red Hat Enterprise Linux 7 up to Red Hat Enterprise Linux 7.2. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar or identical to, or more advanced than, that of the deprecated package, and provides further recommendations.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.2_release_notes/part-red_hat_enterprise_linux-7.2_release_notes-deprecated_functionality
Chapter 6. Uninstalling OpenShift Data Foundation
Chapter 6. Uninstalling OpenShift Data Foundation 6.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledge base article on Uninstalling OpenShift Data Foundation .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_ibm_power/uninstalling_openshift_data_foundation
Release Notes
Release Notes Red Hat build of Keycloak 24.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/release_notes/index
probe::signal.checkperm.return
probe::signal.checkperm.return Name probe::signal.checkperm.return - Check performed on a sent signal completed Synopsis Values retstr Return value as a string name Name of the probe point
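A minimal SystemTap script that uses this probe point might look like the following sketch; the message text is illustrative:

probe signal.checkperm.return {
  # name and retstr are the values provided by this probe point
  printf("%s returned %s\n", name, retstr)
}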
[ "signal.checkperm.return" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-signal-checkperm-return
Chapter 5. CloudPrivateIPConfig [cloud.network.openshift.io/v1]
Chapter 5. CloudPrivateIPConfig [cloud.network.openshift.io/v1] Description CloudPrivateIPConfig performs an assignment of a private IP address to the primary NIC associated with cloud VMs. This is done by specifying the IP and Kubernetes node which the IP should be assigned to. This CRD is intended to be used by the network plugin which manages the cluster network. The spec side represents the desired state requested by the network plugin, and the status side represents the current state that this CRD's controller has executed. No users will have permission to modify it, and if a cluster-admin decides to edit it for some reason, their changes will be overwritten the time the network plugin reconciles the object. Note: the CR's name must specify the requested private IP address (can be IPv4 or IPv6). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the definition of the desired private IP request. status object status is the observed status of the desired private IP request. Read-only. 5.1.1. .spec Description spec is the definition of the desired private IP request. Type object Property Type Description node string node is the node name, as specified by the Kubernetes field: node.metadata.name 5.1.2. .status Description status is the observed status of the desired private IP request. Read-only. Type object Required conditions Property Type Description conditions array condition is the assignment condition of the private IP and its status conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } node string node is the node name, as specified by the Kubernetes field: node.metadata.name 5.1.3. .status.conditions Description condition is the assignment condition of the private IP and its status Type array 5.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 5.2. API endpoints The following API endpoints are available: /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs DELETE : delete collection of CloudPrivateIPConfig GET : list objects of kind CloudPrivateIPConfig POST : create a CloudPrivateIPConfig /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs/{name} DELETE : delete a CloudPrivateIPConfig GET : read the specified CloudPrivateIPConfig PATCH : partially update the specified CloudPrivateIPConfig PUT : replace the specified CloudPrivateIPConfig /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs/{name}/status GET : read status of the specified CloudPrivateIPConfig PATCH : partially update status of the specified CloudPrivateIPConfig PUT : replace status of the specified CloudPrivateIPConfig 5.2.1. /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs HTTP method DELETE Description delete collection of CloudPrivateIPConfig Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind CloudPrivateIPConfig Table 5.2. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfigList schema 401 - Unauthorized Empty HTTP method POST Description create a CloudPrivateIPConfig Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body CloudPrivateIPConfig schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 201 - Created CloudPrivateIPConfig schema 202 - Accepted CloudPrivateIPConfig schema 401 - Unauthorized Empty 5.2.2. /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the CloudPrivateIPConfig HTTP method DELETE Description delete a CloudPrivateIPConfig Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CloudPrivateIPConfig Table 5.9. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CloudPrivateIPConfig Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. 
HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CloudPrivateIPConfig Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body CloudPrivateIPConfig schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 201 - Created CloudPrivateIPConfig schema 401 - Unauthorized Empty 5.2.3. /apis/cloud.network.openshift.io/v1/cloudprivateipconfigs/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the CloudPrivateIPConfig HTTP method GET Description read status of the specified CloudPrivateIPConfig Table 5.16. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CloudPrivateIPConfig Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. 
HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CloudPrivateIPConfig Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body CloudPrivateIPConfig schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK CloudPrivateIPConfig schema 201 - Created CloudPrivateIPConfig schema 401 - Unauthorized Empty
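Based on the schema above, a minimal CloudPrivateIPConfig object looks like the following sketch. The IP address and node name are placeholders, and in practice the network plugin, not an administrator, creates these objects:

apiVersion: cloud.network.openshift.io/v1
kind: CloudPrivateIPConfig
metadata:
  name: 192.168.10.25    # the object name is the requested private IP
spec:
  node: worker-0         # node.metadata.name of the target node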
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/network_apis/cloudprivateipconfig-cloud-network-openshift-io-v1
Chapter 25. Performing post-upgrade actions
Chapter 25. Performing post-upgrade actions After you have completed the overcloud upgrade, you must perform some post-upgrade configuration to ensure that your environment is fully supported and ready for future operations. 25.1. Removing unnecessary packages and directories from the undercloud After the Leapp upgrade, remove the unnecessary packages and directories that remain on the undercloud. Procedure Remove the unnecessary packages Remove the content from the /httpboot and /tftpboot directories that includes old images used in Red Hat OpenStack 13: 25.2. Deleting users of redundant telemetry services The telemetry endpoints are disabled by default. You can use this procedure to remove any telemetry endpoints that remain after the upgrade. Prerequisites You have telemetry endpoints remaining after the upgrade. You can use the procedure to identify the remaining telemetry endpoints. Procedure Log in to your undercloud and source the overcloud authentication file: Identify the telemetry endpoints that remain after the upgrade: Delete the users of the missing endpoints: Verification Verify that the endpoint users are absent: 25.3. Validating the post-upgrade functionality Run the post-upgrade validation group to check the post-upgrade functionality. Procedure Source the stackrc file. If no inventory file exists, you must generate a static inventory file: If you are not using the default overcloud stack name, replace <stack_name> with the name of your stack. Run the openstack tripleo validator run command with the --group post-upgrade option: If you are not using the default overcloud stack name, replace <stack_name> with the name of your stack. Review the results of the validation report. To view detailed output from a specific validation, run the openstack tripleo validator show run --full command against the UUID of the specific validation from the report: Important A FAILED validation does not prevent you from deploying or running Red Hat OpenStack Platform. However, a FAILED validation can indicate a potential issue with a production environment. 25.4. Upgrading the overcloud images You must replace your current overcloud images with new versions. The new images ensure that the director can introspect and provision your nodes using the latest version of OpenStack Platform software. Prerequisites You have upgraded the undercloud to the latest version. Procedure Log in to the undercloud as the stack user. Source the stackrc file. Install the packages containing the overcloud QCOW2 archives: Remove any existing images from the images directory on the stack user's home ( /home/stack/images ): Extract the archives: Import the latest images into the director: Configure your nodes to use the new images: Important When you deploy overcloud nodes, ensure that the overcloud image version corresponds to the respective heat template version. For example, use the OpenStack Platform 16.2 images only with the OpenStack Platform 16.2 heat templates. Important The new overcloud-full image replaces the old overcloud-full image. If you made changes to the old image, you must repeat the changes in the new image, especially if you want to deploy new nodes in the future. 25.5. Updating CPU pinning parameters Red Hat OpenStack Platform 16.2 uses new parameters for CPU pinning: NovaComputeCpuDedicatedSet Sets the dedicated (pinned) CPUs. NovaComputeCpuSharedSet Sets the shared (unpinned) CPUs. 
You must migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to the NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet parameters after completing the upgrade to Red Hat OpenStack Platform 16.2. Procedure Log in to the undercloud as the stack user. If your Compute nodes support simultaneous multithreading (SMT) but you created instances with the hw:cpu_thread_policy=isolated policy, you must perform one of the following options: Unset the hw:cpu_thread_policy thread policy and resize the instances: Source your overcloud authentication file: Unset the hw:cpu_thread_policy property of the flavor: Note Unsetting the hw:cpu_thread_policy attribute sets the policy to the default prefer policy, which sets the instance to use an SMT-enabled Compute node if available. You can also set the hw:cpu_thread_policy attribute to require , which sets a hard requirements for an SMT-enabled Compute node. If the Compute node does not have an SMT architecture or enough CPU cores with available thread siblings, scheduling will fail. To prevent this, set hw:cpu_thread_policy to prefer instead of require . The default prefer policy ensures that thread siblings are used when available. If you use hw:cpu_thread_policy=isolate , you must have SMT disabled or use a platform that does not support SMT. Convert the instances to use the new thread policy. Repeat this step for all pinned instances using the hw:cpu_thread_policy=isolated policy. Migrate instances from the Compute node and disable SMT on the Compute node: Source your overcloud authentication file: Disable the Compute node from accepting new virtual machines: Migrate all instances from the Compute node. For more information on instance migration, see Migrating virtual machine instances between Compute nodes . Reboot the Compute node and disable SMT in the BIOS of the Compute node. Boot the Compute node. Re-enable the Compute node: Source the stackrc file: Edit the environment file that contains the NovaVcpuPinSet parameter. Migrate the CPU pinning configuration from the NovaVcpuPinSet parameter to NovaComputeCpuDedicatedSet and NovaComputeCpuSharedSet : Migrate the value of NovaVcpuPinSet to NovaComputeCpuDedicatedSet for hosts that were previously used for pinned instances. Migrate the value of NovaVcpuPinSet to NovaComputeCpuSharedSet for hosts that were previously used for unpinned instances. If there is no value set for NovaVcpuPinSet, then all Compute node cores should be assigned to either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet , depending on the type of instances you intend to host on the nodes. For example, your environment file might contain the following pinning configuration: To migrate the configuration to a pinned configuration, set the NovaComputeCpuDedicatedSet parameter and unset the NovaVcpuPinSet parameter: To migrate the configuration to an unpinned configuration, set the NovaComputeCpuSharedSet parameter and unset the NovaVcpuPinSet parameter: Important Ensure the configuration of either NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet matches the configuration defined in NovaVcpuPinSet . To change the configuration for either of these, or to configure both NovaComputeCpuDedicatedSet or NovaComputeCpuSharedSet , ensure the Compute nodes with the pinning configuration are not running any instances before updating the configuration. Save the file. Run the deployment command to update the overcloud with the new CPU pinning parameters. 
Additional resources Configuring CPU pinning on Compute nodes 25.6. Migrating users to the member role In Red Hat OpenStack Platform 13, the default member role is called _member_ . In Red Hat OpenStack Platform 16.2, the default member role is called member . When you complete the upgrade from Red Hat OpenStack Platform 13 to Red Hat OpenStack Platform 16.2, users that you assigned to the _member_ role still have that role. You can migrate all of the users to the member role by using the following steps. Prerequisites You have upgraded the overcloud to the latest version. Procedure List all of the users on your cloud that have the _member_ role: For each user, remove the _member_ role, and apply the member role:
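If many users carry the _member_ role, you can wrap the role removal and assignment in a shell loop instead of repeating the commands for each user by hand. The following is an illustrative sketch, not part of the original procedure; it operates on user and project IDs, so check the column output of the role assignment list command in your environment before running it:

# list user/project pairs that hold _member_, then swap the role on each
openstack role assignment list --role _member_ -f value -c User -c Project | \
while read user project; do
  openstack role remove --user "$user" --project "$project" _member_
  openstack role add --user "$user" --project "$project" member
done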
[ "sudo dnf -y remove --exclude=python-pycadf-common python2*", "sudo rm -rf /httpboot /tftpboot", "source ~/overcloudrc", "openstack endpoint list | grep -i -e aodh -e gnocchi -e panko", "openstack user delete aodh gnocchi panko", "openstack user list", "source ~/stackrc", "tripleo-ansible-inventory --static-yaml-inventory ~USDHOME/config-download/<stack_name>/tripleo-ansible-inventory.yaml --stack <stack_name> --ansible_ssh_user heat-admin", "openstack tripleo validator run --group {validation} --inventory ~USDHOME/config-download/<stack_name>/tripleo-ansible-inventory.yaml", "openstack tripleo validator show run --full <UUID>", "source ~/stackrc", "sudo dnf install rhosp-director-images rhosp-director-images-ipa-x86_64", "rm -rf ~/images/*", "cd ~/images for i in /usr/share/rhosp-director-images/overcloud-full-latest-16.2.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-16.2.tar; do tar -xvf USDi; done cd ~", "openstack overcloud image upload --update-existing --image-path /home/stack/images/", "openstack overcloud node configure USD(openstack baremetal node list -c UUID -f value)", "source ~/overcloudrc", "(overcloud) USD openstack flavor unset --property hw:cpu_thread_policy <flavor>", "(overcloud) USD openstack server resize --flavor <flavor> <server> (overcloud) USD openstack server resize confirm <server>", "source ~/overcloudrc", "(overcloud) USD openstack compute service list (overcloud) USD openstack compute service set <hostname> nova-compute --disable", "(overcloud) USD openstack compute service set <hostname> nova-compute --enable", "source ~/stackrc", "parameter_defaults: NovaVcpuPinSet: 1,2,3,5,6,7", "parameter_defaults: NovaComputeCpuDedicatedSet: 1,2,3,5,6,7 NovaVcpuPinSet: \"\"", "parameter_defaults: NovaComputeCpuSharedSet: 1,2,3,5,6,7 NovaVcpuPinSet: \"\"", "(undercloud) USD openstack overcloud deploy --stack _STACK NAME_ --templates -e /home/stack/templates/<compute_environment_file>.yaml", "openstack role assignment list --names --role _member_ --sort-column project", "openstack role remove --user <user> --project <project> _member_ openstack role add --user <user> --project <project> member" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/framework_for_upgrades_13_to_16.2/performing-post-upgrade-actions
Chapter 6. Security
Chapter 6. Security 6.1. Connecting with a user and password AMQ .NET can authenticate connections with a user and password. To specify the credentials used for authentication, set the user and password fields in the connection URL. Example: Connecting with a user and password Address addr = new Address("amqp:// <user> : <password> @example.com"); Connection conn = new Connection(addr); 6.2. Configuring SASL authentication Client connections to remote peers may exchange SASL user name and password credentials. The presence of the user field in the connection URI controls this exchange. If user is specified then SASL credentials are exchanged; if user is absent then the SASL credentials are not exchanged. By default the client supports EXTERNAL , PLAIN , and ANONYMOUS SASL mechanisms. 6.3. Configuring an SSL/TLS transport Secure communication with servers is achieved using SSL/TLS. A client may be configured for SSL/TLS Handshake only or for SSL/TLS Handshake and client certificate authentication. See the Managing Certificates section for more information. Note TLS Server Name Indication (SNI) is handled automatically by the client library. However, SNI is signaled only for addresses that use the amqps transport scheme where the host is a fully qualified domain name or a host name. SNI is not signaled when the host is a numeric IP address.
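For the SSL/TLS transport described in section 6.3, the connection URL uses the amqps scheme instead. The following sketch uses a placeholder host and the conventional AMQPS port, and follows the same Address and Connection API shown above:

Address addr = new Address("amqps://<user>:<password>@example.com:5671");
Connection conn = new Connection(addr);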
[ "Address addr = new Address(\"amqp:// <user> : <password> @example.com\"); Connection conn = new Connection(addr);" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_.net_client/security
Appendix A. Using your subscription
Appendix A. Using your subscription Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired Streams for Apache Kafka product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously-downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2024-05-30 17:22:50 UTC
[ "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/kafka_configuration_tuning/using_your_subscription
Developing decision services in Red Hat Decision Manager
Developing decision services in Red Hat Decision Manager Red Hat Decision Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/index
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation When you deploy OpenShift Data Foundation on OpenShift Container Platform using local storage devices, you can create internal cluster resources. This approach internally provisions base services, and all applications can access additional storage classes. Before you begin the deployment of Red Hat OpenShift Data Foundation using local storage, ensure that your resource requirements are met. See requirements for installing OpenShift Data Foundation using local storage devices . On the external key management system (KMS), when the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS . Ensure that you are using signed certificates on your Vault servers. After you have addressed the above, follow these steps in the order given: Install the Red Hat OpenShift Data Foundation Operator . Install Local Storage Operator . Find the available storage devices . Create the OpenShift Data Foundation cluster service on IBM Z . 1.1. Requirements for installing OpenShift Data Foundation using local storage devices Node requirements The cluster must consist of at least three OpenShift Container Platform worker or infrastructure nodes with locally attached storage devices on each of them. Each of the three selected nodes must have at least one raw block device available. OpenShift Data Foundation uses the one or more available raw block devices. The devices you use must be empty; the disks must not include Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disk. For more information, see the Resource requirements section in the Planning guide . 1.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy that restricts users to performing write or delete operations on the secret: Create a token that matches the above policy:
[ "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_z/preparing_to_deploy_openshift_data_foundation
Chapter 121. KafkaBridge schema reference
Chapter 121. KafkaBridge schema reference Property Description spec The specification of the Kafka Bridge. KafkaBridgeSpec status The status of the Kafka Bridge. KafkaBridgeStatus
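For orientation, a minimal KafkaBridge resource that exercises this schema might look like the following sketch; the bridge name, bootstrap address, and HTTP port are placeholders:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  http:
    port: 8080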
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkabridge-reference
Chapter 100. KafkaUserSpec schema reference
Chapter 100. KafkaUserSpec schema reference Used in: KafkaUser Property Description authentication Authentication mechanism enabled for this Kafka user. The supported authentication mechanisms are scram-sha-512 , tls , and tls-external . scram-sha-512 generates a secret with SASL SCRAM-SHA-512 credentials. tls generates a secret with user certificate for mutual TLS authentication. tls-external does not generate a user certificate, but prepares the user for using mutual TLS authentication with a user certificate generated outside the User Operator. ACLs and quotas set for this user are configured in the CN=<username> format. Authentication is optional. If authentication is not configured, no credentials are generated. ACLs and quotas set for the user are configured in the <username> format suitable for SASL authentication. The type depends on the value of the authentication.type property within the given object, which must be one of [tls, tls-external, scram-sha-512]. KafkaUserTlsClientAuthentication , KafkaUserTlsExternalClientAuthentication , KafkaUserScramSha512ClientAuthentication authorization Authorization rules for this Kafka user. The type depends on the value of the authorization.type property within the given object, which must be one of [simple]. KafkaUserAuthorizationSimple quotas Quotas on requests to control the broker resources used by clients. Network bandwidth and request rate quotas can be enforced. Kafka documentation for Kafka User quotas can be found at http://kafka.apache.org/documentation/#design_quotas . KafkaUserQuotas template Template to specify how Kafka User Secrets are generated. KafkaUserTemplate
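A minimal KafkaUser resource that uses this schema might look like the following sketch; the user name, cluster label, and quota values are placeholders:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: tls
  quotas:
    producerByteRate: 1048576
    consumerByteRate: 1048576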
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaUserSpec-reference
Chapter 4. Labels for JFR recordings
Chapter 4. Labels for JFR recordings When you create a JDK Flight Recorder (JFR) recording on Cryostat 2.4, you can add metadata to the recording by specifying a series of key-value label pairs. Additionally, you can attach custom labels to JFR recordings that are inside a target JVM, so that you can easily identify and better manage your JFR recordings. The following list details some common recording label use cases: Attach metadata to your JFR recording. Perform batch operations on recordings that contain identical labels. Use labels when running queries on recordings. You can use Cryostat to create a JFR recording that monitors the performance of your JVM in your containerized application. Additionally, you can take a snapshot of an active JFR recording to capture any collected data, up to a specific point in time, for your target JVM application. 4.1. Adding labels to JFR recordings When you create a JFR recording on Cryostat 2.4, you can use labels to add metadata that contain key-value label pairs to the recording. Cryostat applies default recording labels to a created JFR recording. These default labels capture information about the event template that Cryostat used to create the JFR recording. You can add custom labels to your JFR recording so that you can run specific queries that meet your needs, such as identifying specific JFR recordings or performing batch operations on recordings with the same applied labels. Prerequisites Logged in to your Cryostat web console. Created or selected a target JVM for your Cryostat instance. Procedure From your Cryostat web console, click Recordings . Under the Active Recordings tab, click Create . On the Custom Flight Recording tab, expand Show metadata options . Note On the Custom Flight Recording tab, you must complete any mandatory field that is marked with an asterisk. Click Add label . Figure 4.1. The Add Label button that is displayed under the Custom Flight Recording tab Enter values in the provided Key and Value fields. For example, if you want to file an issue with the recordings, you could enter the reason in the Key field and then enter the issue type in the Value field. Click Create to create your JFR recording. Your recording is then shown under the Active Recordings tab along with any specified recording labels and custom labels. Tip You can access archived JFR recordings from the Archives menu. See Uploading a JFR recording to Cryostat archives location (Using Cryostat to manage a JFR recording). Example The following example shows two default recording labels, template.name: Profiling and template.type: TARGET , and one custom label, reason:service-outage . Figure 4.2. Example of an active recording with defined recording labels and a custom label 4.2. Editing a label for your JFR recording On the Cryostat web console, you can navigate to the Recordings menu and then edit a label and its metadata for your JFR recording. You can also edit the label and metadata for a JFR recording that you uploaded to archives. Prerequisites Logged in to your Cryostat web console. Created a JFR recording and attach labels to this recording. Procedure On your Cryostat web console, click the Recording menu. From the Active Recordings tab, locate your JFR recording, and then select the checkbox to it. Click Edit Labels . An Edit Recording Label pane opens in your Cryostat web console, which you can use to add, edit, or delete labels for your JFR recording. 
Tip You can select multiple JFR recordings by selecting the checkbox that is next to each recording. Click the Edit Labels button if you want to bulk edit recordings that contain the same labels or add new identical labels to multiple recordings. Optional : You can perform any of the following actions from the Edit Recording Labels pane: Click Add to create a label. Delete a label by clicking the X next to the label. Edit a label by modifying any content in a field. After you edit content, a green tick is shown in the field to indicate an edit. Click Save . Optional : You can archive your JFR recordings along with their labels by completing the following steps: Select the checkbox next to the recording's name. Click the Archive button. You can locate your recording under the Archived Recordings tab. By archiving your recording with its labels, you can enhance your search capabilities when you want to locate the recording at a later stage. You can also add additional labels to any recording that you uploaded to the Cryostat archives. Note Cryostat preserves any labels with the recording for the lifetime of the archived recording. Verification From the Active Recordings tab, check that your changes display under the Labels section for your recording. Additional resources Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording) Revised on 2023-12-12 17:44:20 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/creating_a_jfr_recording_with_cryostat/assembly_labels-jfr-recordings_cryostat
Chapter 8. Interoperability
Chapter 8. Interoperability This chapter discusses how to use AMQ Ruby in combination with other AMQ components. For an overview of the compatibility of AMQ components, see the product introduction . 8.1. Interoperating with other AMQP clients AMQP messages are composed using the AMQP type system . This common format is one of the reasons AMQP clients in different languages are able to interoperate with each other. When sending messages, AMQ Ruby automatically converts language-native types to AMQP-encoded data. When receiving messages, the reverse conversion takes place. Note More information about AMQP types is available at the interactive type reference maintained by the Apache Qpid project. Table 8.1. AMQP types AMQP type Description null An empty value boolean A true or false value char A single Unicode character string A sequence of Unicode characters binary A sequence of bytes byte A signed 8-bit integer short A signed 16-bit integer int A signed 32-bit integer long A signed 64-bit integer ubyte An unsigned 8-bit integer ushort An unsigned 16-bit integer uint An unsigned 32-bit integer ulong An unsigned 64-bit integer float A 32-bit floating point number double A 64-bit floating point number array A sequence of values of a single type list A sequence of values of variable type map A mapping from distinct keys to values uuid A universally unique identifier symbol A 7-bit ASCII string from a constrained domain timestamp An absolute point in time Table 8.2. AMQ Ruby types before encoding and after decoding AMQP type AMQ Ruby type before encoding AMQ Ruby type after decoding null nil nil boolean true, false true, false char - String string String String binary - String byte - Integer short - Integer int - Integer long Integer Integer ubyte - Integer ushort - Integer uint - Integer ulong - Integer float - Float double Float Float array - Array list Array Array map Hash Hash symbol Symbol Symbol timestamp Date, Time Time Table 8.3. AMQ Ruby and other AMQ client types (1 of 2) AMQ Ruby type before encoding AMQ C++ type AMQ JavaScript type nil nullptr null true, false bool boolean String std::string string Integer int64_t number Float double number Array std::vector Array Hash std::map object Symbol proton::symbol string Date, Time proton::timestamp number Table 8.4. AMQ Ruby and other AMQ client types (2 of 2) AMQ Ruby type before encoding AMQ .NET type AMQ Python type nil null None true, false System.Boolean bool String System.String unicode Integer System.Int64 long Float System.Double float Array Amqp.List list Hash Amqp.Map dict Symbol Amqp.Symbol str Date, Time System.DateTime long 8.2. Interoperating with AMQ JMS AMQP defines a standard mapping to the JMS messaging model. This section discusses the various aspects of that mapping. For more information, see the AMQ JMS Interoperability chapter. JMS message types AMQ Ruby provides a single message type whose body type can vary. By contrast, the JMS API uses different message types to represent different kinds of data. The table below indicates how particular body types map to JMS message types. For more explicit control of the resulting JMS message type, you can set the x-opt-jms-msg-type message annotation. See the AMQ JMS Interoperability chapter for more information. Table 8.5. AMQ Ruby and JMS message types AMQ Ruby body type JMS message type String TextMessage nil TextMessage - BytesMessage Any other type ObjectMessage 8.3. Connecting to AMQ Broker AMQ Broker is designed to interoperate with AMQP 1.0 clients. 
Check the following to ensure the broker is configured for AMQP messaging: Port 5672 in the network firewall is open. The AMQ Broker AMQP acceptor is enabled. See Default acceptor settings . The necessary addresses are configured on the broker. See Addresses, Queues, and Topics . The broker is configured to permit access from your client, and the client is configured to send the required credentials. See Broker Security . 8.4. Connecting to AMQ Interconnect AMQ Interconnect works with any AMQP 1.0 client. Check the following to ensure the components are configured correctly: Port 5672 in the network firewall is open. The router is configured to permit access from your client, and the client is configured to send the required credentials. See Securing network connections .
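As a quick, non-authoritative check of the first item in both checklists, you can probe the AMQP port from the client host before running the client; broker.example.com is a placeholder for your broker or router host:
# Confirm that the AMQP listener port is reachable from the client host
nc -zv broker.example.com 5672
# On the broker or router host itself, confirm that a process is listening on the port
ss -tln | grep 5672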
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_ruby_client/interoperability
17.5. Debug the Application
17.5. Debug the Application To debug or examine the source code of the quickstart or any of its associated libraries, run either of the following commands to pull them into the local repository:
[ "mvn dependency:sources mvn dependency:resolve -Dclassifier=javadoc" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/eap_cluster_app_debugging_the_cluster_app_quickstart
Chapter 1. Using groups in Directory Server
Chapter 1. Using groups in Directory Server You can add users to groups in Directory Server. Groups are one of the mechanisms for grouping directory entries, which simplifies management of user accounts. When you use a group, Directory Server stores the distinguished names (DNs) of the users who are members of this group in a membership attribute of the group entry. This special attribute is defined by the object class you choose when creating a group entry. For details about the group types, see Group types in Directory Server . Groups are faster than roles. However, for a group to have the benefits of a role, you need to enable the MemberOf plug-in. By default, the MemberOf plug-in automatically adds the memberOf attribute to a user entry if this user is a member of the group. As a result, the information about the membership is stored in both the group and user entries. For details about the MemberOf plug-in, see Listing group membership in user entries . 1.1. Group types in Directory Server In Directory Server, you can add members to a static or dynamic group. For more details about the definition of each group type, see About groups in Directory Server . A group object class defines a membership attribute, and to add a member to the group, you need to add a value to this membership attribute of the group entry. The following table lists group object classes and corresponding membership attributes. Group type Object class Membership attribute Static groupOfNames member groupOfUniqueNames uniqueMember Dynamic groupOfURLs memberURL groupOfCertificates memberCertificateDescription Object classes that you can use when you create a group: groupOfNames is a simple group. You can add any entry to this group. The member attribute determines the group membership. The member attribute values are distinguished names (DNs) of user entries that are members of the group. groupOfUniqueNames lists user DNs as members; however, the DNs must be unique. This group prevents self-referential group memberships. The uniqueMember attribute determines the group membership. groupOfURLs uses a list of LDAP URLs to filter and create its membership list. Any dynamic group requires this object class, and you can use it in conjunction with groupOfNames and groupOfUniqueNames . The memberURL attribute determines the group membership. groupOfCertificates uses an LDAP filter to search for certificate names to identify group members. Use the groupOfCertificates object class for group-based access control, because you can give special access permissions to this group. The memberCertificateDescription attribute determines the group membership. Important If you use an object class of a static group together with one of the dynamic object classes, the group becomes dynamic. The MemberOf plug-in does not support dynamic groups. Therefore, the plug-in does not add the memberOf attribute to the user entry if the user entry matches the filter of a dynamic group. 1.2. Creating a static group You can create a static group by using the command line or the web console. 1.2.1. Creating a static group using the command line Use the dsidm utility to create a static group with the groupOfNames object class. Use the ldapmodify utility to create a static group with the groupOfUniqueNames object class. The following example creates two static groups in the ou=groups,dc=example,dc=com entry. Prerequisites The ou=groups,dc=example,dc=com parent entry exists. If the entry does not exist yet, you can create it as shown in the example that follows.
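The parent entry named in the prerequisite is an ordinary organizational unit. If you need to create it first, a minimal sketch with the ldapmodify utility follows; the suffix dc=example,dc=com is the example suffix used throughout this chapter, so adjust the DN to your environment:
ldapmodify -D "cn=Directory Manager" -W -H ldap://server.example.com -x
dn: ou=groups,dc=example,dc=com
changetype: add
objectClass: top
objectClass: organizationalUnit
ou: groups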
Procedure To create the cn=simple_group group with the groupOfNames object class, run: Note that the dsidm group create command creates groups only in the ou=groups sub-entry. If you want to create a group in another entry, use the ldapmodify utility. To create the cn=unique_members_group group with the groupOfUniqueNames object class, run: Verification Use the dsidm group list command to list groups with the groupOfNames object class: Use the dsidm uniquegroup list command to list groups with unique members: Next steps Adding members to a static group . 1.2.2. Creating a static group in the LDAP Browser You can use the web console to create a static group. The following example creates a static_group in the ou=groups,dc=example,dc=com parent entry. Prerequisites The ou=groups,dc=example,dc=com parent entry exists. You have permissions to log in to the instance in the web console. For more information about logging in to the web console, see Logging in to the Directory Server by using the web console . Procedure Navigate to the LDAP Browser menu. Using the Tree or Table view, expand the parent entry ou=groups,dc=example,dc=com under which you want to create the group. Click the Options menu (⫶) and select New to open the wizard window. Select Create a group and click Next . Select Basic Group for the group type and click Next . Add the group name, group description, and select the membership attribute for the group: member for the group with the groupOfNames object class. uniquemember for the group with the groupOfUniqueNames object class. Click Next . Optional: Add members to the group and click Next . Verify the group information, click Create , and then Finish . Verification Expand the newly created group entry in the suffix tree. 1.3. Adding members to static groups You can add a member to a group by using the command line or the web console. 1.3.1. Adding members to a static group using the command line To add a member to a static group, use the ldapmodify utility. Prerequisites The group entry exists. The user entries exist. Procedure To add a member to a static group with the groupOfNames object class, add the user distinguished name (DN) as the value to the member attribute of the group entry: The command adds the uid=jsmith user to the cn=simple_group group. To add a member to a static group with the groupOfUniqueNames object class, add the user distinguished name (DN) as the value to the uniqueMember attribute of the group entry: The command adds the uid=ajonson user to the cn=unique_members_group group. Verification List the members of the group: 1.3.2. Adding members to a static group in LDAP Browser You can add a member to a static group in the web console by using LDAP Browser. Prerequisites The group entry exists. The user entry exists. You are logged in to the instance in the web console. For more details about logging in to the web console, see Logging in to the Directory Server by using the web console . Procedure Navigate to the LDAP Browser menu. Using the Tree or Table view, expand the group entry to which you want to add the member. For example, you want to add a member to cn=unique_members_group,ou=groups,dc=example,dc=com . Click the Options menu (⫶) and select Edit to open the wizard window. The window displays the current members list. Select the Find New Members tab. Type part of the uid or cn attribute value of the member in the search bar and press Enter . The Available Members field displays the user distinguished names (DNs) that you can add to the group.
Select the member DN and move it to the Chosen Members field by clicking the arrow (>). Click the Add Member button. Verification Expand the cn=unique_members_group,ou=groups,dc=example,dc=com group entry and find the added user in the entry details. 1.4. Creating a dynamic group using the command line Directory Server supports creating dynamic groups by using only the command line. Use the ldapmodify utility to create a dynamic group with the groupOfURLs and groupOfCertificates object classes. The following example creates two dynamic groups in the ou=groups,dc=example,dc=com entry. Prerequisites The ou=groups,dc=example,dc=com parent entry exists. Procedure To create the cn=example_dynamic_group group with the groupOfURLs object class, run: The command creates a dynamic group whose members are entries that have the person object class and a common name ( cn ) value that ends with sen . To create the cn=example_certificates_group group with the groupOfCertificates object class, run: The command creates a dynamic group that filters members whose certificate subject DNs contain ou=people,l=USA,dc=example,dc=com . Verification Search for the newly created group with the groupOfURLs object class: Search for the newly created group with the groupOfCertificates object class: Additional resources Adding an LDAP entry using the command line 1.5. Listing group membership in user entries A group defines entries that belong to this group by using the membership attribute. It is easy to look at the group and find its members. For example, a static group with the groupOfNames object class stores distinguished names (DNs) of its members as values of the member attribute. However, you cannot quickly find out what groups a single user belongs to. With groups, a user entry does not contain anything that indicates the user memberships, unlike with roles. To solve this problem, you can use the MemberOf plug-in. The MemberOf plug-in analyzes the membership attribute in a group entry and automatically writes the memberOf attribute in the user entry that points to the group. By default, the plug-in checks the member attribute in the groups; however, you can use several attributes to support different group types. When you add or delete a member of a group, the plug-in updates the memberOf attributes in the user entries. With the MemberOf plug-in, you can do a simple search against a specific user entry to find all groups that the user is a member of. The MemberOf plug-in shows direct and indirect memberships for all groups. Important The MemberOf plug-in manages membership attributes only for static groups. Additional resources Group types in Directory Server 1.5.1. Considerations when using the MemberOf plug-in When using the MemberOf plug-in, consider the following: The MemberOf plug-in in a replication topology In a replication topology, you can manage the MemberOf plug-in in two ways: Enable the MemberOf plug-in on all supplier and consumer servers in the topology. In this case, you must exclude the memberOf attribute of user entries from replication in all replication agreements. Enable the MemberOf plug-in only on all supplier servers in the topology. To do this: You must disable replication of the memberOf attribute to all write-enabled suppliers in the replication agreement. You must enable replication of the memberOf attribute to all consumer replicas in their replication agreement. You must disable the MemberOf plug-in on consumer replicas.
The MemberOf plug-in with distributed databases As described in Creating and maintaining databases , you can store sub-trees of your directory in separate databases. By default, the MemberOf plug-in only updates user entries that are stored within the same database as the group. To update users across all databases, you must set the memberOfAllBackends parameter to on . For more details about setting the memberOfAllBackends parameter, see Configuring the MemberOf plug-in on each server using the web console . 1.5.2. Required object classes for the MemberOf plug-in By default, the MemberOf plug-in adds the nsMemberOf object class to user entries to provide the memberOf attribute. The nsMemberOf object class is sufficient for the plug-in to work correctly. Alternatively, you can create user entries that contain the inetUser , inetAdmin , or inetOrgPerson object class. These object classes support the memberOf attribute. To configure nested groups, the group must use the extensibleObject object class. Note If directory entries do not contain an object class that supports the required attributes, operations fail with the following error: 1.5.3. The MemberOf plug-in syntax When configuring the MemberOf plug-in, you set two main attributes: memberOfGroupAttr . Defines which membership attribute to poll from the group entry. The memberOfGroupAttr attribute is multi-valued. Therefore, the plug-in can manage multiple types of groups. By default, the plug-in polls the member attribute. memberOfAttr . Defines which membership attribute to create and manage in the member's user entry. By default, the plug-in adds the memberOf attribute to the user entry. In addition, the plug-in syntax provides the plug-in path, the function that identifies the MemberOf plug-in, the plug-in state, and other configuration parameters. The following example shows the default MemberOf plug-in entry configuration: For details about the parameters in the example and other parameters you can set, see the MemberOf plug-in section in the "Configuration and schema reference" documentation. 1.5.4. Enabling the MemberOf plug-in You can enable the MemberOf plug-in by using the command line or the web console. 1.5.4.1. Enabling the MemberOf plug-in using the command line Use the dsconf utility to enable the MemberOf plug-in. Procedure Enable the plug-in: Restart the instance: Verification View the plug-in configuration details: Additional resources Configuring the MemberOf plug-in on each server 1.5.4.2. Enabling the MemberOf plug-in using the web console You can use the web console to enable the MemberOf plug-in. Prerequisites You are logged in to the instance in the web console. For more details about logging in to the web console, see Logging in to the Directory Server by using the web console . Procedure Navigate to the Plugins menu. Select the MemberOf plug-in in the list of plug-ins. Change the status to ON to enable the plug-in. Restart the instance. For instructions on restarting the instance, see Starting and stopping a Directory Server instance by using the web console . Additional resources Configuring the MemberOf plug-in on each server using the web console 1.5.5. Configuring the MemberOf plug-in on each server If you do not want to replicate the configuration of the MemberOf plug-in, configure the plug-in manually on each server by using the command line or the web console. 1.5.5.1.
Configuring the MemberOf plug-in on each server using the command line By default, the MemberOf plug-in reads the member membership attribute from the group entries and adds the memberOf attribute to the user entries. However, you can configure the plug-in to read a different membership attribute from the group, add a different attribute to the user entry, skip nested groups, work on all databases, and change other settings. For example, you want the MemberOf plug-in to do the following: Read the uniqueMember attribute from group entries to identify membership. Skip nested groups. Search for user entries in all databases. Prerequisites You enabled the MemberOf plug-in. For details, see Enabling the MemberOf plug-in . Procedure Optional: Display the MemberOf plug-in configuration to see which membership attribute the plug-in currently reads from group entries: The plug-in currently reads the member attribute from the group entry to retrieve members. Set the uniqueMember attribute as the value of the memberOfGroupAttr parameter in the plug-in configuration: The memberOfGroupAttr parameter is multi-valued, and you can set several values by passing them all to the --groupattr parameter. For example: In an environment that uses distributed databases, configure the plug-in to search user entries in all databases instead of only the local database: The command sets the memberOfAllBackends parameter. Configure the plug-in to skip nested groups: The command sets the memberOfSkipNested parameter. Optional: By default, the plug-in adds the nsMemberOf object class to user entries if the user entries do not have an object class that allows the memberOf attribute. To configure the plug-in to add the inetUser object class to the user entries instead of nsMemberOf , run: The command sets the memberOfAutoAddOC parameter. Restart the instance: Verification View the MemberOf plug-in configuration: Additional resources Updating the memberOf attribute values in user entries using the fixup task Sharing the MemberOf plug-in configuration between servers Setting the scope of the MemberOf plug-in Required object classes for the MemberOf plug-in 1.5.5.2. Configuring the MemberOf plug-in on each server using the web console By default, the MemberOf plug-in reads the member membership attribute from the group entries and adds the memberOf attribute to the user entries. However, you can configure the plug-in to read a different membership attribute from the group, skip nested groups, work on all databases, and change other settings by using the web console. For example, you want the MemberOf plug-in to do the following: Read the member and uniqueMember attributes from group entries to identify membership. Set the scope of the plug-in to dc=example,dc=com . Skip nested groups. Search for user entries in all databases. Prerequisites You are logged in to the instance in the web console. For more details about logging in to the web console, see Logging in to the Directory Server by using the web console . You enabled the MemberOf plug-in. For details, see Enabling the MemberOf plug-in . Procedure Navigate to the Plugins menu. Select the MemberOf plug-in from the plug-ins list. Add the uniqueMember attribute to the Group Attribute field. Set the scope of the plug-in to dc=example,dc=com : Enter dc=example,dc=com in the Subtree Scope field. Click Create "dc=example,dc=com" in the drop-down list. Optional: Set a subtree to exclude.
For example, you do not want the plug-in to work on the ou=private,dc=example,dc=com subtree: Enter ou=private,dc=example,dc=com in the Exclude Subtree field. Click Create "ou=private,dc=example,dc=com" in the drop-down list. Check All Backends to configure the plug-in to search user entries in all databases instead of only the local database. Check Skip Nested to configure the plug-in to skip nested groups. Click Save Config . Additional resources Sharing the MemberOf plug-in configuration between servers Setting the scope of the MemberOf plug-in using the command line MemberOf plug-in Updating the memberOf attribute values in user entries using the fixup task 1.5.6. Sharing the MemberOf plug-in configuration between servers By default, each server stores its own configuration of the MemberOf plug-in. With the shared configuration of the plug-in, you can use the same settings without configuring the plug-in manually on each server. Directory Server stores the shared configuration outside of the cn=config suffix and replicates it. For example, you want to store the plug-in shared configuration in the cn=shared_MemberOf_config,dc=example,dc=com entry. Important After enabling the shared configuration, the plug-in ignores all parameters set in the cn=MemberOf Plugin,cn=plugins,cn=config plug-in entry and only uses settings from the shared configuration entry. Prerequisites You enabled the MemberOf plug-in on all servers in the replication topology. For details, see Enabling the MemberOf plug-in . Procedure Enable the shared configuration entry on a server: The command sets the nsslapd-pluginConfigArea attribute value to cn=shared_MemberOf_config,dc=example,dc=com . Restart the instance: Enable the shared configuration on other servers in the replication topology that should use the shared configuration: Set the distinguished name (DN) of the configuration entry that stores the shared configuration: Restart the instance: Verification Check that the MemberOf plug-in uses the shared configuration: Optional: Check the shared configuration settings: Additional resources nsslapd-pluginConfigArea 1.5.7. Setting the scope of the MemberOf plug-in If you configured several backends or multiple nested suffixes, you can use the memberOfEntryScope and memberOfEntryScopeExcludeSubtree parameters to set what suffixes the MemberOf plug-in works on. If you add a user to a group, the MemberOf plug-in only adds the memberOf attribute to the user entry if both the user and the group are in the plug-in's scope. For example, the following procedure configures the MemberOf plug-in to work on all entries in dc=example,dc=com , but to exclude entries in ou=private,dc=example,dc=com . Prerequisites You enabled the MemberOf plug-in on all servers in the replication topology. For details, see Enabling the MemberOf plug-in . Procedure Set the scope value for the MemberOf plug-in to dc=example,dc=com : Exclude entries in ou=private,dc=example,dc=com : If you move a user entry out of the scope that you set with the --scope DN parameter: The MemberOf plug-in updates the membership attribute, such as member , in the group entry to remove the user DN value. The MemberOf plug-in updates the memberOf attribute in the user entry to remove the group DN value. Note The value set in the --exclude parameter has a higher priority than values set in --scope . If the scopes set in both parameters overlap, the MemberOf plug-in only works on the non-overlapping directory entries.
For details about setting the scope for the MemberOf plug-in, see Configuring the MemberOf plug-in on each server using the web console . 1.5.8. Updating the memberOf attribute values in user entries using the fixup task The MemberOf plug-in automatically manages memberOf attributes in group member entries based on the configuration in the group entry. However, you need to run the fixup task in the following situations to avoid inconsistency between the memberOf configuration that the server plug-in manages and the actual memberships defined in user entries: You added group members to a group before you enabled the MemberOf plug-in. You manually edited the memberOf attribute in a user entry. You imported or replicated new user entries to the server that already have the memberOf attribute. Note that you can run the fixup tasks only locally. In a replication environment, Directory Server updates the memberOf attribute for entries on other servers after Directory Server replicates the updated entries. Prerequisites You enabled the MemberOf plug-in on all servers in the replication topology. For details, see Enabling the MemberOf plug-in . Procedure For example, to update the memberOf values in dc=example,dc=com entry and subentries, run: By default, the fixup task updates memberOf values in all entries that contain the inetUser , inetAdmin , or nsMemberOf object class. If you want the fixup task to also work on entries that contain other object classes, use -f filter option: This fixup task updates memberOf values in all entries that contain the inetUser , inetAdmin , nsMemberOf , or inetOrgPerson object class. Additional resources LDAP search filters
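As an informal verification of the MemberOf behavior described in this chapter, you can read the memberOf attribute directly from a user entry after the plug-in is enabled and, if needed, the fixup task has run. The sketch below reuses the uid=jsmith example user from the earlier sections; the exact DN and the returned values depend on your data:
ldapsearch -xLLL -D "cn=Directory Manager" -W -H ldap://server.example.com -b "dc=example,dc=com" "(uid=jsmith)" memberOf
# Illustrative output only:
# dn: uid=jsmith,ou=people,dc=example,dc=com
# memberOf: cn=simple_group,ou=groups,dc=example,dc=com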
[ "dsidm -D \"cn=Directory Manager\" ldap://server.example.com -b \"dc=example,dc=com\" group create --cn \"simple_group\" Successfully created simple_group", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn=unique_members_group,ou=groups,dc=example,dc=com changetype: add objectClass: top objectClass: groupOfUniqueNames cn: unique_members_group description: A static group with unique members adding new entry \"cn=unique_members_group,ou=groups,dc=example,dc=com\"", "dsidm --basedn \"dc=example,dc=com\" instance_name group list simple_group", "dsidm --basedn \"dc=example,dc=com\" instance_name uniquegroup list unique_members_group", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn=simple_group,ou=groups,dc=example,dc=com changetype: modify add: member member: uid=jsmith,ou=people,dc=example,dc=com modifying entry \"cn=simple_group,ou=groups,dc=example,dc=com\"", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn=unique_members_group,ou=groups,dc=example,dc=com changetype: modify add: uniqueMember uniqueMember: uid=ajonson,ou=people,dc=example,dc=com modifying entry \"cn=unique_members_group,ou=groups,dc=example,dc=com\"", "ldapsearch -xLL -D \"cn=Directory Manager\" -W -b dc=example,dc=com \"(cn=simple_group)\" dn: cn=simple_group,ou=Groups,dc=example,dc=com objectClass: top objectClass: groupOfNames objectClass: nsMemberOf cn: simple_group member: uid=jsmith,ou=people,dc=example,dc=com member: uid=mtomson,ou=people,dc=example,dc=com", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn=example_dynamic_group,ou=groups,dc=example,dc=com changetype: add objectClass: top objectClass: groupOfURLs cn: example_dynamic_group description: Example dynamic group for user entries memberURL: ldap:///dc=example,dc=com??sub?(&(objectclass=person)(cn=*sen)) adding new entry \"cn=example_dynamic_group,ou=groups,dc=example,dc=com\"", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: cn=example_certificates_group,ou=groups,dc=example,dc=com changetype: add objectClass: top objectClass: groupOfCertificates cn: example_certificates_group description: Example dynamic group for certificate entries memberCertificateDescription: {ou=people, l=USA, dc=example, dc=com} adding new entry \"cn=example_certificates_group,ou=groups,dc=example,dc=com\"", "ldapsearch -xLLL -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \"dc=example,dc=com\" \"objectClass=groupOfURLs\" 1.1 dn: cn=example_dynamic_group,ou=groups,dc=example,dc=com", "ldapsearch -xLLL -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \"dc=example,dc=com\" \"objectClass=groupOfCertificates\" 1.1 dn: cn=example_certificates_group,ou=groups,dc=example,dc=com", "LDAP: error code 65 - Object Class Violation", "dn: cn=MemberOf Plugin,cn=plugins,cn=config cn: MemberOf Plugin memberofallbackends: off memberofattr: memberOf memberofentryscope: dc=example,dc=com memberofgroupattr: member memberofskipnested: off nsslapd-plugin-depends-on-type: database nsslapd-pluginDescription: memberof plugin nsslapd-pluginEnabled: off nsslapd-pluginId: memberof nsslapd-pluginInitfunc: memberof_postop_init nsslapd-pluginPath: libmemberof-plugin nsslapd-pluginType: betxnpostoperation nsslapd-pluginVendor: 389 Project nsslapd-pluginVersion: 2.4.5 objectClass: top objectClass: nsSlapdPlugin objectClass: extensibleObject", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof enable", 
"dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap:// server.example.com plugin memberof show dn: cn=MemberOf Plugin,cn=plugins,cn=config nsslapd-pluginEnabled: on", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof show memberofgroupattr: member", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --groupattr uniqueMember", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --groupattr member uniqueMember", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --allbackends on", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --skipnested on", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --autoaddoc inetUser", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof show dn: cn=MemberOf Plugin,cn=plugins,cn=config cn: MemberOf Plugin memberofallbackends: on memberofattr: memberOf memberofautoaddoc: inetuser memberofentryscope: dc=example,dc=com memberofgroupattr: uniqueMember memberofskipnested: on nsslapd-pluginEnabled: on", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof config-entry add \"cn=shared_MemberOf_config,dc=example,dc=com\" --attr memberOf --groupattr member Successfully created the cn=shared_MemberOf_config,dc=example,dc=com MemberOf attribute nsslapd-pluginConfigArea (config-entry) was set in the main plugin config", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server2.example.com plugin memberof set --config-entry cn=shared_MemberOf_config,dc=example,dc=com", "dsctl instance_name restart", "dsconf -D \"cn=Directory Manager\" ldap://server1.example.com plugin memberof show dn: cn=MemberOf Plugin,cn=plugins,cn=config cn: MemberOf Plugin nsslapd-pluginConfigArea: cn=shared_MemberOf_config,dc=example,dc=com", "dsconf -D \"cn=Directory Manager\" ldap://server1.example.com plugin memberof config-entry show \"cn=shared_MemberOf_config,dc=example,dc=com\" dn: cn=shared_MemberOf_config,dc=example,dc=com cn: shared_MemberOf_config memberofattr: memberOf memberofgroupattr: member objectClass: top objectClass: extensibleObject", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --scope \"dc=example,dc=com\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof set --exclude \"ou=private,dc=example,com\"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof fixup \"dc=example,dc=com\" Attempting to add task entry Successfully added task entry", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com plugin memberof fixup -f \"(|(objectclass=inetuser)(objectclass=inetadmin)(objectclass=nsmemberof)(objectclass=nsmemberof)(objectclass=inetOrgPerson))\" \"dc=example,dc=com\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/user_management_and_authentication/using-groups-in-directory-server_user-management-and-authentication
Autoscale APIs
Autoscale APIs OpenShift Container Platform 4.14 Reference guide for autoscale APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/autoscale_apis/index
Chapter 3. Configuring IAM for IBM Cloud VPC
Chapter 3. Configuring IAM for IBM Cloud VPC In environments where the cloud identity and access management (IAM) APIs are not reachable, you must put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. Storing an administrator-level credential secret in the cluster kube-system project is not supported for IBM Cloud; therefore, you must set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources About the Cloud Credential Operator 3.2. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys for IBM Cloud VPC 3.3. steps Installing a cluster on IBM Cloud VPC with customizations 3.4. 
Additional resources Preparing to update a cluster with manually maintained credentials
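To illustrate the credentialsMode setting described in Section 3.1, the following is a minimal, hypothetical excerpt of an install-config.yaml file; all values other than credentialsMode: Manual are placeholders, and the remaining fields that openshift-install generates are omitted:
apiVersion: v1
baseDomain: example.com        # placeholder
metadata:
  name: example-cluster        # placeholder
credentialsMode: Manual        # required so the CCO does not attempt to manage IAM credentials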
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_ibm_cloud_vpc/configuring-iam-ibm-cloud
Chapter 9. Accessing monitoring APIs by using the CLI
Chapter 9. Accessing monitoring APIs by using the CLI In OpenShift Dedicated, you can access web service APIs for some monitoring components from the command line interface (CLI). Important In certain situations, accessing API endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data. To avoid these issues, follow these recommendations: Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds. Do not try to retrieve all metrics data through the /federate endpoint for Prometheus. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. 9.1. About accessing monitoring web service APIs You can directly access web service API endpoints from the command line for the following monitoring stack components: Prometheus Alertmanager Thanos Ruler Thanos Querier Important To access Thanos Ruler and Thanos Querier service APIs, the requesting account must have get permission on the namespaces resource, which can be granted by binding the cluster-monitoring-view cluster role to the account. When you access web service API endpoints for monitoring components, be aware of the following limitations: You can only use bearer token authentication to access API endpoints. You can only access endpoints in the /api path for a route. If you try to access an API endpoint in a web browser, an Application is not available error occurs. To access monitoring features in a web browser, use the OpenShift Dedicated web console to review monitoring dashboards. Additional resources Reviewing monitoring dashboards 9.2. Accessing a monitoring web service API The following example shows how to query the service API receivers for the Alertmanager service used in core platform monitoring. You can use a similar method to access the prometheus-k8s service for core platform Prometheus and the thanos-ruler service for Thanos Ruler. Prerequisites You are logged in to an account that is bound against the monitoring-alertmanager-edit role in the openshift-monitoring namespace. You are logged in to an account that has permission to get the Alertmanager API route. Note If your account does not have permission to get the Alertmanager API route, a cluster administrator can provide the URL for the route. Procedure Extract an authentication token by running the following command: USD TOKEN=USD(oc whoami -t) Extract the alertmanager-main API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath='{.status.ingress[].host}') Query the service API receivers for Alertmanager by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v2/receivers" 9.3. Querying metrics by using the federation endpoint for Prometheus You can use the federation endpoint for Prometheus to scrape platform and user-defined metrics from a network location outside the cluster. To do so, access the Prometheus /federate endpoint for the cluster via an OpenShift Dedicated route. Important A delay in retrieving metrics data occurs when you use federation. This delay can affect the accuracy and timeliness of the scraped metrics. 
Using the federation endpoint can also degrade the performance and scalability of your cluster, especially if you use the federation endpoint to retrieve large amounts of metrics data. To avoid these issues, follow these recommendations: Do not try to retrieve all metrics data via the federation endpoint for Prometheus. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. Avoid frequent querying of the federation endpoint for Prometheus. Limit queries to a maximum of one every 30 seconds. If you need to forward large amounts of data outside the cluster, use remote write instead. For more information, see the Configuring remote write storage section. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-monitoring-view cluster role or have obtained a bearer token with get permission on the namespaces resource. Note You can only use bearer token authentication to access the Prometheus federation endpoint. You are logged in to an account that has permission to get the Prometheus federation route. Note If your account does not have permission to get the Prometheus federation route, a cluster administrator can provide the URL for the route. Procedure Retrieve the bearer token by running the following the command: USD TOKEN=USD(oc whoami -t) Get the Prometheus federation route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath='{.status.ingress[].host}') Query metrics from the /federate route. The following example command queries up metrics: USD curl -G -k -H "Authorization: Bearer USDTOKEN" https://USDHOST/federate --data-urlencode 'match[]=up' Example output # TYPE up untyped up{apiserver="kube-apiserver",endpoint="https",instance="10.0.143.148:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035322214 up{apiserver="kube-apiserver",endpoint="https",instance="10.0.148.166:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035338597 up{apiserver="kube-apiserver",endpoint="https",instance="10.0.173.16:6443",job="apiserver",namespace="default",service="kubernetes",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-k8s-0"} 1 1657035343834 ... 9.4. Accessing metrics from outside the cluster for custom applications You can query Prometheus metrics from outside the cluster when monitoring your own services with user-defined projects. Access this data from outside the cluster by using the thanos-querier route. This access only supports using a bearer token for authentication. Prerequisites You have deployed your own service, following the "Enabling monitoring for user-defined projects" procedure. You are logged in to an account with the cluster-monitoring-view cluster role, which provides permission to access the Thanos Querier API. You are logged in to an account that has permission to get the Thanos Querier API route. Note If your account does not have permission to get the Thanos Querier API route, a cluster administrator can provide the URL for the route. 
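If the account you plan to use does not yet satisfy the cluster-monitoring-view prerequisite above, a cluster administrator can bind that cluster role to it. This is a sketch with a placeholder user name, and it is not needed if the role is already bound:
oc adm policy add-cluster-role-to-user cluster-monitoring-view <example_user>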
Procedure Extract an authentication token to connect to Prometheus by running the following command: USD TOKEN=USD(oc whoami -t) Extract the thanos-querier API route URL by running the following command: USD HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath='{.status.ingress[].host}') Set the namespace to the namespace in which your service is running by using the following command: USD NAMESPACE=ns1 Query the metrics of your own services in the command line by running the following command: USD curl -H "Authorization: Bearer USDTOKEN" -k "https://USDHOST/api/v1/query?" --data-urlencode "query=up{namespace='USDNAMESPACE'}" The output shows the status for each application pod that Prometheus is scraping: The formatted example output { "status": "success", "data": { "resultType": "vector", "result": [ { "metric": { "__name__": "up", "endpoint": "web", "instance": "10.129.0.46:8080", "job": "prometheus-example-app", "namespace": "ns1", "pod": "prometheus-example-app-68d47c4fb6-jztp2", "service": "prometheus-example-app" }, "value": [ 1591881154.748, "1" ] } ], } } Note The formatted example output uses a filtering tool, such as jq , to provide the formatted indented JSON. See the jq Manual (jq documentation) for more information about using jq . The command requests an instant query endpoint of the Thanos Querier service, which evaluates selectors at one point in time. 9.5. Resources reference for the Cluster Monitoring Operator This document describes the following resources deployed and managed by the Cluster Monitoring Operator (CMO): Routes Services Use this information when you want to configure API endpoint connections to retrieve, send, or query metrics data. Important In certain situations, accessing endpoints can degrade the performance and scalability of your cluster, especially if you use endpoints to retrieve, send, or query large amounts of metrics data. To avoid these issues, follow these recommendations: Avoid querying endpoints frequently. Limit queries to a maximum of one every 30 seconds. Do not try to retrieve all metrics data via the /federate endpoint. Query it only when you want to retrieve a limited, aggregated data set. For example, retrieving fewer than 1,000 samples for each request helps minimize the risk of performance degradation. 9.5.1. CMO routes resources 9.5.1.1. openshift-monitoring/alertmanager-main Expose the /api endpoints of the alertmanager-main service via a router. 9.5.1.2. openshift-monitoring/prometheus-k8s Expose the /api endpoints of the prometheus-k8s service via a router. 9.5.1.3. openshift-monitoring/prometheus-k8s-federate Expose the /federate endpoint of the prometheus-k8s service via a router. 9.5.1.4. openshift-user-workload-monitoring/federate Expose the /federate endpoint of the prometheus-user-workload service via a router. 9.5.1.5. openshift-monitoring/thanos-querier Expose the /api endpoints of the thanos-querier service via a router. 9.5.1.6. openshift-user-workload-monitoring/thanos-ruler Expose the /api endpoints of the thanos-ruler service via a router. 9.5.2. CMO services resources 9.5.2.1. openshift-monitoring/prometheus-operator-admission-webhook Expose the admission webhook service which validates PrometheusRules and AlertmanagerConfig custom resources on port 8443. 9.5.2.2. openshift-user-workload-monitoring/alertmanager-user-workload Expose the user-defined Alertmanager web server within the cluster on the following ports: Port 9095 provides access to the Alertmanager endpoints. 
Granting access requires binding a user to the monitoring-alertmanager-api-reader role (for read-only operations) or monitoring-alertmanager-api-writer role in the openshift-user-workload-monitoring project. Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit cluster role or monitoring-edit cluster role in the project. Port 9097 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. 9.5.2.3. openshift-monitoring/alertmanager-main Expose the Alertmanager web server within the cluster on the following ports: Port 9094 provides access to all the Alertmanager endpoints. Granting access requires binding a user to the monitoring-alertmanager-view (for read-only operations) or monitoring-alertmanager-edit role in the openshift-monitoring project. Port 9092 provides access to the Alertmanager endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit cluster role or monitoring-edit cluster role in the project. Port 9097 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. 9.5.2.4. openshift-monitoring/kube-state-metrics Expose kube-state-metrics /metrics endpoints within the cluster on the following ports: Port 8443 provides access to the Kubernetes resource metrics. This port is for internal use, and no other usage is guaranteed. Port 9443 provides access to the internal kube-state-metrics metrics. This port is for internal use, and no other usage is guaranteed. 9.5.2.5. openshift-monitoring/metrics-server Expose the metrics-server web server on port 443. This port is for internal use, and no other usage is guaranteed. 9.5.2.6. openshift-monitoring/monitoring-plugin Expose the monitoring plugin service on port 9443. This port is for internal use, and no other usage is guaranteed. 9.5.2.7. openshift-monitoring/node-exporter Expose the /metrics endpoint on port 9100. This port is for internal use, and no other usage is guaranteed. 9.5.2.8. openshift-monitoring/openshift-state-metrics Expose openshift-state-metrics /metrics endpoints within the cluster on the following ports: Port 8443 provides access to the OpenShift resource metrics. This port is for internal use, and no other usage is guaranteed. Port 9443 provides access to the internal openshift-state-metrics metrics. This port is for internal use, and no other usage is guaranteed. 9.5.2.9. openshift-monitoring/prometheus-k8s Expose the Prometheus web server within the cluster on the following ports: Port 9091 provides access to all the Prometheus endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role. Port 9092 provides access to the /metrics and /federate endpoints only. This port is for internal use, and no other usage is guaranteed. 9.5.2.10. openshift-user-workload-monitoring/prometheus-operator Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed. 9.5.2.11. openshift-monitoring/prometheus-operator Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed. 9.5.2.12. openshift-user-workload-monitoring/prometheus-user-workload Expose the Prometheus web server within the cluster on the following ports: Port 9091 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. 
Port 9092 provides access to the /federate endpoint only. Granting access requires binding a user to the cluster-monitoring-view cluster role. This also exposes the /metrics endpoint of the Thanos sidecar web server on port 10902. This port is for internal use, and no other usage is guaranteed. 9.5.2.13. openshift-monitoring/telemeter-client Expose the /metrics endpoint on port 8443. This port is for internal use, and no other usage is guaranteed. 9.5.2.14. openshift-monitoring/thanos-querier Expose the Thanos Querier web server within the cluster on the following ports: Port 9091 provides access to all the Thanos Querier endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role. Port 9092 provides access to the /api/v1/query , /api/v1/query_range/ , /api/v1/labels , /api/v1/label/*/values , and /api/v1/series endpoints restricted to a given project. Granting access requires binding a user to the view cluster role in the project. Port 9093 provides access to the /api/v1/alerts , and /api/v1/rules endpoints restricted to a given project. Granting access requires binding a user to the monitoring-rules-edit , monitoring-edit , or monitoring-rules-view cluster role in the project. Port 9094 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. 9.5.2.15. openshift-user-workload-monitoring/thanos-ruler Expose the Thanos Ruler web server within the cluster on the following ports: Port 9091 provides access to all Thanos Ruler endpoints. Granting access requires binding a user to the cluster-monitoring-view cluster role. Port 9092 provides access to the /metrics endpoint only. This port is for internal use, and no other usage is guaranteed. This also exposes the gRPC endpoints on port 10901. This port is for internal use, and no other usage is guaranteed. 9.5.2.16. openshift-monitoring/cluster-monitoring-operator Expose the /metrics and /validate-webhook endpoints on port 8443. This port is for internal use, and no other usage is guaranteed. 9.6. Additional resources Configuring remote write storage Managing metrics Managing alerts
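As a convenience, the Thanos Querier query from Section 9.4 can be piped through jq to produce the formatted output shown there. This sketch assumes jq is installed locally and that the TOKEN, HOST, and NAMESPACE variables are set as in that procedure:
# Query the instant query endpoint and pretty-print the result vector
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v1/query?" --data-urlencode "query=up{namespace='$NAMESPACE'}" | jq '.data.result'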
[ "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route alertmanager-main -ojsonpath='{.status.ingress[].host}')", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v2/receivers\"", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route prometheus-k8s-federate -ojsonpath='{.status.ingress[].host}')", "curl -G -k -H \"Authorization: Bearer USDTOKEN\" https://USDHOST/federate --data-urlencode 'match[]=up'", "TYPE up untyped up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.143.148:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035322214 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.148.166:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035338597 up{apiserver=\"kube-apiserver\",endpoint=\"https\",instance=\"10.0.173.16:6443\",job=\"apiserver\",namespace=\"default\",service=\"kubernetes\",prometheus=\"openshift-monitoring/k8s\",prometheus_replica=\"prometheus-k8s-0\"} 1 1657035343834", "TOKEN=USD(oc whoami -t)", "HOST=USD(oc -n openshift-monitoring get route thanos-querier -ojsonpath='{.status.ingress[].host}')", "NAMESPACE=ns1", "curl -H \"Authorization: Bearer USDTOKEN\" -k \"https://USDHOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\"", "{ \"status\": \"success\", \"data\": { \"resultType\": \"vector\", \"result\": [ { \"metric\": { \"__name__\": \"up\", \"endpoint\": \"web\", \"instance\": \"10.129.0.46:8080\", \"job\": \"prometheus-example-app\", \"namespace\": \"ns1\", \"pod\": \"prometheus-example-app-68d47c4fb6-jztp2\", \"service\": \"prometheus-example-app\" }, \"value\": [ 1591881154.748, \"1\" ] } ], } }" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/monitoring/accessing-third-party-monitoring-apis
Chapter 8. Scaling storage of IBM Z or IBM LinuxONE OpenShift Data Foundation cluster
Chapter 8. Scaling storage of IBM Z or IBM LinuxONE OpenShift Data Foundation cluster 8.1. Scaling up storage by adding capacity to your OpenShift Data Foundation nodes on IBM Z or IBM LinuxONE infrastructure You can add storage capacity and performance to your configured Red Hat OpenShift Data Foundation worker nodes. Note Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. Prerequisites A running OpenShift Data Foundation Platform. Administrative privileges on the OpenShift Web Console. To scale using a storage class other than the one provisioned during deployment, first define an additional storage class. See Creating storage classes and pools for details. Procedure Add additional hardware resources with zFCP disks. List all the disks. Example output: A SCSI disk is represented as a zfcp-lun with the structure <device-id>:<wwpn>:<lun-id> in the ID section. The first disk is used for the operating system. The device id for the new disk can be the same. Append a new SCSI disk. Note The device ID for the new disk must be the same as the disk to be replaced. The new disk is identified with its WWPN and LUN ID. List all the FCP devices to verify the new disk is configured. Navigate to the OpenShift Web Console. Click Operators on the left navigation bar. Select Installed Operators . In the window, click OpenShift Data Foundation Operator. In the top navigation bar, scroll right and click Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. The Raw Capacity field shows the size set during storage class creation. The total amount of storage consumed is three times this amount, because OpenShift Data Foundation uses a replica count of 3. Click Add . To check the status, navigate to Storage Data Foundation and verify that Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names. 
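For the cluster-wide encryption check in the last verification step, the encrypted OSD devices appear with the crypt type in the lsblk output from the debug pod. The output below is illustrative only; device names and ocs-deviceset identifiers differ in every cluster:
lsblk
# Illustrative output (truncated):
# sdb                                              8:16   0  512G  0 disk
# └─ocs-deviceset-0-data-0-abcde-block-dmcrypt   253:1    0  512G  0 crypt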
Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 8.2. Scaling out storage capacity on an IBM Z or IBM LinuxONE cluster 8.2.1. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in multiples of 3, each of them in different failure domains. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size that were used during the initial OpenShift Data Foundation deployment. Prerequisites You have administrative privileges on the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector . Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 8.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage capacity on a cluster .
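The CSR approval and node labeling steps from the procedure above can also be scripted; this sketch assumes you are logged in to oc with cluster-admin rights and that <new_node_name> is replaced with the actual node name:
# Approve any CSRs that are still pending for the new node
oc get csr | grep Pending | awk '{print $1}' | xargs -r oc adm certificate approve
# Label the node for OpenShift Data Foundation and confirm that the label is applied
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1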
[ "lszdev", "TYPE ID ON PERS NAMES zfcp-host 0.0.8204 yes yes zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500407630c0b50a4:0x3002b03000000000 yes yes sdb sg1 qeth 0.0.bdd0:0.0.bdd1:0.0.bdd2 yes no encbdd0 generic-ccw 0.0.0009 yes no", "chzdev -e 0.0.8204:0x400506630b1b50a4:0x3001301a00000000", "lszdev zfcp-lun TYPE ID ON PERS NAMES zfcp-lun 0.0.8204:0x102107630b1b5060:0x4001402900000000 yes no sda sg0 zfcp-lun 0.0.8204:0x500507630b1b50a4:0x4001302a00000000 yes yes sdb sg1 zfcp-lun 0.0.8204:0x400506630b1b50a4:0x3001301a00000000 yes yes sdc sg2", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>", "oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm", "NODE compute-1", "oc debug node/ <node-name>", "chroot /host", "lsblk", "oc get csr", "oc adm certificate approve <Certificate_Name>", "oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"", "oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/scaling_storage_of_ibm_z_or_ibm_linuxone_openshift_data_foundation_cluster
4.17. Hewlett-Packard iLO MP
4.17. Hewlett-Packard iLO MP Table 4.18, "HP iLO (Integrated Lights Out) MP" lists the fence device parameters used by fence_ilo_mp , the fence agent for HP iLO MP devices. Table 4.18. HP iLO (Integrated Lights Out) MP luci Field cluster.conf Attribute Description Name name A name for the server with HP iLO support. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport TCP port to use for connection with the device. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Use SSH secure Indicates that the system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Force Command Prompt cmd_prompt The command prompt to use. The default value is 'MP>', 'hpiLO->'. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Delay (seconds) delay The number of seconds to wait before fencing is started. The default value is 0. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Figure 4.13, "HP iLO MP" shows the configuration screen for adding an HP iLO MP fence device. Figure 4.13. HP iLO MP The following command creates a fence device instance for an HP iLO MP device: The following is the cluster.conf entry for the fence_ilo_mp device:
[ "ccs -f cluster.conf --addfencedev hpilomptest1 agent=fence_hpilo cmd_prompt=hpiLO-> ipaddr=192.168.0.1 login=root passwd=password123 power_wait=60", "<fencedevices> <fencedevice agent=\"fence_ilo_mp\" cmd_prompt=\"hpiLO-&gt;\" ipaddr=\"192.168.0.1\" login=\"root\" name=\"hpilomptest1\" passwd=\"password123\" power_wait=\"60\"/> </fencedevices>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s1-software-fence-hpilo-mp-CA
Chapter 4. Configure storage for OpenShift Container Platform services
Chapter 4. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 4.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the Container Image Registry. On AWS, it is not required to change the storage for the registry. However, it is recommended to change the storage to OpenShift Data Foundation Persistent Volume for vSphere and Bare metal platforms. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. 
Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . Verify that the new image-registry-* pod appears with a status of Running , and that the image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 4.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that comprises of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data of Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 4.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 4.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. 
Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 4.3. Persistent Volume Claims attached to prometheus-k8s-* pod 4.3. Overprovision level policy control [Technology Preview] Overprovision control is a mechanism that allows you to define a quota on the amount of persistent volume claims (PVCs) consumed from a storage cluster, based on the specific application namespace. When you enable the overprovision control mechanism, it prevents you from overprovisioning the PVCs consumed from the storage cluster. OpenShift provides flexibility for defining constraints that limit the aggregated resource consumption at cluster scope with the help of OpenShift's ClusterResourceQuota . With overprovision control, a ClusterResourceQuota is initiated, and you can set the storage capacity limit for each storage class. The alarm triggers when 80% of the capacity limit is consumed. Note Overprovision level policy control is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, refer to Technology Preview Features Support Scope. For more information on deployment, refer to Product Documentation and select the deployment procedure according to the platform. Configuring the quota limit to receive overprovision control alerts Prerequisites Ensure that the OpenShift Data Foundation cluster is created. Procedure Edit the storagecluster to set the quota limit on the storage class. Remember to save before exiting the editor. Execute the following command to edit the storagecluster : <ocs_storagecluster_name> Specify the name of the storage cluster. Add the following lines to set the desired quota limit for the storage class: <desired_quota_limit> Specify a desired quota limit for the storage class, for example, 27Ti . <storage_class_name> Specify the name of the storage class for which you want to set the quota limit, for example, ocs-storagecluster-ceph-rbd . <desired_quota_name> Specify a name for the storage quota, for example, quota1 . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Label the application namespace. <desired_name> Specify a name for the application namespace, for example, quota-rbd . <desired_label> Specify a label for the storage quota, for example, storagequota1 . Ensure that the clusterresourcequota is defined. Note Expect the clusterresourcequota with the quotaName that you defined, for example, quota1 . 4.4. Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch).
Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 4.4.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 4.4.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites You have administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 4.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. 
Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid a PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the default index data retention to 5 days. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure for removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.
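The web console verification steps in this chapter can also be performed from a terminal; a minimal sketch, assuming the default project names used above:
# The image registry, monitoring, and logging PVCs should all report a Bound status
oc get pvc -n openshift-image-registry
oc get pvc -n openshift-monitoring
oc get pvc -n openshift-logging
# Confirm that the registry configuration now points at the new claim, for example ocs4registry
oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage.pvc.claim}{"\n"}'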
[ "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, e.g. 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "oc edit storagecluster -n openshift-storage <ocs_storagecluster_name>", "apiVersion: ocs.openshift.io/v1 kind: StorageCluster spec: [...] overprovisionControl: - capacity: <desired_quota_limit> storageClassName: <storage_class_name> quotaName: <desired_quota_name> selector: labels: matchLabels: storagequota: <desired_label>", "apiVersion: v1 kind: Namespace metadata: name: <desired_name> labels: storagequota: <desired_label>", "oc get clusterresourcequota -A", "oc describe clusterresourcequota -A", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/configure-storage-for-openshift-container-platform-services_rhodf
Chapter 24. Automating System Tasks
Chapter 24. Automating System Tasks You can configure Red Hat Enterprise Linux to automatically run tasks, also known as jobs : regularly at specified time using cron , see Section 24.1, "Scheduling a Recurring Job Using Cron" asynchronously at certain days using anacron , see Section 24.2, "Scheduling a Recurring Asynchronous Job Using Anacron" once at a specific time using at , see Section 24.3, "Scheduling a Job to Run at a Specific Time Using at" once when system load average drops to a specified value using batch , see Section 24.4, "Scheduling a Job to Run on System Load Drop Using batch" once on the boot, see Section 24.5, "Scheduling a Job to Run on Boot Using a systemd Unit File" This chapter describes how to perform these tasks. 24.1. Scheduling a Recurring Job Using Cron Cron is a service that enables you to schedule running a task, often called a job, at regular times. A cron job is only executed if the system is running on the scheduled time. For scheduling jobs that can postpone their execution to when the system boots up, so a job is not "lost" if the system is not running, see Section 24.3, "Scheduling a Job to Run at a Specific Time Using at" . Users specify cron jobs in cron table files, also called crontab files. These files are then read by the crond service, which executes the jobs. 24.1.1. Prerequisites for Cron Jobs Before scheduling a cron job: Install the cronie package: The crond service is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it: Start the crond service for the current session: (optional) Configure cron . For example, you can change: shell to be used when executing jobs the PATH environment variable mail addressee if a job sends emails. See the crontab (5) manual page for information on configuring cron . 24.1.2. Scheduling a Cron Job Scheduling a Job as root User The root user uses the cron table in /etc/crontab , or, preferably, creates a cron table file in /etc/cron.d/ . Use this procedure to schedule a job as root : Choose: in which minutes of an hour to execute the job. For example, use 0,10,20,30,40,50 or 0/10 to specify every 10 minutes of an hour. in which hours of a day to execute the job. For example, use 17-20 to specify time from 17:00 to 20:59. in which days of a month to execute the job. For example, use 15 to specify 15th day of a month. in which months of a year to execute the job. For example, use Jun,Jul,Aug or 6,7,8 to specify the summer months of the year. in which days of the week to execute the job. For example, use * for the job to execute independently of the day of week. Combine the chosen values into the time specification. The above example values result into this specification: 0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * Specify the user. The job will execute as if run by this user. For example, use root . Specify the command to execute. For example, use /usr/local/bin/my-script.sh Put the above specifications into a single line: Add the resulting line to /etc/crontab , or, preferably, create a cron table file in /etc/cron.d/ and add the line there. The job will now run as scheduled. For full reference on how to specify a job, see the crontab (5) manual page. For basic information, see the beginning of the /etc/crontab file: Scheduling a Job as Non-root User Non-root users can use the crontab utility to configure cron jobs. The jobs will run as if executed by that user. 
To create a cron job as a specific user: From the user's shell, run: This will start editing of the user's own crontab file using the editor specified by the VISUAL or EDITOR environment variable. Specify the job in the same way as in the section called "Scheduling a Job as root User" , but leave out the field with user name. For example, instead of adding add: Save the file and exit the editor. (optional) To verify the new job, list the contents of the current user's crontab file by running: Scheduling Hourly, Daily, Weekly, and Monthly Jobs To schedule an hourly, daily, weekly, or monthly job: Put the actions you want your job to execute into a shell script. Put the shell script into one of the following directories: /etc/cron.hourly/ /etc/cron.daily/ /etc/cron.weekly/ /etc/cron.monthly/ From now, your script will be executed - the crond service automatically executes any scripts present in /etc/cron.hourly , /etc/cron.daily , /etc/cron.weekly , and /etc/cron.monthly directories at their corresponding times. 24.2. Scheduling a Recurring Asynchronous Job Using Anacron Anacron , like cron , is a service that enables you to schedule running a task, often called a job, at regular times. However, anacron differs from cron in two ways: If the system is not running at the scheduled time, an anacron job is postponed until the system is running; An anacron job can run once per day at most. Users specify anacron jobs in anacron table files, also called anacrontab files. These files are then read by the crond service, which executes the jobs. 24.2.1. Prerequisites for Anacrob Jobs Before scheduling an anacron job: Verify that you have the cronie-anacron package installed: The cronie-anacron is likely to be installed already, because it is a sub-package of the cronie package. If it is not installed, use this command: The crond service is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it: Start the crond service for the current session: (optional) Configure anacron . For example, you can change: shell to be used when executing jobs the PATH environment variable mail addressee if a job sends emails. See the anacrontab (5) manual page for information on configuring anacron . Important By default, the anacron configuration includes a condition that prevents it from running if the computer is not plugged in. This setting ensures that the battery is not drained by running anacron jobs. If you want to allow anacron to run even if the computer runs on battery power, open the /etc/cron.hourly/0anacron file and comment out the following part: 24.2.2. Scheduling an Anacron Job Scheduling an anacron Job as root User The root user uses the anacron table in /etc/anacrontab . Use the following procedure to schedule a job as root . Scheduling an anacron Job as root User Choose: Frequency of executing the job. For example, use 1 to specify every day or 3 to specify once in 3 days. The delay of executing the job. For example, use 0 to specify no delay or 60 to specify 1 hour of delay. The job identifier, which will be used for logging. For example, use my.anacron.job to log the job with the my.anacron.job string. The command to execute. For example, use /usr/local/bin/my-script.sh Combine the chosen values into the job specification. Here is an example specification: Add the resulting line to /etc/anacrontab . The job will now run as scheduled. For simple job examples, see the /etc/anacrontab file. 
For full reference on how to specify a job, see the anacrontab (5) manual page. Scheduling Hourly, Daily, Weekly, and Monthly Jobs You can schedule daily, weekly, and monthly jobs with anacron . See the section called "Scheduling Hourly, Daily, Weekly, and Monthly Jobs" . 24.3. Scheduling a Job to Run at a Specific Time Using at To schedule a one-time task, also called a job, to run once at a specific time, use the at utility. Users specify at jobs using the at utility. The jobs are then executed by the atd service. 24.3.1. Prerequisites for At Jobs Before scheduling an at job: Install the at package: The atd service is enabled - made to start automatically at boot time - upon installation. If you disabled the service, enable it: Start the atd service for the current session: 24.3.2. Scheduling an At Job A job is always run by some user. Log in as the desired user and run: Replace time with the time specification. For details on specifying time, see the at (1) manual page and the /usr/share/doc/at/timespec file. Example 24.1. Specifying Time for At To execute the job at 15:00, run: If the specified time has passed, the job is executed at the same time the day. To execute the job on August 20 2017, run: or To execute the job 5 days from now, run: At the displayed at> prompt, enter the command to execute and press Enter: Repeat this step for every command you want to execute. Note The at> prompt shows which shell it will use: The at utility uses the shell set in user's SHELL environment variable, or the user's login shell, or /bin/sh , whichever is found first. Press Ctrl+D on an empty line to finish specifying the job. Note If the set of commands or the script tries to display information to standard output, the output is emailed to the user. Viewing Pending Jobs To view the list of pending jobs, use the atq command: Each job is listed on a separate line in the following format: The job_queue column specifies whether a job is an at or a batch job. a stands for at , b stands for batch . Non-root users only see their own jobs. The root user sees jobs for all users. Deleting a Scheduled Job To delete a scheduled job: List pending jobs with the atq command: Find the job you want to delete by its scheduled time and the user. Run the atrm command, specifying the job by its number: 24.3.2.1. Controlling Access to At and Batch You can restrict access to the at and batch commands for specific users. To do this, put user names into /etc/at.allow or /etc/at.deny according to these rules: Both access control files use the same format: one user name on each line. No white space is permitted in either file. If the at.allow file exists, only users listed in the file are allowed to use at or batch , and the at.deny file is ignored. If at.allow does not exist, users listed in at.deny are not allowed to use at or batch . The root user is not affected by the access control files and can always execute the at and batch commands. The at daemon ( atd ) does not have to be restarted if the access control files are modified. The access control files are read each time a user tries to execute the at or batch commands. 24.4. Scheduling a Job to Run on System Load Drop Using batch To schedule a one-time task, also called a job, to run when the system load average drops below the specified value, use the batch utility. This can be useful for performing resource-demanding tasks or for preventing the system from being idle. Users specify batch jobs using the batch utility. The jobs are then executed by the atd service. 
24.4.1. Prerequisites for Batch Jobs The batch utility is provided in the at package, and batch jobs are managed by the atd service. Hence, the prerequisites for batch jobs are the same as for at jobs. See Section 24.3.1, "Prerequisites for At Jobs" . 24.4.2. Scheduling a Batch Job A job is always run by some user. Log in as the desired user and run: At the displayed at> prompt, enter the command to execute and press Enter: Repeat this step for every command you want to execute. Note The at> prompt shows which shell it will use: The batch utility uses the shell set in user's SHELL environment variable, or the user's login shell, or /bin/sh , whichever is found first. Press Ctrl+D on an empty line to finish specifying the job. Note If the set of commands or the script tries to display information to standard output, the output is emailed to the user. Changing the Default System Load Average Limit By default, batch jobs start when system load average drops below 0.8. This setting is kept in the atq service. To change the system load limit: To the /etc/sysconfig/atd file, add this line: Substitute x with the new load average. For example: Restart the atq service: Viewing Pending Jobs To view the list of pending jobs, use the atq command. See the section called "Viewing Pending Jobs" . Deleting a Scheduled Job To delete a scheduled job, use the atrm command. See the section called "Deleting a Scheduled Job" . Controlling Access to Batch You can also restrict the usage of the batch utility. This is done for the batch and at utilities together. See Section 24.3.2.1, "Controlling Access to At and Batch" . 24.5. Scheduling a Job to Run on Boot Using a systemd Unit File The cron , anacron , at , and batch utilities allow scheduling jobs for specific times or for when system workload reaches a certain level. It is also possible to create a job that will run during the system boot. This is done by creating a systemd unit file that specifies the script to run and its dependencies. To configure a script to run on the boot: Create the systemd unit file that specifies at which stage of the boot process to run the script. This example shows a unit file with a reasonable set of Wants= and After= dependencies: If you use this example: substitute /usr/local/bin/foobar.sh with the name of your script modify the set of After= entries if necessary For information on specifying the stage of boot, see Section 10.6, "Creating and Modifying systemd Unit Files" . If you want the systemd service to stay active after executing the script, add the RemainAfterExit=yes line to the [Service] section: Reload the systemd daemon: Enable the systemd service: Create the script to execute: If you want the script to run during the boot only, and not on every boot, add a line that disables the systemd unit: Make the script executable: 24.6. Additional Resources For more information on automating system tasks on Red Hat Enterprise Linux, see the resources listed below. Installed Documentation cron - The manual page for the crond daemon documents how crond works and how to change its behavior. crontab - The manual page for the crontab utility provides a complete list of supported options. crontab (5) - This section of the manual page for the crontab utility documents the format of crontab files.
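As a worked example of the root cron job described in Section 24.1.2, the following sketch creates a cron table file in /etc/cron.d/ ; the file name and the script path are only the example values used earlier, not required names:
~]# cat > /etc/cron.d/my-script << 'EOF'
# Run the script as root every 10 minutes between 17:00 and 20:59
# on the 15th of June, July, and August, on any day of the week
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * root /usr/local/bin/my-script.sh
EOF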
[ "~]# yum install cronie", "~]# systemctl enable crond.service", "~]# systemctl start crond.service", "0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * root /usr/local/bin/my-script.sh", "SHELL=/bin/bash PATH=/sbin:/bin:/usr/sbin:/usr/bin MAILTO=root For details see man 4 crontabs Example of job definition: .---------------- minute (0 - 59) | .------------- hour (0 - 23) | | .---------- day of month (1 - 31) | | | .------- month (1 - 12) OR jan,feb,mar,apr | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat | | | | | * * * * * user-name command to be executed", "[bob@localhost ~]USD crontab -e", "0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * bob /home/bob/bin/script.sh", "0,10,20,30,40,50 17-20 15 Jun,Jul,Aug * /home/bob/bin/script.sh", "[bob@localhost ~]USD crontab -l @daily /home/bob/bin/script.sh", "~]# rpm -q cronie-anacron", "~]# yum install cronie-anacron", "~]# systemctl enable crond.service", "~]# systemctl start crond.service", "Do not run jobs when on battery power online=1 for psupply in AC ADP0 ; do sysfile=\"/sys/class/power_supply/USDpsupply/online\" if [ -f USDsysfile ] ; then if [ `cat USDsysfile 2>/dev/null`x = 1x ]; then online=1 break else online=0 fi fi done", "3 60 cron.daily /usr/local/bin/my-script.sh", "~]# yum install at", "~]# systemctl enable atd.service", "~]# systemctl start atd.service", "~]# at time", "~]# at 15:00", "~]# at August 20 2017", "~]# at 082017", "~]# now + 5 days", "~]# at 15:00 at> sh /usr/local/bin/my-script.sh at>", "warning: commands will be executed using /bin/sh", "~]# atq 26 Thu Feb 23 15:00:00 2017 a root 28 Thu Feb 24 17:30:00 2017 a root", "job_number scheduled_date scheduled_hour job_class user_name", "~]# atq 26 Thu Feb 23 15:00:00 2017 a root 28 Thu Feb 24 17:30:00 2017 a root", "~]# atrm 26", "~]# batch", "~]# batch at> sh /usr/local/bin/my-script.sh", "warning: commands will be executed using /bin/sh", "OPTS='-l x '", "OPTS='-l 0.5'", "systemctl restart atq", "~]# cat /etc/systemd/system/one-time.service The script needs to execute after: network interfaces are configured Wants=network-online.target After=network-online.target all remote filesystems (NFS/_netdev) are mounted After=remote-fs.target name (DNS) and user resolution from remote databases (AD/LDAP) are available After=nss-user-lookup.target nss-lookup.target the system clock has synchronized After=time-sync.target [Service] Type=oneshot ExecStart=/usr/local/bin/foobar.sh [Install] WantedBy=multi-user.target", "[Service] Type=oneshot RemainAfterExit=yes ExecStart=/usr/local/bin/foobar.sh", "~]# systemctl daemon-reload", "~]# systemctl enable one-time.service", "~]# cat /usr/local/bin/foobar.sh #!/bin/bash touch /root/test_file", "#!/bin/bash touch /root/test_file systemctl disable one-time.service", "~]# chmod +x /usr/local/bin/foobar.sh" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-Automating_System_Tasks
2.3. Modifying Control Groups
2.3. Modifying Control Groups Each persistent unit supervised by systemd has a unit configuration file in the /usr/lib/systemd/system/ directory. To change parameters of a service unit, modify this configuration file. This can be done either manually or from the command-line interface by using the systemctl set-property command. 2.3.1. Setting Parameters from the Command-Line Interface The systemctl set-property command allows you to persistently change resource control settings during the application runtime. To do so, use the following syntax as root : Replace name with the name of the systemd unit you wish to modify, parameter with a name of the parameter to be changed, and value with a new value you want to assign to this parameter. Not all unit parameters can be changed at runtime, but most of those related to resource control may, see Section 2.3.2, "Modifying Unit Files" for a complete list. Note that systemctl set-property allows you to change multiple properties at once, which is preferable over setting them individually. The changes are applied instantly, and written into the unit file so that they are preserved after reboot. You can change this behavior by passing the --runtime option that makes your settings transient: Example 2.2. Using systemctl set-property To limit the CPU and memory usage of httpd.service from the command line, type: To make this a temporary change, add the --runtime option: 2.3.2. Modifying Unit Files Systemd service unit files provide a number of high-level configuration parameters useful for resource management. These parameters communicate with Linux cgroup controllers, that have to be enabled in the kernel. With these parameters, you can manage CPU, memory consumption, block IO, as well as some more fine-grained unit properties. Managing CPU The cpu controller is enabled by default in the kernel, and consequently every system service receives the same amount of CPU time, regardless of how many processes it contains. This default behavior can be changed with the DefaultControllers parameter in the /etc/systemd/system.conf configuration file. To manage CPU allocation, use the following directive in the [Service] section of the unit configuration file: CPUShares = value Replace value with a number of CPU shares. The default value is 1024. By increasing the number, you assign more CPU time to the unit. Setting the value of the CPUShares parameter automatically turns CPUAccounting on in the unit file. Users can thus monitor the usage of the processor with the systemd-cgtop command. The CPUShares parameter controls the cpu.shares control group parameter. See the description of the cpu controller in Controller-Specific Kernel Documentation to see other CPU-related control parameters. Example 2.3. Limiting CPU Consumption of a Unit To assign the Apache service 1500 CPU shares instead of the default 1024, create a new /etc/systemd/system/httpd.service.d/cpu.conf configuration file with the following content: To apply the changes, reload systemd's configuration and restart Apache so that the modified service file is taken into account: CPUQuota = value Replace value with a value of CPU time quota to assign the specified CPU time quota to the processes executed. The value of the CPUQuota parameter, which is expressed in percentage, specifies how much CPU time the unit gets at maximum, relative to the total CPU time available on one CPU. Values higher than 100% indicate that more than one CPU is used. 
CPUQuota controls the cpu.max attribute on the unified control group hierarchy, and the legacy cpu.cfs_quota_us attribute. Setting the value of the CPUQuota parameter automatically turns CPUAccounting on in the unit file. Users can thus monitor the usage of the processor with the systemd-cgtop command. Example 2.4. Using CPUQuota Setting CPUQuota to 20% ensures that the executed processes never get more than 20% CPU time on a single CPU. To assign the Apache service CPU quota of 20%, add the following content to the /etc/systemd/system/httpd.service.d/cpu.conf configuration file: To apply the changes, reload systemd's configuration and restart Apache so that the modified service file is taken into account: Managing Memory To enforce limits on the unit's memory consumption, use the following directives in the [Service] section of the unit configuration file: MemoryLimit = value Replace value with a limit on maximum memory usage of the processes executed in the cgroup. Use suffixes K , M , G , or T to identify Kilobyte, Megabyte, Gigabyte, or Terabyte as the unit of measurement. Also, the MemoryAccounting parameter has to be enabled for the unit. The MemoryLimit parameter controls the memory.limit_in_bytes control group parameter. For more information, see the description of the memory controller in Controller-Specific Kernel Documentation . Example 2.5. Limiting Memory Consumption of a Unit To assign a 1GB memory limit to the Apache service, modify the MemoryLimit setting in the /etc/systemd/system/httpd.service.d/cpu.conf unit file: To apply the changes, reload systemd's configuration and restart Apache so that the modified service file is taken into account: Managing Block IO To manage the Block IO, use the following directives in the [Service] section of the unit configuration file. Directives listed below assume that the BlockIOAccounting parameter is enabled: BlockIOWeight = value Replace value with a new overall block IO weight for the executed processes. Choose a single value between 10 and 1000, the default setting is 1000. BlockIODeviceWeight = device_name value Replace value with a block IO weight for a device specified with device_name . Replace device_name either with a name or with a path to a device. As with BlockIOWeight , it is possible to set a single weight value between 10 and 1000. BlockIOReadBandwidth = device_name value This directive allows you to limit a specific bandwidth for a unit. Replace device_name with the name of a device or with a path to a block device node, value stands for a bandwidth rate. Use suffixes K , M , G , or T to specify units of measurement. A value with no suffix is interpreted as bytes per second. BlockIOWriteBandwidth = device_name value Limits the write bandwidth for a specified device. Accepts the same arguments as BlockIOReadBandwidth . Each of the aforementioned directives controls a corresponding cgroup parameter. For other CPU-related control parameters, see the description of the blkio controller in Controller-Specific Kernel Documentation . Note Currently, the blkio resource controller does not support buffered write operations. It is primarily targeted at direct I/O, so the services that use buffered write will ignore the limits set with BlockIOWriteBandwidth . On the other hand, buffered read operations are supported, and BlockIOReadBandwidth limits will be applied correctly both on direct and buffered read. Example 2.6. 
Limiting Block IO of a Unit To lower the block IO weight for the Apache service accessing the /home/jdoe/ directory, add the following text into the /etc/systemd/system/httpd.service.d/cpu.conf unit file: To set the maximum bandwidth for Apache reading from the /var/log/ directory to 5MB per second, use the following syntax: To apply your changes, reload systemd's configuration and restart Apache so that the modified service file is taken into account: Managing Other System Resources There are several other directives that can be used in the unit file to facilitate resource management: DeviceAllow = device_name options This option controls access to specific device nodes. Here, device_name stands for a path to a device node or a device group name as specified in /proc/devices . Replace options with a combination of r , w , and m to allow the unit to read, write, or create device nodes. DevicePolicy = value Here, value is one of: strict (only allows the types of access explicitly specified with DeviceAllow ), closed (allows access to standard pseudo devices including /dev/null, /dev/zero, /dev/full, /dev/random, and /dev/urandom) or auto (allows access to all devices if no explicit DeviceAllow is present, which is the default behavior) Slice = slice_name Replace slice_name with the name of the slice to place the unit in. The default is system.slice . Scope units cannot be arranged in this way, since they are tied to their parent slices. ExecStartPost = command Currently, systemd supports only a subset of cgroup features. However, as a workaround, you can use the ExecStartPost= option along with setting the memory.memsw.limit_in_bytes parameter in order to prevent any swap usage for a service. For more information on ExecStartPost= , see the systemd.service(5) man page. Example 2.7. Configuring Cgroup Options Imagine that you wish to change the memory.memsw.limit_in_bytes setting to the same value as the unit's MemoryLimit = in order to prevent any swap usage for a given example service. To apply the change, reload systemd configuration and restart the service so that the modified setting is taken into account:
[ "~]# systemctl set-property name parameter = value", "~]# systemctl set-property --runtime name property = value", "~]# systemctl set-property httpd.service CPUShares=600 MemoryLimit=500M", "~]# systemctl set-property --runtime httpd.service CPUShares=600 MemoryLimit=500M", "[Service] CPUShares=1500", "~]# systemctl daemon-reload ~]# systemctl restart httpd.service", "[Service] CPUQuota=20%", "~]# systemctl daemon-reload ~]# systemctl restart httpd.service", "[Service] MemoryLimit=1G", "~]# systemctl daemon-reload ~]# systemctl restart httpd.service", "[Service] BlockIODeviceWeight=/home/jdoe 750", "[Service] BlockIOReadBandwidth=/var/log 5M", "~]# systemctl daemon-reload ~]# systemctl restart httpd.service", "ExecStartPost=/bin/bash -c \"echo 1G > /sys/fs/cgroup/memory/system.slice/ example .service/memory.memsw.limit_in_bytes\"", "~]# systemctl daemon-reload ~]# systemctl restart example .service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/resource_management_guide/sec-Modifying_Control_Groups
Chapter 6. Cloud integrations reference material
Chapter 6. Cloud integrations reference material See the following resources for more information about using your integrations with services in the Red Hat Hybrid Cloud Console. Cost management Getting started with cost management RHEL management bundle Option 3: Advanced RHEL management Understanding gold images Getting started with the Subscriptions Service Launch images Deploying and managing RHEL systems in hybrid clouds Configuring integrations to launch RHEL images
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_cloud_integrations_for_red_hat_services/integrations-reference-material_crc-cloud-integrations
Backup and restore
Backup and restore OpenShift Container Platform 4.18 Backing up and restoring your OpenShift Container Platform cluster Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/backup_and_restore/index
Chapter 8. Post-update configuration changes for clusters backed by local storage
Chapter 8. Post-update configuration changes for clusters backed by local storage In Red Hat OpenShift Container Platform 4.6 and onward, the Local Storage Operator provides new custom resource types for managing local storage: LocalVolumeDiscovery LocalVolumeSet If you have incrementally upgraded to OpenShift Data Foundation version 4.9 from version 4.5 or earlier with the Local Storage Operator, and these resources have not yet been created, additional configuration steps are required after an update to ensure that all features work as expected. These resource types are not automatically handled as part of an update from 4.5, and must be created manually. See Post-update configuration changes for clusters backed by local storage for instructions on creating the resources. Note If you already created these resources after upgrading from 4.5, then you do not need to create them after upgrading to 4.9. 8.1. Adding annotations You need to add annotations to the storage cluster to enable replacement of failed storage devices through the user interface when you upgrade to OpenShift Data Foundation version 4.9 from a previous version. Procedure Log in to the OpenShift Container Platform Web Console. Click Home Search . Search for StorageCluster in Resources and click on it. Beside ocs-storagecluster , click Action menu (...) Edit annotations . Add cluster.ocs.openshift.io/local-devices and true for KEY and VALUE respectively. Click Save .
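If you prefer the command line over the web console, the same annotation can typically be added with oc annotate; this sketch assumes the default openshift-storage namespace and the ocs-storagecluster name shown above:
oc annotate storagecluster ocs-storagecluster -n openshift-storage cluster.ocs.openshift.io/local-devices=true
# Confirm that the annotation is present
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.metadata.annotations}{"\n"}'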
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/upgrading_to_openshift_data_foundation/post-update-configuration-changes-for-clusters-backed-by-local-storage_rhodf
Chapter 8. Uninstalling a cluster on IBM Power Virtual Server
Chapter 8. Uninstalling a cluster on IBM Power Virtual Server You can remove a cluster that you deployed to IBM Power(R) Virtual Server. 8.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. You have configured the ccoctl binary. You have installed the IBM Cloud(R) CLI and installed or updated the VPC infrastructure service plugin. For more information see "Prerequisites" in the IBM Cloud(R) CLI documentation . Procedure If the following conditions are met, this step is required: The installer created a resource group as part of the installation process. You or one of your applications created persistent volume claims (PVCs) after the cluster was deployed. In which case, the PVCs are not removed when uninstalling the cluster, which might prevent the resource group from being successfully removed. To prevent a failure: Log in to the IBM Cloud(R) using the CLI. To list the PVCs, run the following command: USD ibmcloud is volumes --resource-group-name <infrastructure_id> For more information about listing volumes, see the IBM Cloud(R) CLI documentation . To delete the PVCs, run the following command: USD ibmcloud is volume-delete --force <volume_id> For more information about deleting volumes, see the IBM Cloud(R) CLI documentation . Export the API key that was created as part of the installation process. USD export IBMCLOUD_API_KEY=<api_key> Note You must set the variable name exactly as specified. The installation program expects the variable name to be present to remove the service IDs that were created when the cluster was installed. From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. You might have to run the openshift-install destroy command up to three times to ensure a proper cleanup. Remove the manual CCO credentials that were created for the cluster: USD ccoctl ibmcloud delete-service-id \ --credentials-requests-dir <path_to_credential_requests_directory> \ --name <cluster_name> Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
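Because the destroy command might need to be run up to three times for a complete cleanup, a small wrapper loop such as the following can help; it is only a sketch and assumes the installation directory and API key described in the procedure above:
export IBMCLOUD_API_KEY=<api_key>
# Retry the cluster teardown up to three times, stopping early once it succeeds
for attempt in 1 2 3; do
  ./openshift-install destroy cluster --dir <installation_directory> --log-level info && break
done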
[ "ibmcloud is volumes --resource-group-name <infrastructure_id>", "ibmcloud is volume-delete --force <volume_id>", "export IBMCLOUD_API_KEY=<api_key>", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl ibmcloud delete-service-id --credentials-requests-dir <path_to_credential_requests_directory> --name <cluster_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_power_virtual_server/uninstalling-cluster-ibm-power-vs
Chapter 2. Building Your Application on JBoss EAP
Chapter 2. Building Your Application on JBoss EAP 2.1. Overview The following example demonstrates the use of the camel-cdi component with Red Hat Fuse on EAP to integrate CDI beans with Camel routes. In this example, a Camel route takes a message payload from a servlet HTTP GET request and passes it on to a direct endpoint. It then passes the payload on to a Camel CDI bean invocation to produce a message response and displays the output on the web browser page. 2.2. Running the Project Before running the project, ensure that your setup includes Maven and the application server with Red Hat Fuse. Note If you are using Java 17, you must enable the JBoss EAP Elytron Subsystem before you start the application, by using the following commands. For Linux: ${JBOSS_HOME}/bin/jboss-cli.sh --file=docs/examples/enable-elytron-se17.cli -Dconfig=standalone-full.xml For Windows: %JBOSS_HOME%\bin\jboss-cli.bat --file=docs\examples\enable-elytron-se17.cli -Dconfig=standalone-full.xml Perform the following steps to run your project: Start the application server in standalone mode: For Linux: ${JBOSS_HOME}/bin/standalone.sh -c standalone-full.xml For Windows: %JBOSS_HOME%\bin\standalone.bat -c standalone-full.xml Build and deploy the project: mvn install -Pdeploy Now, browse to the http://localhost:8080/example-camel-cdi/?name=World location. The message Hello World from 127.0.0.1 appears as output on the web page. You can also view the Camel route in the MyRouteBuilder.java class: The bean DSL makes Camel look for a bean named helloBean in the bean registry. The bean is implemented by the SomeBean class; because the class carries the @Named annotation, camel-cdi adds the bean to the Camel bean registry. For more information, see the $EAP_HOME/quickstarts/camel/camel-cdi directory. 2.3. BOM file for JBoss EAP The purpose of a Maven Bill of Materials (BOM) file is to provide a curated set of Maven dependency versions that work well together, saving you from having to define versions individually for every Maven artifact. The Fuse BOM for JBoss EAP offers the following advantages: Defines versions for Maven dependencies, so that you do not need to specify the version when you add a dependency to your POM. Defines a set of curated dependencies that are fully tested and supported for a specific version of Fuse. Simplifies upgrades of Fuse. Important Only the set of dependencies defined by a Fuse BOM is supported by Red Hat. To incorporate a BOM file into your Maven project, specify a dependencyManagement element in your project's pom.xml file (or, possibly, in a parent POM file), as shown in the following example: <?xml version="1.0" encoding="UTF-8" standalone="no"?> <project ...> ... <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>fuse-eap-bom</artifactId> <version>${fuse.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> ... </project> After specifying the BOM using the dependency management mechanism, it becomes possible to add Maven dependencies to your POM without specifying the version of the artifact.
For example, to add a dependency for the camel-velocity component, you would add the following XML fragment to the dependencies element in your POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-velocity</artifactId> <scope>provided</scope> </dependency> Note how the version element is omitted from this dependency definition.
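To tie the run steps together, a small smoke-test script can be useful. The following is a minimal sketch under the assumptions stated in the comments; the mvn install -Pdeploy command, the URL, and the expected Hello World from ... message are taken from the example above, while the curl check itself is only an illustration.
#!/usr/bin/env bash
# Minimal sketch: build, deploy, and smoke-test the camel-cdi example.
# Assumes the application server with Red Hat Fuse is already running in
# standalone mode (standalone-full.xml), Maven is on PATH, and this is run
# from the camel-cdi quickstart directory.
set -euo pipefail

mvn install -Pdeploy   # build and deploy the quickstart

response="$(curl -s 'http://localhost:8080/example-camel-cdi/?name=World')"
echo "${response}"
if echo "${response}" | grep -q 'Hello World from'; then
  echo "camel-cdi example is responding"
else
  echo "unexpected response; check the server log"
fi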
[ "from(\"direct:start\").bean(\"helloBean\");", "@Named(\"helloBean\") public class SomeBean { public String someMethod(String name) throws Exception { return String.format(\"Hello %s from %s\", name, InetAddress.getLocalHost().getHostAddress()); } }", "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?> <project ...> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <!-- configure the versions you want to use here --> <fuse.version>7.13.0.fuse-7_13_0-00012-redhat-00001</fuse.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>org.jboss.redhat-fuse</groupId> <artifactId>fuse-eap-bom</artifactId> <version>USD{fuse.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> </project>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-velocity</artifactId> <scope>provided</scope> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_jboss_eap/building_your_application_on_jboss_eap
Validation and troubleshooting
Validation and troubleshooting OpenShift Container Platform 4.12 Validating and troubleshooting an OpenShift Container Platform installation Red Hat OpenShift Documentation Team
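As a quick way to exercise the verification commands listed below in one pass, a short script such as the following can be used. It is a minimal sketch, assuming the oc client is installed and KUBECONFIG points at the newly installed cluster.
#!/usr/bin/env bash
# Minimal post-installation health check, assembled from the verification
# commands listed below.
set -euo pipefail

echo "== Cluster version =="
oc get clusterversion

echo "== Cluster Operators =="
oc get clusteroperators.config.openshift.io

echo "== Nodes =="
oc get nodes

echo "== Node resource usage (needs the metrics stack) =="
oc adm top nodes || echo "metrics not available yet"

echo "== Available updates =="
oc adm upgrade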
[ "cat <install_dir>/.openshift_install.log", "time=\"2020-12-03T09:50:47Z\" level=info msg=\"Install complete!\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Login to the console with user: \\\"kubeadmin\\\", and password: \\\"password\\\"\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Time elapsed per stage:\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Infrastructure: 6m45s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\"Bootstrap Complete: 11m30s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Bootstrap Destroy: 1m5s\" time=\"2020-12-03T09:50:47Z\" level=debug msg=\" Cluster Operators: 17m31s\" time=\"2020-12-03T09:50:47Z\" level=info msg=\"Time elapsed: 37m26s\"", "oc adm node-logs <node_name> -u crio", "Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1366]: time=\"2021-08-05 10:33:21.594930907Z\" level=info msg=\"Pulling image: quay.io/openshift-release-dev/ocp-release:4.10.0-ppc64le\" id=abcd713b-d0e1-4844-ac1c-474c5b60c07c name=/runtime.v1alpha2.ImageService/PullImage Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.194341109Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\" Mar 17 02:52:50 ip-10-0-138-140.ec2.internal crio[1484]: time=\"2021-03-17 02:52:50.226788351Z\" level=info msg=\"Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"\"", "Trying to access \\\"li0317gcp1.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\" Trying to access \\\"li0317gcp2.mirror-registry.qe.gcp.devcluster.openshift.com:5000/ocp/release@sha256:1926eae7cacb9c00f142ec98b00628970e974284b6ddaf9a6a086cb9af7a6c31\\\"", "oc get clusterversion", "NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.6.4 True False 6m25s Cluster version is 4.6.4", "oc get clusteroperators.config.openshift.io", "oc describe clusterversion", "oc get clusterversion -o jsonpath='{.items[0].spec}{\"\\n\"}'", "{\"channel\":\"stable-4.6\",\"clusterID\":\"245539c1-72a3-41aa-9cec-72ed8cf25c5c\"}", "oc adm upgrade", "Cluster version is 4.6.4 Updates: VERSION IMAGE 4.6.6 quay.io/openshift-release-dev/ocp-release@sha256:c7e8f18e8116356701bd23ae3a23fb9892dd5ea66c8300662ef30563d7104f39", "oc get nodes", "NAME STATUS ROLES AGE VERSION compute-1.example.com Ready worker 33m v1.25.0 control-plane-1.example.com Ready master 41m v1.25.0 control-plane-2.example.com Ready master 45m v1.25.0 compute-2.example.com Ready worker 38m v1.25.0 compute-3.example.com Ready worker 33m v1.25.0 control-plane-3.example.com Ready master 41m v1.25.0", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% compute-1.example.com 128m 8% 1132Mi 16% control-plane-1.example.com 801m 22% 3471Mi 23% control-plane-2.example.com 1718m 49% 6085Mi 40% compute-2.example.com 935m 62% 5178Mi 75% compute-3.example.com 111m 7% 1131Mi 16% control-plane-3.example.com 942m 26% 4100Mi 27%", "./openshift-install gather bootstrap --dir 
<installation_directory> 1", "./openshift-install gather bootstrap --dir <installation_directory> \\ 1 --bootstrap <bootstrap_address> \\ 2 --master <master_1_address> \\ 3 --master <master_2_address> \\ 4 --master <master_3_address>\" 5", "INFO Pulling debug logs from the bootstrap machine INFO Bootstrap gather logs captured here \"<installation_directory>/log-bundle-<timestamp>.tar.gz\"", "journalctl -b -f -u bootkube.service", "for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done", "tail -f /var/lib/containers/storage/overlay-containers/*/userdata/ctr.log", "journalctl -b -f -u kubelet.service -u crio.service", "sudo tail -f /var/log/containers/*", "oc adm node-logs --role=master -u kubelet", "oc adm node-logs --role=master --path=openshift-apiserver", "cat ~/<installation_directory>/.openshift_install.log 1", "./openshift-install create cluster --dir <installation_directory> --log-level debug 1", "./openshift-install destroy cluster --dir <installation_directory> 1", "rm -rf <installation_directory>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/validation_and_troubleshooting/index
Use Red Hat Quay
Use Red Hat Quay Red Hat Quay 3.13 Use Red Hat Quay Red Hat OpenShift Documentation Team
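Many of the tasks in this guide are driven through the Quay API with curl and a bearer token. The following minimal sketch combines a few of the API calls shown below (creating an organization, a repository, and a robot account); the registry URL, token, and the testorg, testrepo, and ci-robot names are placeholders, not values from this guide.
#!/usr/bin/env bash
# Minimal sketch of a Quay API workflow; QUAY, TOKEN, and all names are placeholders.
set -euo pipefail

QUAY="https://quay-server.example.com"
TOKEN="<bearer_token>"

# Create an organization.
curl -X POST -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  -d '{ "name": "testorg" }' \
  "${QUAY}/api/v1/organization/"

# Create a private repository; the payload mirrors the repository-creation call shown below.
curl -X POST -H "Authorization: Bearer ${TOKEN}" -H "Content-Type: application/json" \
  -d '{ "repository": "testrepo", "visibility": "private", "description": "Example repository created via the API." }' \
  "${QUAY}/api/v1/repository"

# Create a robot account in the organization; the response contains its token.
curl -X PUT -H "Authorization: Bearer ${TOKEN}" \
  "${QUAY}/api/v1/organization/testorg/robots/ci-robot"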
[ "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"username\": \"newuser\", \"email\": \"[email protected]\" }' \"https://<quay-server.example.com>/api/v1/superuser/users/\"", "{\"username\": \"newuser\", \"email\": \"[email protected]\", \"password\": \"123456789\", \"encrypted_password\": \"<example_encrypted_password>/JKY9pnDcsw=\"}", "podman login <quay-server.example.com>", "username: newuser password: 123456789", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\"", "{\"users\": [{\"kind\": \"user\", \"name\": \"quayadmin\", \"username\": \"quayadmin\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}, \"super_user\": true, \"enabled\": true}, {\"kind\": \"user\", \"name\": \"newuser\", \"username\": \"newuser\", \"email\": \"[email protected]\", \"verified\": true, \"avatar\": {\"name\": \"newuser\", \"hash\": \"f338a2c83bfdde84abe2d3348994d70c34185a234cfbf32f9e323e3578e7e771\", \"color\": \"#9edae5\", \"kind\": \"user\"}, \"super_user\": false, \"enabled\": true}]}", "curl -X DELETE -H \"Authorization: Bearer <insert token here>\" https://<quay-server.example.com>/api/v1/superuser/users/<username>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/superuser/users/\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<new_organization_name>\" }' \"https://<quay-server.example.com>/api/v1/organization/\"", "\"Created\"", "curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"email\": \"<org_email>\", \"invoice_email\": <true/false>, \"invoice_email_address\": \"<billing_email>\" }'", "{\"name\": \"test\", \"email\": \"[email protected]\", \"avatar\": {\"name\": \"test\", \"hash\": \"a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe\", \"color\": \"#aec7e8\", \"kind\": \"user\"}, \"is_admin\": true, \"is_member\": true, \"teams\": {\"owners\": {\"name\": \"owners\", \"description\": \"\", \"role\": \"admin\", \"avatar\": {\"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\"}, \"can_view\": true, \"repo_count\": 0, \"member_count\": 1, \"is_synced\": false}}, \"ordered_teams\": [\"owners\"], \"invoice_email\": true, \"invoice_email_address\": \"[email protected]\", \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"quotas\": [{\"id\": 2, \"limit_bytes\": 10737418240, \"limits\": [{\"id\": 1, \"type\": \"Reject\", \"limit_percent\": 90}]}], \"quota_report\": {\"quota_bytes\": 0, \"configured_quota\": 10737418240, \"running_backfill\": \"complete\", \"backfill_status\": \"complete\"}}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"", "{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://<quay-server.example.com>/api/v1/error/not_found\", \"status\": 404}", "sudo 
podman pull busybox", "Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9", "sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test", "sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test", "Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<private>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"", "{\"namespace\": \"quayadmin\", \"name\": \"<new_repository_name>\", \"kind\": \"image\"}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"", "{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://quay-server.example.com/api/v1/error/not_found\", \"status\": 404}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_name>\"", "{\"name\": \"orgname+robot-name\", \"created\": \"Fri, 10 May 2024 15:11:00 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/user/robots/<robot_name>\"", "{\"name\": \"quayadmin+robot-name\", \"created\": \"Fri, 10 May 2024 15:24:57 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\", \"unstructured_metadata\": null}", "ROBOTS_DISALLOW: true", "podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" <quay-server.example.com>", "Error: logging into \"<quay-server.example.com>\": invalid username/password", "podman login -u=\"<organization-name/username>+<robot-name>\" -p=\"KETJ6VN0WT8YLLNXUJJ4454ZI6TZJ98NV41OE02PC2IQXVXRFQ1EJ36V12345678\" --log-level=debug <quay-server.example.com>", "DEBU[0000] error logging into \"quay-server.example.com\": unable to retrieve auth token: invalid username/password: unauthorized: Robot accounts have been disabled. 
Please contact your administrator.", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<orgname>/robots/<robot_shortname>/regenerate\"", "{\"name\": \"test-org+test\", \"created\": \"Fri, 10 May 2024 17:46:02 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>/regenerate\"", "{\"name\": \"quayadmin+test\", \"created\": \"Fri, 10 May 2024 14:12:11 -0000\", \"last_accessed\": null, \"description\": \"\", \"token\": \"<example_secret>\"}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/robots/<robot_shortname>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>/robots\"", "{\"robots\": []}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/user/robots/<robot_shortname>\"", "{\"message\":\"Could not find robot with specified username\"}", "http://localhost:8080/realms/master/protocol/openid-connect/token", "http://<keycloak_url>/realms/<realm_name>/protocol/openid-connect/auth?response_type=code&client_id=<client_id>", "https://localhost:3000/cb?session_state=5c9bce22-6b85-4654-b716-e9bbb3e755bc&iss=http%3A%2F%2Flocalhost%3A8080%2Frealms%2Fmaster&code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43", "code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43", "curl -X POST \"http://localhost:8080/realms/master/protocol/openid-connect/token\" 1 -H \"Content-Type: application/x-www-form-urlencoded\" -d \"client_id=quaydev\" 2 -d \"client_secret=g8gPsBLxVrLo2PjmZkYBdKvcB9C7fmBz\" 3 -d \"grant_type=authorization_code\" -d \"code=ea5b76eb-47a5-4e5d-8f71-0892178250db.5c9bce22-6b85-4654-b716-e9bbb3e755bc.cdffafbc-20fb-42b9-b254-866017057f43\" 4", "{\"access_token\":\"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0...\", \"expires_in\":60,\"refresh_expires_in\":1800,\"refresh_token\":\"eyJhbGciOiJIUzUxMiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJiNTBlZTVkMS05OTc1LTQwMzUtYjNkNy1lMWQ5ZTJmMjg0MTEifQ.oBDx6B3pUkXQO8m-M3hYE7v-w25ak6y70CQd5J8f5EuldhvTwpWrC1K7yOglvs09dQxtq8ont12rKIoCIi4WXw\",\"token_type\":\"Bearer\",\"not-before-policy\":0,\"session_state\":\"5c9bce22-6b85-4654-b716-e9bbb3e755bc\",\"scope\":\"profile email\"}", "import requests import os TOKEN=os.environ.get('TOKEN') robot_user = \"fed-test+robot1\" def get_quay_robot_token(fed_token): URL = \"https://<quay-server.example.com>/oauth2/federation/robot/token\" response = requests.get(URL, auth=(robot_user,fed_token)) 1 print(response) print(response.text) if __name__ == \"__main__\": get_quay_robot_token(TOKEN)", "export TOKEN = eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJTVmExVHZ6eDd2cHVmc1dkZmc1SHdua1ZDcVlOM01DN1N5T016R0QwVGhVIn0", "python3 robot_fed_token_auth.py", "<Response [200]> {\"token\": \"291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ...\"}", "export 
QUAY_TOKEN=291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6InByb2ZpbGUgZW1haWwiLCJlbWFpbF92ZXJpZ", "podman login <quay-server.example.com> -u fed_test+robot1 -p USDQUAY_TOKEN", "podman pull <quay-server.example.com/<repository_name>/<image_name>>", "Getting image source signatures Copying blob 900e6061671b done Copying config 8135583d97 done Writing manifest to image destination Storing signatures 8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d 0.57user 0.11system 0:00.99elapsed 68%CPU (0avgtext+0avgdata 78716maxresident)k 800inputs+15424outputs (18major+6528minor)pagefaults 0swaps", "podman pull <quay-server.example.com/<different_repository_name>/<image_name>>", "Error: initializing source docker://quay-server.example.com/example_repository/busybox:latest: reading manifest in quay-server.example.com/example_repository/busybox: unauthorized: access to the requested resource is not authorized", "curl -k -X PUT -H 'Accept: application/json' -H 'Content-Type: application/json' -H \"Authorization: Bearer <bearer_token>\" --data '{\"role\": \"creator\"}' https://<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>", "{\"name\": \"example_team\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"example_team\", \"hash\": \"dec209fd7312a2284b689d4db3135e2846f27e0f40fa126776a0ce17366bc989\", \"color\": \"#e7ba52\", \"kind\": \"team\"}, \"new_team\": true}", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"", "{\"name\": \"testuser\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"d51d17303dc3271ac3266fb332d7df919bab882bbfc7199d2017a4daac8979f0\", \"color\": \"#5254a3\", \"kind\": \"user\"}, \"invited\": false}", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members/<member_name>\"", "curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/members\"", "{\"name\": \"owners\", \"members\": [{\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}, \"invited\": false}, {\"name\": \"test-org+test\", \"kind\": \"user\", \"is_robot\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}, \"invited\": false}], \"can_edit\": true}", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/invite/<email>\"", "curl -X GET -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>/permissions\"", "{\"permissions\": [{\"repository\": {\"name\": \"api-repo\", \"is_public\": true}, \"role\": \"admin\"}]}", "curl -X PUT -H \"Authorization: Bearer <your_access_token>\" -H 
\"Content-Type: application/json\" -d '{ \"role\": \"<role>\" }' \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"", "{\"name\": \"testteam\", \"description\": \"\", \"can_view\": true, \"role\": \"creator\", \"avatar\": {\"name\": \"testteam\", \"hash\": \"827f8c5762148d7e85402495b126e0a18b9b168170416ed04b49aae551099dc8\", \"color\": \"#ff7f0e\", \"kind\": \"team\"}, \"new_team\": false}", "curl -X DELETE -H \"Authorization: Bearer <your_access_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>/team/<team_name>\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"<admin_read_or_write>\", \"delegate\": { \"name\": \"<username>\", \"kind\": \"user\" }, \"activating_user\": { \"name\": \"<robot_name>\" } }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes", "{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"admin\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"role\": \"write\" }' https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototypeid>", "{\"activating_user\": {\"name\": \"test-org+test\", \"is_robot\": true, \"kind\": \"user\", \"is_org_member\": true, \"avatar\": {\"name\": \"test-org+test\", \"hash\": \"aa85264436fe9839e7160bf349100a9b71403a5e9ec684d5b5e9571f6c821370\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}, \"delegate\": {\"name\": \"testuser\", \"is_robot\": false, \"kind\": \"user\", \"is_org_member\": false, \"avatar\": {\"name\": \"testuser\", \"hash\": \"f660ab912ec121d1b1e928a0bb4bc61b15f5ad44d5efdc4e1c92a25e99b8e44a\", \"color\": \"#6b6ecf\", \"kind\": \"user\"}}, \"role\": \"write\", \"id\": \"977dc2bc-bc75-411d-82b3-604e5b79a493\"}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes/<prototype_id>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/organization/<organization_name>/prototypes", "{\"prototypes\": []}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{\"role\": \"admin\"}' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>", "{\"role\": \"admin\", \"name\": \"quayadmin+test\", \"is_robot\": true, \"avatar\": {\"name\": \"quayadmin+test\", \"hash\": \"ca9afae0a9d3ca322fc8a7a866e8476dd6c98de543decd186ae090e420a88feb\", \"color\": \"#8c564b\", \"kind\": \"robot\"}}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H 
\"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/permissions/user/<username>/", "{\"message\":\"User does not have permission for repo.\"}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>?includeTags=true", "{\"namespace\": \"quayadmin\", \"name\": \"busybox\", \"kind\": \"image\", \"description\": null, \"is_public\": false, \"is_organization\": false, \"is_starred\": false, \"status_token\": \"d8f5e074-690a-46d7-83c8-8d4e3d3d0715\", \"trust_enabled\": false, \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"state\": \"NORMAL\", \"tags\": {\"example\": {\"name\": \"example\", \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}, \"test\": {\"name\": \"test\", \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}}, \"can_write\": true, \"can_admin\": true}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/", "{\"tags\": [{\"name\": \"test-two\", \"reversion\": true, \"start_ts\": 1718737153, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1718737029, \"end_ts\": 1718737153, \"manifest_digest\": \"sha256:0cd3dd6236e246b349e63f76ce5f150e7cd5dbf2f2f1f88dbd734430418dbaea\", \"is_manifest_list\": false, \"size\": 2275317, \"last_modified\": \"Tue, 18 Jun 2024 18:57:09 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1718737018, \"end_ts\": 1718737029, \"manifest_digest\": \"sha256:0cd3dd6236e246b349e63f76ce5f150e7cd5dbf2f2f1f88dbd734430418dbaea\", \"is_manifest_list\": false, \"size\": 2275317, \"last_modified\": \"Tue, 18 Jun 2024 18:56:58 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:57:09 -0000\"}, {\"name\": \"sample_tag\", \"reversion\": false, \"start_ts\": 1718736147, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:42:27 -0000\"}, {\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1717680780, \"end_ts\": 1718737018, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:33:00 -0000\", \"expiration\": \"Tue, 18 Jun 2024 18:56:58 -0000\"}, {\"name\": \"tag-test\", \"reversion\": false, \"start_ts\": 1717680378, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:26:18 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}], 
\"page\": 1, \"has_additional\": false}", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>", "\"Updated\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore", "{}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag", "{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels", "{\"labels\": [{\"id\": \"e9f717d2-c1dd-4626-802d-733a029d17ad\", \"key\": \"org.opencontainers.image.url\", \"value\": \"https://github.com/docker-library/busybox\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}, {\"id\": \"2d34ec64-4051-43ad-ae06-d5f81003576a\", \"key\": \"org.opencontainers.image.version\", \"value\": \"1.36.1-glibc\", \"source_type\": \"manifest\", \"media_type\": \"text/plain\"}]}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<label_id>", "{\"id\": \"e9f717d2-c1dd-4626-802d-733a029d17ad\", \"key\": \"org.opencontainers.image.url\", \"value\": \"https://github.com/docker-library/busybox\", \"source_type\": \"manifest\", \"media_type\": 
\"text/plain\"}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"key\": \"<key>\", \"value\": \"<value>\", \"media_type\": \"<media_type>\" }' https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels", "{\"label\": {\"id\": \"346593fd-18c8-49db-854f-4cb1fb76ff9c\", \"key\": \"example-key\", \"value\": \"example-value\", \"source_type\": \"api\", \"media_type\": \"text/plain\"}}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/manifest/<manifestref>/labels/<labelid>", "docker label quay.expires-after=20h quay-server.example.com/quayadmin/<image>:<tag>", "curl -X PUT -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": \"<manifest_digest>\" }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>", "\"Updated\"", "podman pull quay-server.example.com/quayadmin/busybox:test2", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository>/tag/?onlyActiveTags=true&page=1&limit=10\"", "{\"tags\": [{\"name\": \"test-two\", \"reversion\": false, \"start_ts\": 1717680780, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:33:00 -0000\"}, {\"name\": \"tag-test\", \"reversion\": false, \"start_ts\": 1717680378, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Thu, 06 Jun 2024 13:26:18 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}], \"page\": 1, \"has_additional\": false}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/quayadmin/busybox/tag/?onlyActiveTags=true&page=1&limit=20&specificTag=test-two\"", "{\"tags\": [{\"name\": \"test-two\", \"reversion\": true, \"start_ts\": 1718737153, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 18 Jun 2024 18:59:13 -0000\"}], \"page\": 1, \"has_additional\": false}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag/<tag>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag", "{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, 
\"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"manifest_digest\": <manifest_digest> }' quay-server.example.com/api/v1/repository/quayadmin/busybox/tag/test/restore", "{}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/tag", "{\"tags\": [{\"name\": \"test\", \"reversion\": false, \"start_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"example\", \"reversion\": false, \"start_ts\": 1715697708, \"end_ts\": 1715698131, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:41:48 -0000\", \"expiration\": \"Tue, 14 May 2024 14:48:51 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715695488, \"end_ts\": 1716324069, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Tue, 14 May 2024 14:04:48 -0000\", \"expiration\": \"Tue, 21 May 2024 20:41:09 -0000\"}, {\"name\": \"test\", \"reversion\": false, \"start_ts\": 1715631517, \"end_ts\": 1715695488, \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\", \"is_manifest_list\": false, \"size\": 2275314, \"last_modified\": \"Mon, 13 May 2024 20:18:37 -0000\", \"expiration\": \"Tue, 14 May 2024 14:04:48 -0000\"}], \"page\": 1, \"has_additional\": false}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"https://<quay-server.example.com>/api/v1/user/aggregatelogs\"", "{\"aggregated\": [{\"kind\": \"create_tag\", \"count\": 1, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}, {\"kind\": \"manifest_label_add\", \"count\": 1, \"datetime\": \"Tue, 18 
Jun 2024 00:00:00 -0000\"}, {\"kind\": \"push_repo\", \"count\": 2, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}, {\"kind\": \"revert_tag\", \"count\": 1, \"datetime\": \"Tue, 18 Jun 2024 00:00:00 -0000\"}]}", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/aggregatelogs?performer=<username>&starttime=<MM/DD/YYYY>&endtime=<MM/DD/YYYY>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/organization/{orgname}/aggregatelogs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/repository/<repository_name>/<namespace>/aggregatelogs?starttime=2024-01-01&endtime=2024-06-18\"\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"<quay-server.example.com>/api/v1/user/logs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://quay-server.example.com/api/v1/user/logs?performer=quayuser&starttime=01/01/2024&endtime=06/18/2024\"", "--- {\"start_time\": \"Mon, 01 Jan 2024 00:00:00 -0000\", \"end_time\": \"Wed, 19 Jun 2024 00:00:00 -0000\", \"logs\": [{\"kind\": \"revert_tag\", \"metadata\": {\"username\": \"quayuser\", \"repo\": \"busybox\", \"tag\": \"test-two\", \"manifest_digest\": \"sha256:57583a1b9c0a7509d3417387b4f43acf80d08cdcf5266ac87987be3f8f919d5d\"}, \"ip\": \"192.168.1.131\", \"datetime\": \"Tue, 18 Jun 2024 18:59:13 -0000\", \"performer\": {\"kind\": \"user\", \"name\": \"quayuser\", \"is_robot\": false, \"avatar\": {\"name\": \"quayuser\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, {\"kind\": \"push_repo\", \"metadata\": {\"repo\": \"busybox\", \"namespace\": \"quayuser\", \"user-agent\": \"containers/5.30.1 (github.com/containers/image)\", \"tag\": \"test-two\", \"username\": \"quayuser\", } ---", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/organization/{orgname}/logs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" \"http://<quay-server.example.com>/api/v1/repository/{repository}/logs\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/user/exportlogs\"", "{\"export_id\": \"6a0b9ea9-444c-4a19-9db8-113201c38cd4\"}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"<MM/DD/YYYY>\", \"endtime\": \"<MM/DD/YYYY>\", \"callback_email\": \"[email protected]\" }' \"http://<quay-server.example.com>/api/v1/organization/{orgname}/exportlogs\"", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -H \"Accept: application/json\" -d '{ \"starttime\": \"2024-01-01\", \"endtime\": \"2024-06-18\", \"callback_url\": \"http://your-callback-url.example.com\" }' \"http://<quay-server.example.com>/api/v1/repository/{repository}/exportlogs\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" 
\"https://quay-server.example.com/api/v1/repository/<namespace>/<repository>/manifest/<manifest_digest>/security?vulnerabilities=<true_or_false>\"", "{\"status\": \"queued\", \"data\": null}", "NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300 1", "Test Notification Queued A test version of this notification has been queued and should appear shortly", "{ \"repository\": \"sample_org/busybox\", \"namespace\": \"sample_org\", \"name\": \"busybox\", \"docker_url\": \"quay-server.example.com/sample_org/busybox\", \"homepage\": \"http://quay-server.example.com/repository/sample_org/busybox\", \"tags\": [ \"latest\", \"v1\" ], \"expiring_in\": \"1 days\" }", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/", "{\"uuid\": \"240662ea-597b-499d-98bb-2b57e73408d6\", \"title\": null, \"event\": \"repo_push\", \"method\": \"quay_notification\", \"config\": {\"target\": {\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, \"event_config\": {}, \"number_of_failures\": 0}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test", "{}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification", "{\"notifications\": []}", "{ \"name\": \"repository\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"updated_tags\": [ \"latest\" ] }", "{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"repo\": \"test\", \"trigger_metadata\": { \"default_branch\": \"master\", \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional }, \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" } } }, \"is_manual\": false, \"manual_user\": 
null, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\" }", "{ \"build_id\": \"a8cc247a-a662-4fee-8dcb-7d7e822b71ba\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"50bc599\", \"trigger_metadata\": { //Optional \"commit\": \"50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"date\": \"2019-03-06T14:10:14+11:00\", \"message\": \"test build\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/a8cc247a-a662-4fee-8dcb-7d7e822b71ba\" }", "{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"b7f7d2b\", \"image_id\": \"sha256:0339f178f26ae24930e9ad32751d6839015109eabdf1c25b3b0f2abf8934f6cb\", \"trigger_metadata\": { \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\", \"manifest_digests\": [ \"quay.io/dgangaia/test@sha256:2a7af5265344cc3704d5d47c4604b1efcbd227a7a6a6ff73d6e4e08a27fd7d99\", \"quay.io/dgangaia/test@sha256:569e7db1a867069835e8e97d50c96eccafde65f08ea3e0d5debaf16e2545d9d1\" ] }", "{ \"build_id\": \"5346a21d-3434-4764-85be-5be1296f293c\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"docker_url\": \"quay.io/dgangaia/test\", \"error_message\": \"Could not find or parse Dockerfile: unknown instruction: GIT\", \"namespace\": \"dgangaia\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"6ae9a86\", \"trigger_metadata\": { //Optional \"commit\": \"6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"ref\": \"refs/heads/master\", 
\"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"date\": \"2019-03-06T14:18:16+11:00\", \"message\": \"failed build test\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/5346a21d-3434-4764-85be-5be1296f293c\" }", "{ \"build_id\": \"cbd534c5-f1c0-4816-b4e3-55446b851e70\", \"trigger_kind\": \"github\", \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"cbce83c\", \"trigger_metadata\": { \"commit\": \"cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { \"url\": \"https://github.com/dgangaia/test/commit/cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"date\": \"2019-03-06T14:27:53+11:00\", \"message\": \"testing cancel build\", \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" }, \"author\": { \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/cbd534c5-f1c0-4816-b4e3-55446b851e70\" }", "{ \"repository\": \"dgangaia/repository\", \"namespace\": \"dgangaia\", \"name\": \"repository\", \"docker_url\": \"quay.io/dgangaia/repository\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"tags\": [\"latest\", \"othertag\"], \"vulnerability\": { \"id\": \"CVE-1234-5678\", \"description\": \"This is a bad vulnerability\", \"link\": \"http://url/to/vuln/info\", \"priority\": \"Critical\", \"has_fix\": true } }", "**Default:** `False`", "FEATURE_QUOTA_MANAGEMENT: true FEATURE_GARBAGE_COLLECTION: true PERMANENTLY_DELETE_TAGS: true QUOTA_TOTAL_DELAY_SECONDS: 1800 RESET_CHILD_MANIFEST_EXPIRATION: true", "curl -X POST \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": 10737418240, \"limits\": \"10 Gi\" }'", "\"Created\"", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq", "[{\"id\": 1, \"limit_bytes\": 10737418240, \"limit\": \"10.0 GiB\", \"default_config\": false, \"limits\": [], \"default_config_exists\": false}]", "curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>/quota/<quota_id>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"limit_bytes\": <limit_in_bytes> }'", "{\"id\": 1, \"limit_bytes\": 21474836480, \"limit\": \"20.0 GiB\", \"default_config\": false, \"limits\": [], 
\"default_config_exists\": false}", "podman pull ubuntu:18.04 podman tag docker.io/library/ubuntu:18.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04 podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:18.04", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true' | jq", "{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false } ] }", "podman pull nginx podman tag docker.io/library/nginx example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx podman push --tls-verify=false example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/nginx", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/repository?last_modified=true&namespace=testorg&popularity=true&public=true'", "{ \"repositories\": [ { \"namespace\": \"testorg\", \"name\": \"ubuntu\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 27959066, \"configured_quota\": 104857600 }, \"last_modified\": 1651225630, \"popularity\": 0, \"is_starred\": false }, { \"namespace\": \"testorg\", \"name\": \"nginx\", \"description\": null, \"is_public\": false, \"kind\": \"image\", \"state\": \"NORMAL\", \"quota_report\": { \"quota_bytes\": 59231659, \"configured_quota\": 104857600 }, \"last_modified\": 1651229507, \"popularity\": 0, \"is_starred\": false } ] }", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' 'https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg' | jq", "{ \"name\": \"testorg\", \"quotas\": [ { \"id\": 1, \"limit_bytes\": 104857600, \"limits\": [] } ], \"quota_report\": { \"quota_bytes\": 87190725, \"configured_quota\": 104857600 } }", "curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Reject\",\"threshold_percent\":80}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit", "curl -k -X POST -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' -d '{\"type\":\"Warning\",\"threshold_percent\":50}' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota/1/limit", "curl -k -X GET -H \"Authorization: Bearer <token>\" -H 'Content-Type: application/json' https://example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/api/v1/organization/testorg/quota | jq", "[ { \"id\": 1, \"limit_bytes\": 104857600, \"default_config\": false, \"limits\": [ { \"id\": 2, \"type\": \"Warning\", \"limit_percent\": 50 }, { \"id\": 1, \"type\": \"Reject\", \"limit_percent\": 80 } ], \"default_config_exists\": false } ]", "podman pull ubuntu:20.04 podman tag docker.io/library/ubuntu:20.04 example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04 podman push --tls-verify=false 
example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org/testorg/ubuntu:20.04", "Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0002] failed, retrying in 1s ... (1/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0005] failed, retrying in 1s ... (2/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB WARN[0009] failed, retrying in 1s ... (3/3). Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace Getting image source signatures Copying blob d4dfaa212623 [--------------------------------------] 8.0b / 3.5KiB Copying blob cba97cc5811c [--------------------------------------] 8.0b / 15.0KiB Copying blob 0c78fac124da [--------------------------------------] 8.0b / 71.8MiB Error: Error writing blob: Error initiating layer upload to /v2/testorg/ubuntu/blobs/uploads/ in example-registry-quay-quay-enterprise.apps.docs.gcp.quaydev.org: denied: Quota has been exceeded on namespace", "podman pull <registry_url>/<organization_name>/<quayio_namespace>/<image_name>", "podman pull quay-server.example.com/proxytest/projectquay/quay:3.7.9", "podman pull quay-server.example.com/proxytest/projectquay/quay:3.6.2", "podman pull quay-server.example.com/proxytest/projectquay/quay:3.5.1", "sudo cp rootCA.pem /etc/pki/ca-trust/source/anchors/", "sudo update-ca-trust extract", "helm repo add redhat-cop https://redhat-cop.github.io/helm-charts", "helm repo update", "helm pull redhat-cop/etherpad --version=0.0.4 --untar", "helm package ./etherpad", "Successfully packaged chart and saved it to: /home/user/linux-amd64/etherpad-0.0.4.tgz", "helm registry login quay370.apps.quayperf370.perfscale.devcluster.openshift.com", "helm push etherpad-0.0.4.tgz oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com", "Pushed: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:a6667ff2a0e2bd7aa4813db9ac854b5124ff1c458d170b70c2d2375325f2451b", "rm -rf etherpad-0.0.4.tgz", "helm pull oci://quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad --version 0.0.4", "Pulled: quay370.apps.quayperf370.perfscale.devcluster.openshift.com/etherpad:0.0.4 Digest: sha256:4f627399685880daf30cf77b6026dc129034d68c7676c7e07020b70cf7130902", "oras push --annotation \"quay.expires-after=2d\" \\ 1 --annotation \"expiration = 2d\" \\ 2 
quay.io/<organization_name>/<repository>/<image_name>:<tag>", "[✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 561/561 B 100.00% 511ms └─ sha256:9b4f2d43b62534423894d077f0ff0e9e496540ec8b52b568ea8b757fc9e7996b Pushed [registry] quay.io/stevsmit/testorg3/oci-image:v1 ArtifactType: application/vnd.unknown.artifact.v1 Digest: sha256:9b4f2d43b62534423894d077f0ff0e9e496540ec8b52b568ea8b757fc9e7996b", "oras pull quay.io/<organization_name>/<repository>/<image_name>:<tag>", "oras manifest fetch quay.io/<organization_name>/<repository>/<image_name>:<tag>", "{\"schemaVersion\":2,\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"artifactType\":\"application/vnd.unknown.artifact.v1\",\"config\":{\"mediaType\":\"application/vnd.oci.empty.v1+json\",\"digest\":\"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a\",\"size\":2,\"data\":\"e30=\"},\"layers\":[{\"mediaType\":\"application/vnd.oci.empty.v1+json\",\"digest\":\"sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a\",\"size\":2,\"data\":\"e30=\"}],\"annotations\":{\"org.opencontainers.image.created\":\"2024-07-11T15:22:42Z\",\"version \":\" 8.11\"}}", "podman tag <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>", "podman push <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>", "oras attach --artifact-type <MIME_type> --distribution-spec v1.1-referrers-api <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> <example_file>.txt", "-spec v1.1-referrers-api quay.io/testorg3/myartifact-image:v1.0 hi.txt [✓] Exists hi.txt 3/3 B 100.00% 0s └─ sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4 [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 723/723 B 100.00% 677ms └─ sha256:31c38e6adcc59a3cfbd2ef971792aaf124cbde8118e25133e9f9c9c4cd1d00c6 Attached to [registry] quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Digest: sha256:31c38e6adcc59a3cfbd2ef971792aaf124cbde8118e25133e9f9c9c4cd1d00c6", "oras attach --artifact-type <MIME_type> --distribution-spec v1.1-referrers-tag <myartifact_image> <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag> <example_file>.txt", "[✓] Exists hi.txt 3/3 B 100.00% 0s └─ sha256:98ea6e4f216f2fb4b69fff9b3a44842c38686ca685f3f55dc48c5d3fb1107be4 [✓] Exists application/vnd.oci.empty.v1+json 2/2 B 100.00% 0s └─ sha256:44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a [✓] Uploaded application/vnd.oci.image.manifest.v1+json 723/723 B 100.00% 465ms └─ sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 Attached to [registry] quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Digest: sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383", "oras discover --insecure --distribution-spec v1.1-referrers-tag <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>", "quay.io/testorg3/myartifact-image@sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da └── doc/example └── 
sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383", "oras discover --distribution-spec v1.1-referrers-api <quay-server.example.com>/<organization_name>/<repository>/<image_name>:<tag>", "Discovered 3 artifacts referencing v1.0 Digest: sha256:db440c57edfad40c682f9186ab1c1075707ce7a6fdda24a89cb8c10eaad424da Artifact Type Digest sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383 sha256:22b7e167793808f83db66f7d35fbe0088b34560f34f8ead36019a4cc48fd346b sha256:bb2b7e7c3a58fd9ba60349473b3a746f9fe78995a88cb329fc2fd1fd892ea4e4", "FEATURE_REFERRERS_API: true", "echo -n '<username>:<password>' | base64", "abcdeWFkbWluOjE5ODlraWROZXQxIQ==", "curl --location '<quay-server.example.com>/v2/auth?service=<quay-server.example.com>&scope=repository:quay/listocireferrs:pull,push' --header 'Authorization: Basic <base64_username:password_encode_token>' -k | jq", "{ \"token\": \"<example_token_output>...\" }", "GET https://<quay-server.example.com>/v2/<organization_name>/<repository_name>/referrers/sha256:0de63ba2d98ab328218a1b6373def69ec0d0e7535866f50589111285f2bf3fb8 --header 'Authorization: Bearer <v2_bearer_token> -k | jq", "{ \"schemaVersion\": 2, \"mediaType\": \"application/vnd.oci.image.index.v1+json\", \"manifests\": [ { \"mediaType\": \"application/vnd.oci.image.manifest.v1+json\", \"digest\": \"sha256:2d4b54201c8b134711ab051389f5ba24c75c2e6b0f0ff157fce8ffdfe104f383\", \"size\": 793 }, ] }" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html-single/use_red_hat_quay/index
Preface
Preface Red Hat OpenShift Data Foundation 4.9 supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) Google Cloud clusters. Note Only internal OpenShift Data Foundation clusters are supported on Google Cloud. See Planning your deployment for more information about deployment requirements. To deploy OpenShift Data Foundation in internal mode, start with the requirements in the Preparing to deploy OpenShift Data Foundation chapter and follow the appropriate deployment process based on your requirement: Deploy OpenShift Data Foundation on Google Cloud Deploy standalone Multicloud Object Gateway component
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/preface-ocs-gcp
Chapter 9. Using post processors to modify event messages
Chapter 9. Using post processors to modify event messages Post processors perform lightweight, per-message mutations, similar to the modifications that are performed by single message transformations (SMTs). However, Debezium calls post processors earlier in the event chain than transformations, enabling post processors to act on messages before they are handed off to the messaging runtime. Because post processors can act on messages from within the Debezium context, they are more efficient at modifying event payloads than transformations. For a transformation to modify a message, it must recreate the message's immutable ConnectRecord , or more aptly, its SourceRecord . By contrast, because a post processor acts within the Debezium scope, it can operate on mutable Struct types in the event payload of a message, modifying payloads before the construction of the SourceRecord . Close integration with Debezium provides post processors with access to Debezium internals, such as Debezium metadata about database connections, relational schema model, and so forth. In turn, this access enhances efficiency when performing tasks that rely on such internal information. For example, the Reselect columns post processor can automatically re-query the database to reselect a record and retrieve columns that were excluded from the original change event. Debezium provides the following post processors: Reselect columns Re-selects specific columns that may not have been provided by the change event, such as TOASTed columns or Oracle LOB columns that were not modified by the current change. 9.1. Using the reselect columns post processor to add source fields to change event records To improve performance and reduce storage overhead, databases can use external storage for certain columns. This type of storage is used for columns that store large amounts of data, such as the PostgreSQL TOAST (The Oversized-Attribute Storage Technique), Oracle Large Object (LOB), or the Oracle Exadata Extended String data types. To reduce I/O overhead and increase query speed, when data changes in a table row, the database retrieves only the columns that contain new values, ignoring data in externally stored columns that remain unchanged. As a result, the value of the externally stored column is not recorded in the database log, and Debezium subsequently omits the column when it emits the event record. Downstream consumers that receive event records that omit required values can experience processing errors. If a value for an externally stored column is not present in the database log entry for an event, when Debezium emits a record for the event, it replaces the missing value with an unavailable.value.placeholder sentinel value. These sentinel values are inserted into appropriately typed fields, for example, a byte array for bytes, a string for strings, or a key-value map for maps. To retrieve data for columns that were not available in the initial query, you can apply the Debezium reselect columns post processor ( ReselectColumnsPostProcessor ). You can configure the post processor to reselect one or more columns from a table. After you configure the post processor, it monitors events that the connector emits for the column names that you designate for reselection. When it detects an event with the specified columns, the post processor re-queries the source tables to retrieve data for the specified columns, and fetches their current state. You can configure the post processor to reselect the following column types: null columns.
Columns that contain the unavailable.value.placeholder sentinel value. Note You can use the ReselectColumnsPostProcessor post processor only with Debezium source connectors. The post processor is not designed to work with the Debezium JDBC sink connector. For details about using the ReselectColumnsPostProcessor post processor, see the following topics: Section 9.1.1, "Use of the Debezium ReselectColumnsPostProcessor with keyless tables" Section 9.1.2, "Example: Debezium ReselectColumnsPostProcessor configuration" Section 9.1.3, "Descriptions of Debezium reselect columns post processor configuration properties" 9.1.1. Use of the Debezium ReselectColumnsPostProcessor with keyless tables The reselect columns post processor generates a reselect query that returns the row to be modified. To construct the WHERE clause for the query, by default, the post processor uses a relational table model that is based on the table's primary key columns or on the unique index that is defined for the table. For keyless tables, the SELECT query that ReselectColumnsPostProcessor submits might return multiple rows, in which case Debezium always uses only the first row. You cannot prioritize the order of the returned rows. To enable the post processor to return a consistently usable result for a keyless table, it's best to designate a custom key that can identify a unique row. The custom key must be capable of uniquely identifying records in the source table based on a combination of columns. To define such a custom message key, use the message.key.columns property in the connector configuration. After you define a custom key, set the reselect.use.event.key configuration property to true . Setting this option enables the post processor to use the specified event key fields as selection criteria in lieu of a primary key column. Be sure to test the configuration to ensure that the reselection query provides the expected results. 9.1.2. Example: Debezium ReselectColumnsPostProcessor configuration Configuring a post processor is similar to configuring a custom converter or single message transformation (SMT) . To enable the connector to use the ReselectColumnsPostProcessor , add the following entries to the connector configuration: "post.processors": "reselector", 1 "reselector.type": "io.debezium.processors.reselect.ReselectColumnsPostProcessor", 2 "reselector.reselect.columns.include.list": " <schema> . <table> : <column> , <schema> . <table> : <column> ", 3 "reselector.reselect.unavailable.values": "true", 4 "reselector.reselect.null.values": "true" 5 "reselector.reselect.use.event.key": "false" 6 Item Description 1 Comma-separated list of post processor prefixes. 2 The fully-qualified class type name for the post processor. 3 Comma-separated list of column names, specified by using the following format: <schema> . <table> : <column> . 4 Enables or disables reselection of columns that contain the unavailable.value.placeholder sentinel value. 5 Enables or disables reselection of columns that are null . 6 Enables or disables reselection based on event key field names. 9.1.3. Descriptions of Debezium reselect columns post processor configuration properties The following table lists the configuration options that you can set for the Reselect Columns post-processor. Table 9.1. Reselect columns post processor configuration options Property Default Description reselect.columns.include.list No default Comma-separated list of column names to reselect from the source database.
Use the following format to specify column names: <schema> . <table> : <column> Do not set this property if you set the reselect.columns.exclude.list property. reselect.columns.exclude.list No default Comma-separated list of column names in the source database to exclude from reselection. Use the following format to specify column names: <schema> . <table> : <column> Do not set this property if you set the reselect.columns.include.list property. reselect.unavailable.values true Specifies whether the post processor reselects a column that matches the reselect.columns.include.list filter if the column value is provided by the connector's unavailable.value.placeholder property. reselect.null.values true Specifies whether the post processor reselects a column that matches the reselect.columns.include.list filter if the column value is null . reselect.use.event.key false Specifies whether the post processor reselects based on the event's key field names or uses the relational table's primary key column names. By default, the reselection query is based on the relational table's primary key columns or unique key index. For tables that do not have a primary key, set this property to true , and configure the message.key.columns property in the connector configuration to specify a custom key for the connector to use when it creates events. The post processor then uses the specified key field names as the primary key in the SQL reselection query.
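For reference, a minimal, hypothetical registration sketch for a keyless table follows; the connector name, database coordinates, and the public.purchase_orders table with its order_number, order_date, and order_notes columns are assumptions for illustration only, and the JSON is posted to the standard Kafka Connect REST endpoint.

# Hypothetical sketch: a keyless table gets a custom message key so that
# ReselectColumnsPostProcessor can build its reselection query from the event key.
curl -X POST http://localhost:8083/connectors \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "inventory-connector",
    "config": {
      "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
      "database.hostname": "postgres",
      "database.port": "5432",
      "database.user": "debezium",
      "database.password": "dbz",
      "database.dbname": "inventory",
      "topic.prefix": "inventory",
      "message.key.columns": "public.purchase_orders:order_number,order_date",
      "post.processors": "reselector",
      "reselector.type": "io.debezium.processors.reselect.ReselectColumnsPostProcessor",
      "reselector.reselect.columns.include.list": "public.purchase_orders:order_notes",
      "reselector.reselect.use.event.key": "true"
    }
  }'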
[ "\"post.processors\": \"reselector\", 1 \"reselector.type\": \"io.debezium.processors.reselect.ReselectColumnsPostProcessor\", 2 \"reselector.reselect.columns.include.list\": \" <schema> . <table> : <column> , <schema> . <table> : <column> \", 3 \"reselector.reselect.unavailable.values\": \"true\", 4 \"reselector.reselect.null.values\": \"true\" 5 \"reselector.reselect.use.event.key\": \"false\" 6" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/debezium_user_guide/using-post-processors-to-modify-event-messages
Chapter 3. Red Hat build of OpenJDK 8.0.345 release notes
Chapter 3. Red Hat build of OpenJDK 8.0.345 release notes Review the following release note for an overview of changes from the Red Hat build of OpenJDK 8.0.345 patch release: Reverted disablement of changing the user.dir property In the Red Hat build of OpenJDK 8.0.342 release, the ability to edit the user.dir property was disabled. This change originated in the Red Hat build of OpenJDK 11 release. In the Red Hat build of OpenJDK 8.0.345 release, this change is reverted because of a possible impact to software that relies on settings made in earlier versions of Red Hat build of OpenJDK 8, which supported edits to the user.dir property. In the Red Hat build of OpenJDK 8.0.345 release, you can again make changes to the user.dir property. See JDK-8290832 (JDK Bug System)
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.342_and_8.0.345/openjdk-80345-release-notes_openjdk
Chapter 56. The Interceptor APIs
Chapter 56. The Interceptor APIs Abstract Interceptors implement the PhaseInterceptor interface, which extends the base Interceptor interface. This interface defines a number of methods that are used by the Apache CXF runtime to control interceptor execution and are not appropriate for application developers to implement. To simplify interceptor development, Apache CXF provides a number of abstract interceptor implementations that can be extended. Interfaces All of the interceptors in Apache CXF implement the base Interceptor interface shown in Example 56.1, "Base interceptor interface" . Example 56.1. Base interceptor interface The Interceptor interface defines the two methods that a developer needs to implement for a custom interceptor: handleMessage() The handleMessage() method does most of the work in an interceptor. It is called on each interceptor in a message chain and receives the contents of the message being processed. Developers implement the message processing logic of the interceptor in this method. For detailed information about implementing the handleMessage() method, see Section 58.2, "Processing messages" . handleFault() The handleFault() method is called on an interceptor when normal message processing has been interrupted. The runtime calls the handleFault() method of each invoked interceptor in reverse order as it unwinds an interceptor chain. For detailed information about implementing the handleFault() method, see Section 58.3, "Unwinding after an error" . Most interceptors do not directly implement the Interceptor interface. Instead, they implement the PhaseInterceptor interface shown in Example 56.2, "The phase interceptor interface" . The PhaseInterceptor interface adds four methods that allow an interceptor to participate in interceptor chains. Example 56.2. The phase interceptor interface Abstract interceptor class Instead of directly implementing the PhaseInterceptor interface, developers should extend the AbstractPhaseInterceptor class. This abstract class provides implementations for the phase management methods of the PhaseInterceptor interface. The AbstractPhaseInterceptor class also provides a default implementation of the handleFault() method. Developers need to provide an implementation of the handleMessage() method. They can also provide a different implementation for the handleFault() method. The developer-provided implementations can manipulate the message data using the methods provided by the generic org.apache.cxf.message.Message interface. For applications that work with SOAP messages, Apache CXF provides an AbstractSoapInterceptor class. Extending this class provides the handleMessage() method and the handleFault() method with access to the message data as an org.apache.cxf.binding.soap.SoapMessage object. SoapMessage objects have methods for retrieving the SOAP headers, the SOAP envelope, and other SOAP metadata from the message.
[ "package org.apache.cxf.interceptor; public interface Interceptor<T extends Message> { void handleMessage(T message) throws Fault; void handleFault(T message); }", "package org.apache.cxf.phase; public interface PhaseInterceptor<T extends Message> extends Interceptor<T> { Set<String> getAfter(); Set<String> getBefore(); String getId(); String getPhase(); }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFInterceptorImplClass
Chapter 4. Testing the setup
Chapter 4. Testing the setup Before the HA cluster setup is put into production, it needs to be thoroughly tested to verify that everything works as expected and also to allow the operators to get experience with how the HA cluster will behave in certain situations and how to bring the setup back to a healthy state in case a failure occurs. At least the following tests should be carried out: Perform a manual move of the primary SAP HANA instance via HA cluster commands. Expected result: a takeover should be triggered on the SAP HANA side, promoting the secondary SAP HANA instance to become the new primary SAP HANA instance. Depending on the configuration of the AUTOMATED_REGISTER parameter of the SAPHana resource, the HA cluster will either register the former primary instance as the new secondary automatically, or an operator will have to determine what should happen with the former primary instance. Crash the HA cluster node where the primary SAP HANA instance is running. Expected result: the HA cluster node should be fenced and a takeover should be triggered on the SAP HANA side, promoting the secondary SAP HANA instance to become the new primary SAP HANA instance. Depending on the configuration of the AUTOMATED_REGISTER parameter of the SAPHana resource, the HA cluster will either register the former primary instance as the new secondary automatically, or an operator will have to determine what should happen with the former primary instance. Manually stop the primary SAP HANA instance outside of the HA cluster. Expected result: a takeover should be triggered on the SAP HANA side, promoting the secondary SAP HANA instance to become the new primary SAP HANA instance. Depending on the configuration of the AUTOMATED_REGISTER parameter of the SAPHana resource, the HA cluster will either register the former primary instance as the new secondary automatically, or an operator will have to determine what should happen with the former primary instance. Crash the node where the secondary SAP HANA instance is running. Expected result: the HA cluster node should be fenced and the secondary SAP HANA instance should be started when the HA cluster node is back online, and SAP HANA System Replication should resume. Manually stop the secondary SAP HANA instance outside of the HA cluster. Expected result: the secondary SAP HANA instance should be restarted by the HA cluster. Disable the network connection used by SAP HANA System Replication. Expected result: the HA cluster should detect that a failure of SAP HANA System Replication has occurred, but should keep the SAP HANA instances running on both nodes.
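As a rough sketch only, the tests above might be driven with commands such as the following; the resource name SAPHana_RH2_02-clone, the node name node2, the SID RH2, and the instance number 02 are placeholder assumptions, and the exact pcs options can differ between RHEL HA Add-On releases.

# Test 1: manually move the primary SAP HANA instance via the HA cluster
# (resource and node names are examples)
pcs resource move SAPHana_RH2_02-clone node2
pcs resource clear SAPHana_RH2_02-clone    # remove the temporary location constraint afterwards

# Tests 2 and 4: crash the node that currently runs the primary (or secondary) instance
echo c > /proc/sysrq-trigger               # run on the node to be crashed

# Tests 3 and 5: stop an SAP HANA instance outside of the HA cluster, as the <sid>adm user
su - rh2adm -c "HDB stop"

# Check the cluster state and SAP HANA System Replication status after each test
pcs status --full
su - rh2adm -c "python /usr/sap/RH2/HDB02/exe/python_support/systemReplicationStatus.py"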
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/automating_sap_hana_scale-up_system_replication_using_the_rhel_ha_add-on/asmb_testing_setup_v9-automating-sap-hana-scale-up-system-replication
Appendix A. Troubleshooting
Appendix A. Troubleshooting This chapter covers common problems and solutions for Red Hat Enterprise Linux 7 virtualization issues. Read this chapter to develop an understanding of some of the common problems associated with virtualization technologies. It is recommended that you experiment and test virtualization on Red Hat Enterprise Linux 7 to develop your troubleshooting skills. If you cannot find the answer in this document, there may be an answer online from the virtualization community. See Section D.1, "Online Resources" for a list of Linux virtualization websites. In addition, you will find further information on troubleshooting virtualization in RHEL 7 in the Red Hat Knowledgebase . A.1. Debugging and Troubleshooting Tools This section summarizes the system administrator applications, the networking utilities, and debugging tools. You can use these standard system administration tools and logs to assist with troubleshooting: kvm_stat - Retrieves KVM runtime statistics. For more information, see Section A.4, "kvm_stat" . ftrace - Traces kernel events. For more information, see the What is ftrace and how do I use it? solution article (subscription required) . vmstat - Displays virtual memory statistics. For more information, use the man vmstat command. iostat - Provides I/O load statistics. For more information, see the Red Hat Enterprise Linux Performance Tuning Guide lsof - Lists open files. For more information, use the man lsof command. systemtap - A scripting utility for monitoring the operating system. For more information, see the Red Hat Enterprise Linux Developer Guide . crash - Analyzes kernel crash dump data or a live system. For more information, see the Red Hat Enterprise Linux Kernel Crash Dump Guide . sysrq - A key combination that the kernel responds to even if the console is unresponsive. For more information, see the Red Hat Knowledge Base . These networking utilities can assist with troubleshooting virtualization networking problems: ip addr , ip route , and ip monitor tcpdump - diagnoses packet traffic on a network. This command is useful for finding network abnormalities and problems with network authentication. There is a graphical version of tcpdump , named wireshark . brctl - A networking utility that inspects and configures the Ethernet bridge configuration in the Linux kernel. For example: Listed below are some other useful commands for troubleshooting virtualization: strace is a command which traces system calls and events received and used by another process. vncviewer connects to a VNC server running on your server or a virtual machine. Install vncviewer using the yum install tigervnc command. vncserver starts a remote desktop on your server. Gives you the ability to run graphical user interfaces, such as virt-manager, using a remote session. Install vncserver using the yum install tigervnc-server command. In addition to all the commands listed above, examining virtualization logs can be helpful. For more information, see Section A.6, "Virtualization Logs" .
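As a brief, hypothetical illustration of how a few of these tools might be combined when investigating a guest networking or performance problem (the virbr0 bridge name and the sampling intervals are examples only):

# Display KVM runtime statistics (interactive; press q or Ctrl+C to exit)
kvm_stat

# Watch traffic crossing the default libvirt bridge
tcpdump -i virbr0 -nn

# Sample host memory and block I/O pressure every 5 seconds, 3 times
vmstat 5 3
iostat -x 5 3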
[ "brctl show bridge-name bridge-id STP enabled interfaces ----------------------------------------------------------------------------- virtbr0 8000.feffffff yes eth0 brctl showmacs virtbr0 port-no mac-addr local? aging timer 1 fe:ff:ff:ff:ff: yes 0.00 2 fe:ff:ff:fe:ff: yes 0.00 brctl showstp virtbr0 virtbr0 bridge-id 8000.fefffffffff designated-root 8000.fefffffffff root-port 0 path-cost 0 max-age 20.00 bridge-max-age 20.00 hello-time 2.00 bridge-hello-time 2.00 forward-delay 0.00 bridge-forward-delay 0.00 aging-time 300.01 hello-timer 1.43 tcn-timer 0.00 topology-change-timer 0.00 gc-timer 0.02" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/appe-troubleshooting
Chapter 5. Multi-site configuration and administration
Chapter 5. Multi-site configuration and administration As a storage administrator, you can configure and administer multiple Ceph Object Gateways for a variety of use cases. You can learn what to do during disaster recovery and failover events. Also, you can learn more about realms, zones, and syncing policies in multi-site Ceph Object Gateway environments. A single zone configuration typically consists of one zone group containing one zone and one or more ceph-radosgw instances where you may load-balance gateway client requests between the instances. In a single zone configuration, typically multiple gateway instances point to a single Ceph storage cluster. However, Red Hat supports several multi-site configuration options for the Ceph Object Gateway: Multi-zone: A more advanced configuration consists of one zone group and multiple zones, each zone with one or more ceph-radosgw instances. Each zone is backed by its own Ceph Storage Cluster. Multiple zones in a zone group provide disaster recovery for the zone group should one of the zones experience a significant failure. Each zone is active and may receive write operations. In addition to disaster recovery, multiple active zones may also serve as a foundation for content delivery networks. Multi-zone-group: Formerly called 'regions', the Ceph Object Gateway can also support multiple zone groups, each zone group with one or more zones. Objects stored to zone groups within the same realm share a global namespace, ensuring unique object IDs across zone groups and zones. Multiple Realms: The Ceph Object Gateway supports the notion of realms, which can be a single zone group or multiple zone groups and a globally unique namespace for the realm. Multiple realms provide the ability to support numerous configurations and namespaces. Prerequisites A healthy running Red Hat Ceph Storage cluster. Deployment of the Ceph Object Gateway software. 5.1. Requirements and Assumptions A multi-site configuration requires at least two Ceph storage clusters, and at least two Ceph object gateway instances, one for each Ceph storage cluster. This guide assumes at least two Ceph storage clusters in geographically separate locations; however, the configuration can work on the same physical site. This guide also assumes four Ceph object gateway servers named rgw1 , rgw2 , rgw3 and rgw4 respectively. A multi-site configuration requires a master zone group and a master zone. Additionally, each zone group requires a master zone. Zone groups might have one or more secondary or non-master zones. Important When planning network considerations for multi-site, it is important to understand the relationship between the bandwidth and latency observed on the multi-site synchronization network and the client ingest rate, because they directly affect the current sync state of the objects owed to the secondary site. The network link between Red Hat Ceph Storage multi-site clusters must be able to handle the ingest into the primary cluster to maintain an effective recovery time on the secondary site. Multi-site synchronization is asynchronous and one of the limitations is the rate at which the sync gateways can process data across the link. An example to look at in terms of network inter-connectivity speed could be 1 GbE or inter-datacenter connectivity, for every 8 TB of cumulative receive data, per client gateway. Thus, if you replicate to two other sites, and ingest 16 TB a day, you need 6 GbE of dedicated bandwidth for multi-site replication.
Red Hat also recommends private Ethernet or Dense wavelength-division multiplexing (DWDM) as a VPN over the internet is not ideal due to the additional overhead incurred. Important The master zone within the master zone group of a realm is responsible for storing the master copy of the realm's metadata, including users, quotas and buckets (created by the radosgw-admin CLI). This metadata gets synchronized to secondary zones and secondary zone groups automatically. Metadata operations executed with the radosgw-admin CLI MUST be executed on a host within the master zone of the master zone group in order to ensure that they get synchronized to the secondary zone groups and zones. Currently, it is possible to execute metadata operations on secondary zones and zone groups, but it is NOT recommended because they WILL NOT be synchronized, leading to fragmented metadata. Note For new Ceph Object Gateway deployment in multi-site, it takes around 20 minutes to sync metadata operations to the secondary site. In the following examples, the rgw1 host will serve as the master zone of the master zone group; the rgw2 host will serve as the secondary zone of the master zone group; the rgw3 host will serve as the master zone of the secondary zone group; and the rgw4 host will serve as the secondary zone of the secondary zone group. Important Red Hat recommends to use load balancer and three Ceph Object Gateway daemons to have sync end points with multi-site. For the non-syncing Ceph Object Gateway nodes in a multi-site configuration, which are dedicated for client I/O operations through load balancers, run the ceph config set client.rgw.CLIENT_NODE rgw_run_sync_thread false command to prevent them from performing sync operations, and then restart the Ceph Object Gateway. Following is a typical configuration file for HAProxy for syncing gateways: Example 5.2. Pools Red Hat recommends using the Ceph Placement Group's per Pool Calculator to calculate a suitable number of placement groups for the pools the radosgw daemon will create. Set the calculated values as defaults in the Ceph configuration database. Example Note Making this change to the Ceph configuration will use those defaults when the Ceph Object Gateway instance creates the pools. Alternatively, you can create the pools manually. Pool names particular to a zone follow the naming convention ZONE_NAME . POOL_NAME . For example, a zone named us-east will have the following pools: .rgw.root us-east.rgw.control us-east.rgw.meta us-east.rgw.log us-east.rgw.buckets.index us-east.rgw.buckets.data us-east.rgw.buckets.non-ec us-east.rgw.meta:users.keys us-east.rgw.meta:users.email us-east.rgw.meta:users.swift us-east.rgw.meta:users.uid Additional Resources See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for details on creating pools. 5.3. Migrating a single site system to multi-site To migrate from a single site system with a default zone group and zone to a multi-site system, use the following steps: Create a realm. Replace REALM_NAME with the realm name. Syntax Rename the default zone and zonegroup. Replace NEW_ZONE_GROUP_NAME and NEW_ZONE_NAME with the zonegroup and zone name respectively. Syntax Rename the default zonegroup's api_name . Replace NEW_ZONE_GROUP_NAME with the zonegroup name. The api_name field in the zonegroup map refers to the name of the RADOS API used for data replication across different zones. 
This field helps clients interact with the correct APIs for accessing and managing data within the Ceph storage cluster. Syntax Configure the primary zonegroup. Replace NEW_ZONE_GROUP_NAME with the zonegroup name and REALM_NAME with the realm name. Replace ENDPOINT with the fully qualified domain names in the zonegroup. Syntax Configure the primary zone. Replace REALM_NAME with the realm name, NEW_ZONE_GROUP_NAME with the zonegroup name, NEW_ZONE_NAME with the zone name, and ENDPOINT with the fully qualified domain names in the zonegroup. Syntax Create a system user. Replace USER_ID with the username. Replace DISPLAY_NAME with a display name. It can contain spaces. Syntax Commit the updated configuration: Example Grep for the rgw service name: Syntax Set up the configurations for the realm, zonegroup, and the primary zone. Syntax Example Restart the Ceph Object Gateway: Example Syntax Example Note Restoration from a multisite to a single (default zone) site is not supported. 5.4. Establishing a secondary zone Zones within a zone group replicate all data to ensure that each zone has the same data. When creating the secondary zone, issue ALL of the radosgw-admin zone operations on a host identified to serve the secondary zone. Note To add additional zones, follow the same procedures as for adding the secondary zone. Use a different zone name. Important Run the metadata operations, such as user creation and quotas, on a host within the master zone of the master zonegroup. The master zone and the secondary zone can receive bucket operations from the RESTful APIs, but the secondary zone redirects bucket operations to the master zone. If the master zone is down, bucket operations will fail. If you create a bucket using the radosgw-admin CLI, you must run it on a host within the master zone of the master zone group so that the buckets will synchronize with other zone groups and zones. Bucket creation for a particular user is not supported, even if you create a user in the secondary zone with --yes-i-really-mean-it . Prerequisites At least two running Red Hat Ceph Storage clusters. At least two Ceph Object Gateway instances, one for each Red Hat Ceph Storage cluster. Root-level access to all the nodes. Nodes or containers are added to the storage cluster. All Ceph Manager, Monitor, and OSD daemons are deployed. Procedure Log into the cephadm shell: Example Pull the primary realm configuration from the host: Syntax Example Pull the primary period configuration from the host: Syntax Example Configure a secondary zone: Note All zones run in an active-active configuration by default; that is, a gateway client might write data to any zone and the zone will replicate the data to all other zones within the zone group. If the secondary zone should not accept write operations, specify the --read-only flag to create an active-passive configuration between the master zone and the secondary zone. Additionally, provide the access_key and secret_key of the generated system user stored in the master zone of the master zone group. Syntax Example Optional: Delete the default zone: Important Do not delete the default zone and its pools if you are using the default zone and zone group to store data. Example Update the Ceph configuration database: Syntax Example Commit the changes: Syntax Example Outside the cephadm shell, fetch the FSID of the storage cluster and the processes: Example Start the Ceph Object Gateway daemon: Syntax Example 5.5.
Configuring the archive zone Note Ensure you have a realm before configuring a zone as an archive. Without a realm, you cannot archive data through an archive zone for default zone/zonegroups. Important The object storage archive zone in Red Hat Ceph Storage 7.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Archive Object data residing on Red Hat Ceph Storage using the Object Storage Archive Zone Feature. The archive zone uses the multi-site replication and S3 object versioning features in Ceph Object Gateway. The archive zone retains all versions of all the objects available, even when they are deleted in the production site. The archive zone has a history of versions of S3 objects that can only be eliminated through the gateways that are associated with the archive zone. It captures all the data updates and metadata to consolidate them as versions of S3 objects. Bucket granular replication to the archive zone can be used after creating an archive zone. You can control the storage space usage of an archive zone through the bucket Lifecycle policies, where you can define the number of versions you would like to keep for an object. An archive zone helps protect your data against logical or physical errors. It can save users from logical failures, such as accidentally deleting a bucket in the production zone. It can also save your data from massive hardware failures, like a complete production site failure. Additionally, it provides an immutable copy, which can help build a ransomware protection strategy. To implement the bucket granular replication, use the sync policy commands for enabling and disabling policies. See Creating a sync policy group and Modifying a sync policy group for more information. Note Using the sync policy group procedures is optional and only necessary for enabling and disabling bucket granular replication. For using the archive zone without bucket granular replication, it is not necessary to use the sync policy procedures. If you want to migrate the storage cluster from a single site, see Migrating a single site system to multi-site . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor node. Installation of the Ceph Object Gateway software. Procedure During new zone creation, use the archive tier to configure the archive zone. Syntax Example From the archive zone, modify the archive zone to sync from only the primary zone and perform a period update commit. Syntax Note The recommendation is to reduce the max_objs_per_shard to 50K to account for the omap olh entries in the archive zone. This helps in keeping the number of omap entries per bucket index shard object in check to prevent large omap warnings. For example, Additional resources See the Deploying a multi-site Ceph Object Gateway using the Ceph Orchestrator section in the Red Hat Ceph Storage Object Gateway Guide for more details. 5.5.1. Deleting objects in archive zone You can use an S3 lifecycle policy extension to delete objects within an <ArchiveZone> element. Important Archive zone objects can only be deleted using the expiration lifecycle policy rule.
If any <Rule> section contains an <ArchiveZone> element, that rule executes in the archive zone, and rules with this element are the ONLY rules that run in an archive zone. Rules marked <ArchiveZone> do NOT execute in non-archive zones. The rules within the lifecycle policy determine when and what objects to delete. For more information about lifecycle creation and management, see Bucket lifecycle . Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor node. Installation of the Ceph Object Gateway software. Procedure Set the <ArchiveZone> lifecycle policy rule. For more information about creating a lifecycle policy, see the Creating a lifecycle management policy section in the Red Hat Ceph Storage Object Gateway Guide. Example Optional: See if a specific lifecycle policy contains an archive zone rule. Syntax Example 1 1 The archive zone rule. This is an example of a lifecycle policy with an archive zone rule. If the Ceph Object Gateway user is deleted, the buckets at the archive site owned by that user are inaccessible. Link those buckets to another Ceph Object Gateway user to access the data. Syntax Example Additional resources See the Bucket lifecycle section in the Red Hat Ceph Storage Object Gateway Guide for more details. See the S3 bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for more details. 5.5.2. Failover and disaster recovery Recover your data from different failover and disaster scenarios. Use any of the following procedures, according to your needs: Primary zone failure with a failover to the secondary zone for disaster recovery Switching archive zone sync from primary to secondary zone with I/Os in progress Archive zone syncs only from primary zone with primary zone failure Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor node. Installation of the Ceph Object Gateway software. 5.5.2.1. Primary zone failure with a failover to the secondary zone for disaster recovery Procedure Make the secondary zone the primary and default zone. For example: Syntax By default, Ceph Object Gateway runs in an active-active configuration. If the cluster was configured to run in an active-passive configuration, the secondary zone is a read-only zone. Remove the --read-only status to allow the zone to receive write operations. For example: Syntax Update the period to make the changes take effect: Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. Restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example Restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example If the former primary zone recovers, revert the operation. From the recovered zone, pull the realm from the current primary zone: Syntax Make the recovered zone the primary and default zone: Syntax Update the period to make the changes take effect: Example Restart the Ceph Object Gateway in the recovered zone: Syntax Example If the secondary zone needs to be a read-only configuration, update the secondary zone: Syntax Update the period to make the changes take effect: Example Restart the Ceph Object Gateway in the secondary zone: Syntax Example 5.5.2.2. Switching Archive Zone sync from primary to secondary with I/Os in progress Procedure Switch the sync source to the secondary zone: Syntax Verify the sync status and data consistency in the archive zone.
Switch the sync source back to the primary zone: Syntax Verify the sync status and data consistency in the archive zone after the switch. As a result: The archive zone starts syncing from the secondary zone after the first modification. Data consistency is maintained throughout the switch. Upon switching back to the primary zone, the archive zone resumes syncing from the primary zone without data loss or corruption. Sync remains consistent in all the zones. 5.5.2.3. Archive Zone Syncs Only from Primary Zone with Primary Zone Failure Procedure Ensure the archive zone is set to sync only from the primary zone. Stop the primary zone gateways to simulate a primary zone failure. Failover to the secondary zone and perform the period update commit. Observe the sync status of the archive zone. Post a time interval of about 30 minutes, restart the primary zone gateways. Verify that the archive zone resumes syncing from the primary zone. As a result: The archive zone stops syncing when the primary zone fails. After restoring the primary zone, the archive zone automatically resumes syncing from the primary zone. Data integrity and sync status is maintained throughout the process. 5.6. Configuring multiple realms in the same storage cluster You can configure multiple realms in the same storage cluster. This is a more advanced use case for multi-site. Configuring multiple realms in the same storage cluster enables you to use a local realm to handle local Ceph Object Gateway client traffic, as well as a replicated realm for data that will be replicated to a secondary site. Note Red Hat recommends that each realm has its own Ceph Object Gateway. Prerequisites Two running Red Hat Ceph Storage data centers in a storage cluster. The access key and secret key for each data center in the storage cluster. Root-level access to all the Ceph Object Gateway nodes. Each data center has its own local realm. They share a realm that replicates on both sites. Procedure Create one local realm on the first data center in the storage cluster: Syntax Example Create one local master zonegroup on the first data center: Syntax Example Create one local zone on the first data center: Syntax Example Commit the period: Example You can either deploy the Ceph Object Gateway daemons with the appropriate realm and zone or update the configuration database: Deploy the Ceph Object Gateway using placement specification: Syntax Example Update the Ceph configuration database: Syntax Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Create one local realm on the second data center in the storage cluster: Syntax Example Create one local master zonegroup on the second data center: Syntax Example Create one local zone on the second data center: Syntax Example Commit the period: Example You can either deploy the Ceph Object Gateway daemons with the appropriate realm and zone or update the configuration database: Deploy the Ceph Object Gateway using placement specification: Syntax Example Update the Ceph configuration database: Syntax Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. 
To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Create a replicated realm on the first data center in the storage cluster: Syntax Example Use the --default flag to make the replicated realm default on the primary site. Create a master zonegroup for the first data center: Syntax Example Create a master zone on the first data center: Syntax Example Create a synchronization user and add the system user to the master zone for multi-site: Syntax Example Commit the period: Example You can either deploy the Ceph Object Gateway daemons with the appropriate realm and zone or update the configuration database: Deploy the Ceph Object Gateway using placement specification: Syntax Example Update the Ceph configuration database: Syntax Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Pull the replicated realm on the second data center: Syntax Example Pull the period from the first data center: Syntax Example Create the secondary zone on the second data center: Syntax Example Commit the period: Example You can either deploy the Ceph Object Gateway daemons with the appropriate realm and zone or update the configuration database: Deploy the Ceph Object Gateway using placement specification: Syntax Example Update the Ceph configuration database: Syntax Example Restart the Ceph Object Gateway. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Log in as root on the endpoint for the second data center. Verify the synchronization status on the master realm: Syntax Example Log in as root on the endpoint for the first data center. Verify the synchronization status for the replication-synchronization realm: Syntax Example To store and access data in the local site, create the user for the local realm: Syntax Example Important By default, users are created under the default realm. For the users to access data in the local realm, the radosgw-admin command requires the --rgw-realm argument. 5.7. Using multi-site sync policies As a storage administrator, you can use multi-site sync policies at the bucket level to control data movement between buckets in different zones. These policies are called bucket-granularity sync policies. Previously, all buckets within zones were treated symmetrically. This means that each zone contained a mirror copy of a given bucket, and the copies of buckets were identical in all of the zones. The sync process assumed that the bucket sync source and the bucket sync destination referred to the same bucket. Important Bucket sync policies apply to data only, and metadata is synced across all the zones in the multi-site irrespective of the presence of the bucket sync policies. Objects that were created, modified, or deleted while the bucket sync policy was in the allowed or forbidden state do not automatically sync when the policy takes effect. Run the bucket sync run command to sync these objects.
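For example, a manual catch-up sync for a single bucket might look like the following; the bucket name is an example only.

# Sync objects that were written while the bucket sync policy was allowed or forbidden
radosgw-admin bucket sync run --bucket=mybucket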
Important If there are multiple sync policies defined at the zonegroup level, only one policy can be in the enabled state at any time. You can toggle between policies if needed. The sync policy supersedes the old zone group coarse configuration ( sync_from* ). The sync policy can be configured at the zone group level. If it is configured, it replaces the old-style configuration at the zone group level, but it can also be configured at the bucket level. Important The bucket sync policies are applicable to the archive zones. The movement is not bidirectional: all objects can be moved from the active zone to the archive zone, but you cannot move objects from the archive zone back to the active zone because the archive zone is read-only. Example for bucket sync policy for zone groups Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to a Ceph Monitor node. Installation of the Ceph Object Gateway software. 5.7.1. Multi-site sync policy group state In the sync policy, multiple groups can be defined, each of which can contain lists of data-flow configurations as well as lists of pipe configurations. The data-flow defines the flow of data between the different zones. It can define symmetrical data flow, in which multiple zones sync data from each other, and it can define directional data flow, in which the data moves in one way from one zone to another. A pipe defines the actual buckets that can use these data flows, and the properties that are associated with it, such as the source object prefix. A sync policy group can be in one of three states: enabled - sync is allowed and enabled. allowed - sync is allowed. forbidden - sync, as defined by this group, is not allowed. When the zones replicate, you can disable replication for specific buckets using the sync policy. The following are the semantics that need to be followed to resolve the policy conflicts (zonegroup state / bucket state = resulting sync state): enabled/enabled = enabled; enabled/allowed = enabled; enabled/forbidden = disabled; allowed/enabled = enabled; allowed/allowed = disabled; allowed/forbidden = disabled; forbidden/enabled = disabled; forbidden/allowed = disabled; forbidden/forbidden = disabled. For multiple group policies that are set for any sync pair ( SOURCE_ZONE , SOURCE_BUCKET ), ( DESTINATION_ZONE , DESTINATION_BUCKET ), the following rules are applied in the following order: Even if one sync policy is forbidden , the sync is disabled . At least one policy should be enabled for the sync to be allowed . Sync states in this group can override other groups. A policy can be defined at the bucket level. A bucket level sync policy inherits the data flow of the zonegroup policy, and can only define a subset of what the zonegroup allows. A wildcard zone and a wildcard bucket parameter in the policy define all relevant zones or all relevant buckets. In the context of a bucket policy, it means the current bucket instance. A disaster recovery configuration where entire zones are mirrored does not require configuring anything on the buckets. However, for a fine grained bucket sync it would be better to configure the pipes to be synced by allowing ( status=allowed ) them at the zonegroup level (for example, by using a wildcard). However, enable the specific sync at the bucket level ( status=enabled ) only. If needed, the policy at the bucket level can limit the data movement to specific relevant zones. Important Any changes to the zonegroup policy need to be applied on the zonegroup master zone, and require period update and commit.
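For example, after editing a zonegroup-level policy on the master zone of the master zonegroup, the change is typically propagated with a period update and commit along these lines:

# Apply zonegroup sync policy changes across the realm
radosgw-admin period update --commit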
Changes to the bucket policy need to be applied on the zonegroup master zone. Ceph Object Gateway handles these changes dynamically. S3 bucket replication API The S3 bucket replication API is implemented, and allows users to create replication rules between different buckets. Note though that while the AWS replication feature allows bucket replication within the same zone, Ceph Object Gateway does not allow it at the moment. However, the Ceph Object Gateway API also added a Zone array that allows users to select to what zones the specific bucket will be synced. Additional Resources See S3 bucket replication API for more details. 5.7.2. Retrieving the current policy You can use the get command to retrieve the current zonegroup sync policy, or a specific bucket policy. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Retrieve the current zonegroup sync policy or bucket policy. To retrieve a specific bucket policy, use the --bucket option: Syntax Example 5.7.3. Creating a sync policy group You can create a sync policy group for the current zone group, or for a specific bucket. When creating a sync policy for bucket granular replication for a sync policy group that has changed from forbidden to enabled , a manual update might be necessary to complete the sync process. For example, if any data is written to bucket1 when its policy is forbidden , the data might not sync properly across zones after the policy is changed to enabled . To properly sync the changes, run the bucket sync run command on the sync policy. This step is also necessary if the bucket is resharded when the policy is forbidden . In this case the bucket sync run command must also be used after enabling the policy. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. When creating for an archive zone, be sure that the archive zone is created before the sync policy group. Procedure Create a sync policy group or a bucket policy. To create a bucket policy, use the --bucket option: Syntax Example Optional: Manually complete the sync process for bucket granular replication. Note This step is mandatory when using as part of an archive zone with bucket granular replication if the policy has data written or the bucket was resharded when the policy was forbidden . Syntax Example Additional Resources For more information about configuring an archive zone and bucket granular replication, see Configuring the archive zone . 5.7.4. Modifying a sync policy group You can modify an existing sync policy group for the current zone group, or for a specific bucket. When modifying a sync policy for bucket granular replication for a sync policy group that has changed from forbidden to enabled , a manual update might be necessary in order to complete the sync process. For example, if any data is written to bucket1 when its policy is forbidden , the data might not sync properly across zones after the policy is changed to enabled . To properly sync the changes, run the bucket sync run command on the sync policy. This step is also necessary if the bucket is resharded when the policy is forbidden . In this case the bucket sync run command must also be used after enabling the policy. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. When modifying for an archive zone, be sure that the archive zone is created before the sync policy group. 
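For orientation before the modify procedure, a minimal sketch of the create-and-catch-up workflow from the previous section, assuming a hypothetical group ID group1 and bucket buck1:
# group1 and buck1 are placeholder names.
radosgw-admin sync group create --bucket=buck1 --group-id=group1 --status=enabled
radosgw-admin bucket sync run --bucket=buck1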
Procedure Modify the sync policy group or a bucket policy. To modify a bucket policy, use the --bucket option. Syntax Example Optional: Manually complete the sync process for bucket granular replication. Note This step is mandatory when using as part of an archive zone with bucket granular replication if the policy has data written or the bucket was resharded when the policy was forbidden . Syntax Example Additional Resources For more information about configuring an archive zone and bucket granular replication, see Configuring the archive zone . 5.7.5. Getting a sync policy group You can use the group get command to show the current sync policy group by group ID, or to show a specific bucket policy. If the --bucket option is not provided, the groups created at zonegroup level is retrieved and not the groups at the bucket-level. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Show the current sync policy group or bucket policy. To show a specific bucket policy, use the --bucket option: Syntax Example 5.7.6. Removing a sync policy group You can use the group remove command to remove the current sync policy group by group ID, or to remove a specific bucket policy. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Remove the current sync policy group or bucket policy. To remove a specific bucket policy, use the --bucket option: Syntax Example 5.7.7. Creating a sync flow You can create two different types of flows for a sync policy group or for a specific bucket: Directional sync flow Symmetrical sync flow The group flow create command creates a sync flow. If you issue the group flow create command for a sync policy group or bucket that already has a sync flow, the command overwrites the existing settings for the sync flow and applies the settings you specify. Option Description Required/Optional --bucket Name of the bucket to which the sync policy needs to be configured. Used only in bucket-level sync policy. Optional --group-id ID of the sync group. Required --flow-id ID of the flow. Required --flow-type Types of flows for a sync policy group or for a specific bucket - directional or symmetrical. Required --source-zone To specify the source zone from which sync should happen. Zone that send data to the sync group. Required if flow type of sync group is directional. Optional --dest-zone To specify the destination zone to which sync should happen. Zone that receive data from the sync group. Required if flow type of sync group is directional. Optional --zones Zones that part of the sync group. Zones mention will be both sender and receiver zone. Specify zones separated by ",". Required if flow type of sync group is symmetrical. Optional Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Create or update a directional sync flow. To create or update directional sync flow for a specific bucket, use the --bucket option. Syntax Create or update a symmetrical sync flow. To specify multiple zones for a symmetrical flow type, use a comma-separated list for the --zones option. Syntax zones are comma-separated lists of all zones that need to be added to the flow. 5.7.8. Removing sync flows and zones The group flow remove command removes sync flows or zones from a sync policy group or bucket. 
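A minimal sketch of the removal command introduced here, assuming a hypothetical group ID group1, flow ID flow-mirror, and a symmetrical flow; the required options for each flow type are described below:
# group1 and flow-mirror are placeholder names; for a symmetrical flow,
# --zones lists the zone(s) to remove from the flow.
radosgw-admin sync group flow remove --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones=us-west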
For sync policy groups or buckets using directional flows, group flow remove command removes the flow. For sync policy groups or buckets using symmetrical flows, you can use the group flow remove command to remove specified zones from the flow, or to remove the flow. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Remove a directional sync flow. To remove the directional sync flow for a specific bucket, use the --bucket option. Syntax Remove specific zones from a symmetrical sync flow. To remove multiple zones from a symmetrical flow, use a comma-separated list for the --zones option. Syntax Remove a symmetrical sync flow. To remove the sync flow at the zonegroup level, remove the --bucket option. Syntax 5.7.9. Creating or modifying a sync group pipe As a storage administrator, you can define pipes to specify which buckets can use your configured data flows and the properties associated with those data flows. The sync group pipe create command enables you to create pipes, which are custom sync group data flows between specific buckets or groups of buckets, or between specific zones or groups of zones. This command uses the following options: Option Description Required/Optional --bucket Name of the bucket to which sync policy need to be configured. Used only in bucket-level sync policy. Optional --group-id ID of the sync group Required --pipe-id ID of the pipe Required --source-zones Zones that send data to the sync group. Use single quotes (') for value. Use commas to separate multiple zones. Use the wildcard * for all zones that match the data flow rules. Required --source-bucket Bucket or buckets that send data to the sync group. If bucket name is not mentioned, then * (wildcard) is taken as the default value. At bucket-level, source bucket will be the bucket for which the sync group created and at zonegroup-level, source bucket will be all buckets. Optional --source-bucket-id ID of the source bucket. Optional --dest-zones Zone or zones that receive the sync data. Use single quotes (') for value. Use commas to separate multiple zones. Use the wildcard * for all zones that match the data flow rules. Required --dest-bucket Bucket or buckets that receive the sync data. If bucket name is not mentioned, then * (wildcard) is taken as the default value. At bucket-level, destination bucket will be the bucket for which the sync group is created and at zonegroup-level, destination bucket will be all buckets Optional --dest-bucket-id ID of the destination bucket. Optional --prefix Bucket prefix. Use the wildcard * to filter for source objects. Optional --prefix-rm Do not use bucket prefix for filtering. Optional --tags-add Comma-separated list of key=value pairs. Optional --tags-rm Removes one or more key=value pairs of tags. Optional --dest-owner Destination owner of the objects from source. Optional --storage-class Destination storage class for the objects from source. Optional --mode Use system for system mode or user for user mode. Optional --uid Used for permissions validation in user mode. Specifies the user ID under which the sync operation will be issued. Optional Note If you want to enable/disable sync for a specific bucket at a zonegroup level, set the zonegroup level sync policy to enable/disable and create a pipe for each bucket with --source-bucket and --dest-bucket with the same bucket name or with bucket-id , i.e, --source-bucket-id and --dest-bucket-id . Prerequisites A running Red Hat Ceph Storage cluster. 
Root or sudo access. The Ceph Object Gateway is installed. Procedure Create the sync group pipe. The create command is also used to update a command by creating the sync group pipe with only the relevant options. Syntax 5.7.10. Modifying or deleting a sync group pipe As a storage administrator, you can use the sync group pipe modify command or sync group pipe remove command to modify the sync group pipe by removing certain options. You can also use sync group pipe remove command to remove zones, buckets, or the sync group pipe completely. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. Procedure Modify the sync group pipe options with the modify argument. Syntax Note Ensure to put the zones in single quotes ('). The source bucket does not need the quotes. Example Modify the sync group pipe options with the remove argument. Syntax Example Delete a sync group pipe. Syntax Example 5.7.11. Obtaining information about sync operations The sync info command enables you to get information about the expected sync sources and targets, as defined by the sync policy. When you create a sync policy for a bucket, that policy defines how data moves from that bucket toward a different bucket in a different zone. Creating the policy also creates a list of bucket dependencies that are used as hints whenever that bucket syncs with another bucket. Note that a bucket can refer to another bucket without actually syncing to it, since syncing depends on whether the data flow allows the sync to take place. Both the --bucket and effective-zone-name parameters are optional. If you invoke the sync info command without specifying any options, the Object Gateway returns all of the sync operations defined by the sync policy in all zones. Prerequisites A running Red Hat Ceph Storage cluster. Root or sudo access. The Ceph Object Gateway is installed. A group sync policy is defined. Procedure Get information about sync operations for buckets: Syntax Get information about sync operations at the zonegroup level: Syntax 5.8. Bucket granular sync policies The following features are now supported: Greenfield deployment : This release supports new multi-site deployments. To set up bucket granular sync replication, a new zonegroup/zone must be configured at a minimum. Brownfield deployment : Migrate or upgrade Ceph Object Gateway multi-site replication configurations to the newly featured Ceph Object Gateway bucket granular sync policy replication. Note Ensure that all the nodes in the storage cluster are in the same schema during the upgrade. Data flow - directional, symmetrical : Both unidirectional and bi-directional/symmetrical replication can be configured. Important In this release, the following features are not supported: Source filters Storage class Destination owner translation User mode When the sync policy of bucket or zonegroup, moves from disabled to enabled state, the below behavioral changes are observed: Normal scenario : Zonegroup level: Data written when the sync policy is disabled catches up as soon as it's enabled , with no additional steps. Bucket level: Data written when the sync policy is disabled does not catch up when the policy is enabled . In this case, either one of the below two workarounds can be applied: Writing new data to the bucket re-synchronizes the old data. Executing bucket sync run command syncs all the old data. 
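To check whether older objects have caught up after a policy is enabled, the bucket sync status and bucket sync run commands can be used. A minimal sketch, assuming a hypothetical bucket named buck1:
# buck1 is a placeholder bucket name.
radosgw-admin bucket sync status --bucket=buck1
radosgw-admin bucket sync run --bucket=buck1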
Note When you want to toggle from the sync policy to the legacy policy, you need to first run the sync init command followed by the radosgw-admin bucket sync run command to sync all the objects. Reshard scenario : Zonegroup level: Any reshard that happens when the policy is disabled , sync gets stuck after the policy is enabled again. New objects also do not sync at this point. Run the bucket sync run command as a workaround. Bucket level: If any bucket is resharded when the policy is disabled , sync gets stuck after the policy is enabled again. New objects also do not sync at this point. Run the bucket sync run command as a workaround. Note When the policy is set to enabled for the zonegroup and the policy is set to enabled or allowed for the bucket, the pipe configuration takes effect from zonegroup level and not at the bucket level. This is a known issue BZ#2240719 . 5.8.1. Setting bi-directional policy for zonegroups Zonegroup sync policies are created with the new sync policy engine. Any change to the zonegroup sync policy requires a period update and a commit. In the below example, create a group policy and define a data flow for the movement of data from one zone to another. Configure a pipe for the zonegroups to define the buckets that can use this data flow. The system in the below examples include 3 zones: us-east (the master zone), us-west , and us-west-2 . Prerequisites A running Red Hat Ceph Storage cluster. The Ceph Object Gateway is installed. Procedure Create a new sync group with the status set to allowed . Example Note Until a fully configured zonegroup replication policy is created, it is recommended to set the --status to allowed , to prevent the replication from starting. Create a flow policy for the newly created group with the --flow-type set as symmetrical to enable bi-directional replication. Example Create a new pipe called pipe . Example Note Use the * wildcard for zones to include all zones set in the flow policy, and * for buckets to replicate all existing buckets in the zones. After configuring the bucket sync policy, set the --status to enabled . Example Update and commit the new period. Example Note Updating and committing the period is mandatory for a zonegroup policy. Optional: Check the sync source, and destination for a specific bucket. All buckets in zones us-east and us-west replicates bi-directionally. Example The id field in the above output reflects the pipe rule that generated that entry. A single rule can generate multiple sync entries as seen in the below example. 5.8.2. Setting directional policy for zonegroups Set the policy for zone groups uni directionally with the sync policy engine. In the following example, create a group policy and configure the data flow for the movement of data from one zone to another. Also, configure a pipe for the zonegroups to define the buckets that can use this data flow. The system in the following examples includes 3 zones: us-east (the primary zone), us-west (secondary zone), and us-west-2 (backup zone). Here, us-west-2 is a replica of us-west , but data is not replicated back from it. Prerequisites A running Red Hat Ceph Storage cluster. The Ceph Object Gateway is installed. Procedure On the primary zone, create a new sync group with the status set to allowed . Syntax Example Note Until a fully configured zonegroup replication policy is created, it is recommended to set the --status to allowed , to prevent the replication from starting. Create the flow. Syntax Example Create the pipe. 
Syntax Example Update and commit the new period. Example Note Updating and committing the period is mandatory for a zonegroup policy. Verify source and destination of zonegroup using sync info in both the sites. Syntax 5.8.3. Setting directional policy for buckets Set the policy for buckets uni directionally with the sync policy engine. In the following example, create a group policy and configure the data flow for the movement of data from one zone to another. Also, configure a pipe for the zonegroups to define the buckets that can use this data flow. The system in the following examples includes 3 zones: us-east (the primary zone), us-west (secondary zone), and us-west-2 (backup zone). Here, us-west-2 is a replica of us-west , but data is not replicated back from it. The difference between setting the directional policy for zonegroups and buckets is that you need to specify the --bucket option. Prerequisites A running Red Hat Ceph Storage cluster. The Ceph Object Gateway is installed. Procedure On the primary zone, create a new sync group with the status set to allowed . Syntax Example Note Until a fully configured zonegroup replication policy is created, it is recommended to set the --status to allowed , to prevent the replication from starting. Create the flow. Syntax Example Create the pipe. Syntax Example Verify source and destination of zonegroup using sync info in both the sites. Syntax 5.8.4. Setting bi-directional policy for buckets The data flow for the bucket-level policy is inherited from the zonegroup policy. The data flow and pipes need not be changed for the bucket-level policy, as the bucket-level policy flow and pipes are only be a subset of the flow defined in the zonegroup policy. Note A bucket-level policy can enable pipes that are not enabled, except forbidden , at the zonegroup policy. Bucket-level policies do not require period updates. Prerequisites A running Red Hat Ceph Storage cluster. The Ceph Object Gateway is installed. A sync flow is created. Procedure Set the zonegroup policy --status to allowed to permit per-bucket replication. Example Update the period after modifying the zonegroup policy. Example Create a sync group for the bucket we want to synchronize to and set --status to enabled . Example Create a pipe for the group that was created in the step. The flow is inherited from the zonegroup level policy where data flow is symmetrical. Example Note Use wildcards * to specify the source and destination zones for the bucket replication. Optional: To retrieve information about the expected bucket sync sources and targets, as defined by the sync policy, run the radosgw-admin bucket sync info command with the --bucket flag. Example Optional: To retrieve information about the expected sync sources and targets, as defined by the sync policy, run the radosgw-admin sync info command with the --bucket flag. Example 5.8.5. Syncing between buckets You can sync data between the source and destination buckets across zones, but not within the same zone. Note that internally, data is still pulled from the source at the destination zone. A wildcard bucket name means that the current bucket is in the context of the bucket sync policy. There are two types of syncing between buckets: Syncing from a bucket - You need to specify the source bucket. Syncing to a bucket - You need to specify the destination bucket. Prerequisites A running Red Hat Ceph Storage cluster. The Ceph Object Gateway is installed. 
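In the bucket syncing procedures that follow, the expected sync sources and targets can be inspected at any point with the sync info command. A minimal sketch, assuming a hypothetical bucket named buck:
# buck is a placeholder bucket name.
radosgw-admin sync info --bucket=buck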
Syncing from a different bucket Create a sync group to pull data from a bucket of another zone. Syntax Example Pull data. Syntax Example In this example, you can see that the source bucket is buck5 . Optional: Sync from buckets in specific zones. Example Check sync status. Syntax Example Note that there are resolved-hints , which means that the bucket buck5 found about buck4 syncing from it indirectly, and not from its own policy. The policy for buck5 itself is empty. Syncing to a different bucket Create a sync group. Syntax Example Push data. Syntax Example In this example, you can see that the destination bucket is buck5 . Optional: Sync to buckets in specific zones. Example Check sync status. Syntax Example 5.8.6. Filtering objects Filter objects within the bucket with prefixes and tags. You can set object filter at zonegroup level policy as well. If the --bucket option is used, then it is set at bucket level for a bucket. In the following example, sync from buck1 bucket from one zone is synced to buck1 bucket in another zone with the objects that start with the prefix foo/ . Similarly, you can filter objects that have tags such as color=blue . Prefixes and tags can be combined, in which objects need to have both in the order to be synced. The priority parameter can also be passed, and it is used to determine when multiple different rules are there that are matches. This configuration determines that rules to be used. Note If the tag in sync policy has more than one tags, while syncing objects, it sync objects that match at least one tag, the key value pair. Objects might not match all the tag sets. If the prefix and tag is set, then to sync the object to another zone, the objects must have prefix and any one tag should match. Only then, it syncs with each other. Prerequisites At least two running Red Hat Ceph Storage clusters. The Ceph Object Gateway is installed. Buckets are created. Procedure Create a new sync group. If you want to create at the bucket level, use the --bucket option. Syntax Example Sync between buckets where the object matches the tags. The flow is inherited from the zonegroup level policy where data flow is symmetrical. Syntax Example Sync between buckets where the object matches the prefix. The flow is inherited from the zonegroup level policy where data flow is symmetrical. Syntax Example Check the updated sync. Syntax Example Note In the example, only two different destinations and no sources reflect, one each for configuration. When the sync process happens, it selects the relevant rule for each object it syncs. 5.8.7. Disabling policy between buckets You can disable the policy between buckets along with sync information with the forbidden or allowed state. See the Multi-site sync policy group state for the different combinations that can used for zonegroup and bucket level sync policies. In certain cases, to interrupt the replication between two buckets, you can set the group policy for the bucket to be forbidden . You can also disable policy at zonegroup level, if the sync that is set does not happen for any of the buckets. Note You can also create a sync policy in disabled state with allowed or forbidden state using the radosgw-admin sync group create command. Prerequisites A running Red Hat Ceph Storage cluster. The Ceph Object Gateway is installed. Procedure Run the sync group modify command to change the status from allowed to forbidden . Example In this example, the replication of the bucket buck is interrupted between zones us-east and us-west . 
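A minimal sketch of the interruption step described above, assuming a hypothetical group ID buck-default for the bucket buck:
# buck-default and buck are placeholder names.
radosgw-admin sync group modify --bucket=buck --group-id=buck-default --status=forbidden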
Note No update and commit for the period is required as this is a bucket sync policy. Optional: Run sync info command command to check the status of the sync for bucket buck . Example Note There are no source and destination targets as the replication is interrupted. 5.8.8. Using destination parameters A pipe configuration specifies the specific buckets that utilize the data flow established for a group, along with its associated properties. In pipe configuration, the destination parameter specifies the location of the objects, if the destination bucket or zone matches the pipe configuration. There are 3 types of destination parameters: Storage class Destination owner translation User mode 5.8.8.1. Destination Params: Storage Class Using this method, you can replicate objects to a specific storage class in the destination. Syntax for configuring storage class of the destination objects: Example: 5.8.8.2. Destination Params: Destination Owner Translation Using this method, you can replicate objects from the source bucket to the destination bucket of different owners without configuring any user or bucket policies. Syntax for setting the destination objects owner as the destination bucket owner. This requires specifying the UID of the destination bucket: Example: 5.8.8.3. Destination Params: User Mode User mode ensures that the user has permissions to both read the objects, and write to the destination bucket. This requires that the uid of the user (which in its context the operation executes) is specified. To replicate objects to the destination bucket in the destination zone, the user must have both read and write permissions for that bucket. By default, the user mode is set to system (--mode=system). To configure the user mode, specify the mode as user along with the user ID (--uid) of the user. You can set the user ID for the user mode to execute the sync operation for permissions validation. Note When a non-admin user sets the mode as system at the zone-group level with a symmetrical flow: The bucket owner can only write IO to their bucket, and it will sync. The user whose mode is set to system can only write to their bucket and not to other users buckets. The admin or the system user can write to all buckets, and it will sync to the other site. When --mode=system for any user who is not an admin user, the behavior is the same as mode user . User can write only if there is permission to write or read to the destination bucket If the admin/system user sets mode as --mode=user , then the non-admin user can write objects to its owned bucket. Objects will not sync to the destination as the mode is user. Syntax: Example: 5.9. Multi-site Ceph Object Gateway command line usage As a storage administrator, you can have a good understanding of how to use the Ceph Object Gateway in a multi-site environment. You can learn how to better manage the realms, zone groups, and zones in a multi-site environment. Prerequisites A running Red Hat Ceph Storage. Deployment of the Ceph Object Gateway software. Access to a Ceph Object Gateway node or container. 5.9.1. Realms A realm represents a globally unique namespace consisting of one or more zonegroups containing one or more zones, and zones containing buckets, which in turn contain objects. A realm enables the Ceph Object Gateway to support multiple namespaces and their configuration on the same hardware. A realm contains the notion of periods. Each period represents the state of the zone group and zone configuration in time. 
Each time you make a change to a zonegroup or zone, update the period and commit it. Red Hat recommends creating realms for new clusters. 5.9.1.1. Creating a realm To create a realm, use the realm create command and specify the realm name. Procedure Create a realm. Syntax Example Important Do not use the realm with the --default flag if the data and metadata are stored in the default.rgw.data and default.rgw.index pools. If a new realm is set as the default and these pools contain important data, the radosgw-admin utility can fail to manage this data properly. Only use the --default flag if necessary to specify the realm as the default and if you do not need the existing data or metadata in the default.rgw pools. If the existing data or metadata is needed, either migrate the default configuration to a multi-site or realm setup or avoid setting the new realm as the default. For more information about migrating to a multi-site, see Migrating a single site system to multi-site . By specifying --default , the realm is called implicitly with each radosgw-admin call unless --rgw-realm and the realm name are explicitly provided. Optional: Change the default realm. Syntax Example 5.9.1.2. Making a Realm the Default One realm in the list of realms should be the default realm. There may be only one default realm. If there is only one realm and it wasn't specified as the default realm when it was created, make it the default realm. Alternatively, to change which realm is the default, run the following command: Note When the realm is default, the command line assumes --rgw-realm= REALM_NAME as an argument. 5.9.1.3. Deleting a Realm To delete a realm, run the realm delete command and specify the realm name. Syntax Example 5.9.1.4. Getting a realm To get a realm, run the realm get command and specify the realm name. Syntax Example The CLI will echo a JSON object with the realm properties. Use > and an output file name to output the JSON object to a file. 5.9.1.5. Setting a realm To set a realm, run the realm set command, specify the realm name, and --infile= with an input file name. Syntax Example 5.9.1.6. Listing realms To list realms, run the realm list command: Example 5.9.1.7. Listing Realm Periods To list realm periods, run the realm list-periods command. Example 5.9.1.8. Pulling a Realm To pull a realm from the node containing the master zone group and master zone to a node containing a secondary zone group or zone, run the realm pull command on the node that will receive the realm configuration. Syntax 5.9.1.9. Renaming a Realm A realm is not part of the period. Consequently, renaming the realm is only applied locally, and will not get pulled with realm pull . Important When renaming a realm with multiple zones, run the command on each zone. Procedure Rename the realm. Syntax Note Do NOT use realm set to change the name parameter. That changes the internal name only. Specifying --rgw-realm would still use the old realm name. Example Commit the changes. Syntax Example 5.9.2. Zone Groups The Ceph Object Gateway supports multi-site deployments and a global namespace by using the notion of zone groups. Formerly called a region, a zone group defines the geographic location of one or more Ceph Object Gateway instances within one or more zones. Configuring zone groups differs from typical configuration procedures, because not all of the settings end up in a Ceph configuration file. You can list zone groups, get a zone group configuration, and set a zone group configuration. 
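For orientation, minimal sketches of the list, get, and set operations covered in the following subsections; the zone group name us and the file zonegroup.json are placeholders:
radosgw-admin zonegroup list
radosgw-admin zonegroup get --rgw-zonegroup=us
radosgw-admin zonegroup set --infile=zonegroup.json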
Note The radosgw-admin zonegroup operations can be performed on any node within the realm, because the step of updating the period propagates the changes throughout the cluster. However, radosgw-admin zone operations MUST be performed on a host within the zone. 5.9.2.1. Creating a Zone Group Creating a zone group consists of specifying the zone group name. Creating a zone assumes it will live in the default realm unless --rgw-realm= REALM_NAME is specified. If the zonegroup is the master zonegroup, specify the --master flag. Important Do not create the zone group with the --default flag if the data and metadata are stored in the default.rgw.data and default.rgw.index pools. If a new zone group is set as the default and these pools contain important data, the radosgw-admin utility can fail to manage this data properly. Only use the --default flag if necessary to specify the zone group as the default and if you do not need the existing data or metadata in the default.rgw pools. If the existing data or metadata is needed, either migrate the default configuration to a multi-site or zone group setup or avoid setting the new zone group as the default. For more information about migrating to a multi-site, see Migrating a single site system to multi-site . By specifying --default , the zone group is called implicitly with each radosgw-admin call unless --rgw-zonegroup and the zone group name are explicitly provided. Procedure Create a zone group. Syntax Example Optional: Change a zone group setting. Syntax Example Optional: Change the default zone group. Syntax Example Commit the change. Syntax Example 5.9.2.2. Making a Zone Group the Default One zonegroup in the list of zonegroups should be the default zonegroup. There may be only one default zonegroup. If there is only one zonegroup and it wasn't specified as the default zonegroup when it was created, make it the default zonegroup. Alternatively, to change which zonegroup is the default, run the following command: Example Note When the zonegroup is the default, the command line assumes --rgw-zonegroup= ZONE_GROUP_NAME as an argument. Then, update the period: 5.9.2.3. Adding a Zone to a Zone Group To add a zone to a zonegroup, you MUST run this command on a host that will be in the zone. To add a zone to a zonegroup, run the following command: Syntax Then, update the period: Example 5.9.2.4. Removing a Zone from a Zone Group To remove a zone from a zonegroup, run the following command: Syntax Then, update the period: Example 5.9.2.5. Renaming a Zone Group To rename a zonegroup, run the following command: Syntax Then, update the period: Example 5.9.2.6. Deleting a Zone group To delete a zonegroup, run the following command: Syntax Then, update the period: Example 5.9.2.7. Listing Zone Groups A Ceph cluster contains a list of zone groups. To list the zone groups, run the following command: The radosgw-admin returns a JSON formatted list of zone groups. 5.9.2.8. 
Getting a Zone Group To view the configuration of a zone group, run the following command: Syntax The zone group configuration looks like this: { "id": "abd3004c-9933-4157-a91e-de3fb0584f3e", "name": "shared", "api_name": "shared", "is_master": true, "endpoints": [ "http://pluto003:5000" ], "hostnames": [], "hostnames_s3website": [], "master_zone": "26a46c38-f7ce-4d97-b356-c251415c062b", "zones": [ { "id": "26a46c38-f7ce-4d97-b356-c251415c062b", "name": "primary", "endpoints": [ "http://pluto003:5000" ], "log_meta": false, "log_data": true, "bucket_index_max_shards": 11, "read_only": false, "tier_type": "", "sync_from_all": true, "sync_from": [], "redirect_zone": "", "supported_features": [ "compress-encrypted", "resharding" ] }, { "id": "4fa4be7c-4ecd-4a2d-83b4-0e4d5a9d915f", "name": "archive", "endpoints": [ "http://pluto010:5000" ], "log_meta": false, "log_data": true, "bucket_index_max_shards": 11, "read_only": false, "tier_type": "archive", "sync_from_all": false, "sync_from": [ "primary" ], "redirect_zone": "", "supported_features": [ "compress-encrypted", "resharding" ] }, { "id": "e3792738-60c4-4069-adf3-7f253b622197", "name": "secondary", "endpoints": [ "http://pluto006:5000" ], "log_meta": false, "log_data": true, "bucket_index_max_shards": 11, "read_only": false, "tier_type": "", "sync_from_all": true, "sync_from": [], "redirect_zone": "", "supported_features": [ "compress-encrypted", "resharding" ] } ], "placement_targets": [ { "name": "default-placement", "tags": [], "storage_classes": [ "STANDARD" ] } ], "default_placement": "default-placement", "realm_id": "2b7aa9ac-17cb-4a3e-9119-9065368cd3a8", "sync_policy": { "groups": [] }, "enabled_features": [ "resharding" ] } Field Description id A unique string assigned to the zone group. name Name of the zone group. api_name Name of the RADOS API used for data replication across different zones; usually same as the "name" field unless specified differently. is_master True if the zone group is master; otherwise false. There can be multiple zone groups in multisite configuration. endpoints Endpoints specified during zone group.configuration. hostnames Optional field to specify. hostnames. hostnames_s3website Hostnames for S3. website. master_zone Zone designated as the metadata master during zone creation with the parameter --master. zones List of zones participating in sync within the zone group. log_meta Optional parameters to turn on/off metadata logging. log_data Optional parameters to turn on/off the data logging. bucket_index_max_shards Default number of bucket index shards for newly created buckets stored in the zone. read_only False by default (read-write zone); true if the zone is explicitly configured to be read-only. tier_type Zone configured with a sync module tier type such as archive zone, Elasticsearch, pub/sub, or cloud sync modules. sync_from_all True by default when the zone syncs from all other zones in the zone group; false if the zone is intended to sync only from one or a few zones specified by the "sync_from" field. sync_from List of one or more zone names to sync from. supported_features Feature list available on the zone, such as compression/encryption, resharding, and so on. placement_targets Shows default and custom placement information, including the bucket index pool, storage classes, and so on. sync_policy Sync policy configuration information if present. enabled_features Features that are enabled on the zone, such as compression/encryption, resharding, etc. 5.9.2.9. 
Setting a Zone Group Defining a zone group consists of creating a JSON object, specifying at least the required settings: name : The name of the zone group. Required. api_name : The API name for the zone group. Optional. is_master : Determines if the zone group is the master zone group. Required. Note: You can only have one master zone group. endpoints : A list of all the endpoints in the zone group. For example, you may use multiple domain names to refer to the same zone group. Remember to escape the forward slashes ( \/ ). You may also specify a port ( fqdn:port ) for each endpoint. Optional. hostnames : A list of all the hostnames in the zone group. For example, you may use multiple domain names to refer to the same zone group. Optional. The rgw dns name setting will automatically be included in this list. You should restart the gateway daemon(s) after changing this setting. master_zone : The master zone for the zone group. Optional. Uses the default zone if not specified. Note You can only have one master zone per zone group. zones : A list of all zones within the zone group. Each zone has a name (required), a list of endpoints (optional), and whether or not the gateway will log metadata and data operations (false by default). placement_targets : A list of placement targets (optional). Each placement target contains a name (required) for the placement target and a list of tags (optional) so that only users with the tag can use the placement target (i.e., the user's placement_tags field in the user info). default_placement : The default placement target for the object index and object data. Set to default-placement by default. You may also set a per-user default placement in the user info for each user. To set a zone group, create a JSON object consisting of the required fields, save the object to a file, for example, zonegroup.json ; then, run the following command: Example Where zonegroup.json is the JSON file you created. Important The default zone group is_master setting is true by default. If you create a new zone group and want to make it the master zone group, you must either set the default zone group is_master setting to false , or delete the default zone group. Finally, update the period: Example 5.9.2.10. Setting a Zone Group Map Setting a zone group map consists of creating a JSON object consisting of one or more zone groups, and setting the master_zonegroup for the cluster. Each zone group in the zone group map consists of a key/value pair, where the key setting is equivalent to the name setting for an individual zone group configuration, and the val is a JSON object consisting of an individual zone group configuration. You may only have one zone group with is_master equal to true , and it must be specified as the master_zonegroup at the end of the zone group map. The following JSON object is an example of a default zone group map. To set a zone group map, run the following command: Example Where zonegroupmap.json is the JSON file you created. Ensure that you have zones created for the ones specified in the zone group map. Finally, update the period. Example 5.9.3. Zones Ceph Object Gateway supports the notion of zones. A zone defines a logical group consisting of one or more Ceph Object Gateway instances. Configuring zones differs from typical configuration procedures, because not all of the settings end up in a Ceph configuration file. You can list zones, get a zone configuration, and set a zone configuration. 
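Correspondingly, minimal sketches of listing zones and retrieving a single zone configuration; the zone name us-east is a placeholder:
radosgw-admin zone list
radosgw-admin zone get --rgw-zone=us-east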
Important All radosgw-admin zone operations MUST be issued on a host that operates or will operate within the zone. 5.9.3.1. Creating a Zone To create a zone, specify a zone name. If it is a master zone, specify the --master option. Only one zone in a zone group can be a master zone. To add the zone to a zonegroup, specify the --rgw-zonegroup option with the zonegroup name. Important Zones must be created on a Ceph Object Gateway node that will be within the zone. Important Do not create the zone with the --default flag if the data and metadata are stored in the default.rgw.data and default.rgw.index pools. If a new zone is set as the default and these pools contain important data, the radosgw-admin utility can fail to manage this data properly. Only use the --default flag if necessary to specify the zone as the default and if you do not need the existing data or metadata in the default.rgw pools. If the existing data or metadata is needed, either migrate the default configuration to a multi-site or zone setup or avoid setting the new zone as the default. For more information about migrating to a multi-site, see Migrating a single site system to multi-site . By specifying --default , the zone is called implicitly with each radosgw-admin call unless --rgw-zone and the zone name are explicitly provided. Procedure Create the zone. Syntax Commit the change. Syntax Example 5.9.3.2. Deleting a zone To delete a zone, first remove it from the zonegroup. Procedure Remove the zone from the zonegroup: Syntax Update the period: Example Delete the zone: Important This procedure MUST be used on a host within the zone. Syntax Update the period: Example Important Do not delete a zone without removing it from a zone group first. Otherwise, updating the period will fail. If the pools for the deleted zone will not be used anywhere else, consider deleting the pools. Replace DELETED_ZONE_NAME in the example below with the deleted zone's name. Important Once Ceph deletes the zone pools, it deletes all of the data within them in an unrecoverable manner. Only delete the zone pools if Ceph clients no longer need the pool contents. Important In a multi-realm cluster, deleting the .rgw.root pool along with the zone pools will remove ALL the realm information for the cluster. Ensure that .rgw.root does not contain other active realms before deleting the .rgw.root pool. Syntax Important After deleting the pools, restart the RGW process. 5.9.3.3. Modifying a Zone To modify a zone, specify the zone name and the parameters you wish to modify. Important Zones should be modified on a Ceph Object Gateway node that will be within the zone. Syntax Then, update the period: Example 5.9.3.4. Listing Zones As root , to list the zones in a cluster, run the following command: Example 5.9.3.5. 
Getting a Zone As root , to get the configuration of a zone, run the following command: Syntax The default zone looks like this: { "id": "49408bb1-7e63-4324-b713-3d7778352f2c", "name": "zg1-2", "domain_root": "zg1-2.rgw.meta:root", "control_pool": "zg1-2.rgw.control", "gc_pool": "zg1-2.rgw.log:gc", "lc_pool": "zg1-2.rgw.log:lc", "log_pool": "zg1-2.rgw.log", "intent_log_pool": "zg1-2.rgw.log:intent", "usage_log_pool": "zg1-2.rgw.log:usage", "roles_pool": "zg1-2.rgw.meta:roles", "reshard_pool": "zg1-2.rgw.log:reshard", "user_keys_pool": "zg1-2.rgw.meta:users.keys", "user_email_pool": "zg1-2.rgw.meta:users.email", "user_swift_pool": "zg1-2.rgw.meta:users.swift", "user_uid_pool": "zg1-2.rgw.meta:users.uid", "otp_pool": "zg1-2.rgw.otp", "notif_pool": "zg1-2.rgw.log:notif", "topics_pool": "zg1-2.rgw.meta:topics", "account_pool": "zg1-2.rgw.meta:accounts", "group_pool": "zg1-2.rgw.meta:groups", "system_key": { "access_key": "1234567890", "secret_key": "pencil" }, "placement_pools": [ { "key": "default-placement", "val": { "index_pool": "zg1-2.rgw.buckets.index", "storage_classes": { "STANDARD": { "data_pool": "zg1-2.rgw.buckets.data" } }, "data_extra_pool": "zg1-2.rgw.buckets.non-ec", "index_type": 0, "inline_data": true } } ], "realm_id": "7b65ec9b-149d-4200-8bb0-0390565c11e6" } Field Description id Unique string assigned to the zone. domain_root Root pool where all system configuration related to zone lies. control_pool Used for internal watch-notify mechanism. gc_pool Garbage collection pool. lc_pool Endpoints specified during zone group configuration. log_pool Multisite related datalog and mdlog is stored in this pool. usage_log_pool Usage logging that accumulates statistics about user operations. roles_pool Stores user roles information. reshard_pool Stores bucket resharding log entries. user_keys_pool Pool contains access keys and secret keys for each user ID. user_email_pool Pool contains email addresses associated to a user ID. user_swift_pool Pool contains the Swift subuser information for a user ID. user_uid_pool Stores user IDs. otp_pool Pool to store multi factor authentication related information. notif_pool Persistent delivery notification related information. topics_pool Store notification topics. account_pool Stores user account related information. system_key Shows system access and secret key used internally for multisite sync. placement_pools Shows default and custom placement information including the bucket index pool storage classes and so on. realm_id If multisite is configured,shows the realm id., resharding, etc. 5.9.3.6. Setting a Zone Configuring a zone involves specifying a series of Ceph Object Gateway pools. For consistency, we recommend using a pool prefix that is the same as the zone name. See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for details on configuring pools. Important Zones should be set on a Ceph Object Gateway node that will be within the zone. To set a zone, create a JSON object consisting of the pools, save the object to a file, for example, zone.json ; then, run the following command, replacing ZONE_NAME with the name of the zone: Example Where zone.json is the JSON file you created. Then, as root , update the period: Example 5.9.3.7. Renaming a Zone To rename a zone, specify the zone name and the new zone name. Issue the following command on a host within the zone: Syntax Then, update the period: Example
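A minimal sketch of the rename-and-commit sequence described above; the zone and zone group names are placeholders:
radosgw-admin zone rename --rgw-zone=default --zone-new-name=us-east-1 --rgw-zonegroup=us
radosgw-admin period update --commit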
[ "cat ./haproxy.cfg global log 127.0.0.1 local2 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 7000 user haproxy group haproxy daemon stats socket /var/lib/haproxy/stats defaults mode http log global option httplog option dontlognull option http-server-close option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 30s timeout server 30s timeout http-keep-alive 10s timeout check 10s timeout client-fin 1s timeout server-fin 1s maxconn 6000 listen stats bind 0.0.0.0:1936 mode http log global maxconn 256 clitimeout 10m srvtimeout 10m contimeout 10m timeout queue 10m JTH start stats enable stats hide-version stats refresh 30s stats show-node ## stats auth admin:password stats uri /haproxy?stats stats admin if TRUE frontend main bind *:5000 acl url_static path_beg -i /static /images /javascript /stylesheets acl url_static path_end -i .jpg .gif .png .css .js use_backend static if url_static default_backend app maxconn 6000 backend static balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000 backend app balance roundrobin fullconn 6000 server app8 host01:8080 check maxconn 2000 server app9 host02:8080 check maxconn 2000 server app10 host03:8080 check maxconn 2000", "ceph config set osd osd_pool_default_pg_num 50 ceph config set osd osd_pool_default_pgp_num 50", "radosgw-admin realm create --rgw-realm REALM_NAME --default", "radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name NEW_ZONE_GROUP_NAME radosgw-admin zone rename --rgw-zone default --zone-new-name NEW_ZONE_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME", "radosgw-admin zonegroup modify --api-name NEW_ZONE_GROUP_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME", "radosgw-admin zonegroup modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --endpoints http://ENDPOINT --master --default", "radosgw-admin zone modify --rgw-realm REALM_NAME --rgw-zonegroup NEW_ZONE_GROUP_NAME --rgw-zone NEW_ZONE_NAME --endpoints http://ENDPOINT --master --default", "radosgw-admin user create --uid USER_ID --display-name DISPLAY_NAME --access-key ACCESS_KEY --secret SECRET_KEY --system", "radosgw-admin period update --commit", "ceph orch ls | grep rgw", "ceph config set client.rgw.SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw.SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw.SERVICE_NAME rgw_zone PRIMARY_ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-1", "systemctl restart ceph-radosgw@rgw.`hostname -s`", "ceph orch restart _RGW_SERVICE_NAME_", "ceph orch restart rgw.rgwsvcid.mons-1.jwgwwp", "cephadm shell", "radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin realm pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin period pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin period pull --url=http://10.74.249.26:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zone create --rgw-zonegroup=_ZONE_GROUP_NAME_ --rgw-zone=_SECONDARY_ZONE_NAME_ 
--endpoints=http://_RGW_SECONDARY_HOSTNAME_:_RGW_PRIMARY_PORT_NUMBER_1_ --access-key=_SYSTEM_ACCESS_KEY_ --secret=_SYSTEM_SECRET_KEY_ [--read-only]", "radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-2 --endpoints=http://rgw2:80 --access-key=LIPEYZJLTWXRKXS9LPJC --secret-key=IsAje0AVDNXNw48LjMAimpCpI7VaxJYSnfD0FFKQ", "radosgw-admin zone rm --rgw-zone=default ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it ceph osd pool rm default.rgw.meta default.rgw.meta --yes-i-really-really-mean-it ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone SECONDARY_ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm test_realm ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup us ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone us-east-2", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "systemctl list-units | grep ceph", "systemctl start ceph- FSID @ DAEMON_NAME systemctl enable ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]_realm.us-east-2.host04.ahdtsw.service systemctl enable [email protected]_realm.us-east-2.host04.ahdtsw.service", "radosgw-admin zone create --rgw-zonegroup={ ZONE_GROUP_NAME } --rgw-zone={ ZONE_NAME } --endpoints={http:// FQDN : PORT },{http:// FQDN : PORT } --tier-type=archive", "radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east --endpoints={http://example.com:8080} --tier-type=archive", "radosgw-admin zone modify --rgw-zone archive --sync_from primary --sync_from_all false --sync-from-rm secondary radosgw-admin period update --commit", "ceph config set client.rgw rgw_max_objs_per_shard 50000", "<?xml version=\"1.0\" ?> <LifecycleConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <Rule> <ID>delete-1-days-az</ID> <Filter> <Prefix></Prefix> <ArchiveZone /> 1 </Filter> <Status>Enabled</Status> <Expiration> <Days>1</Days> </Expiration> </Rule> </LifecycleConfiguration>", "radosgw-admin lc get --bucket BUCKET_NAME", "radosgw-admin lc get --bucket test-bkt { \"prefix_map\": { \"\": { \"status\": true, \"dm_expiration\": true, \"expiration\": 0, \"noncur_expiration\": 2, \"mp_expiration\": 0, \"transitions\": {}, \"noncur_transitions\": {} } }, \"rule_map\": [ { \"id\": \"Rule 1\", \"rule\": { \"id\": \"Rule 1\", \"prefix\": \"\", \"status\": \"Enabled\", \"expiration\": { \"days\": \"\", \"date\": \"\" }, \"noncur_expiration\": { \"days\": \"2\", \"date\": \"\" }, \"mp_expiration\": { \"days\": \"\", \"date\": \"\" }, \"filter\": { \"prefix\": \"\", \"obj_tags\": { \"tagset\": {} }, \"archivezone\": \"\" 1 }, \"transitions\": {}, \"noncur_transitions\": {}, \"dm_expiration\": true } } ] }", "radosgw-admin bucket link --uid NEW_USER_ID --bucket BUCKET_NAME --yes-i-really-mean-it", "radosgw-admin bucket link --uid arcuser1 --bucket arc1-deleted-da473fbbaded232dc5d1e434675c1068 --yes-i-really-mean-it", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default --read-only=false", "radosgw-admin period update --commit", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm pull --url= URL_TO_PRIMARY_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --master --default", "radosgw-admin period update --commit", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only radosgw-admin zone modify --rgw-zone= ZONE_NAME --read-only", "radosgw-admin period update --commit", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin zone modify --rgw-zone archive --sync_from secondary --sync_from_all false --sync-from-rm primary radosgw-admin period update --commit", "radosgw-admin zone modify --rgw-zone archive --sync_from primary --sync_from_all false --sync-from-rm secondary radosgw-admin period update --commit", "radosgw-admin realm create --rgw-realm= REALM_NAME --default", "radosgw-admin realm create --rgw-realm=ldc1 --default", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=ldc1zg --endpoints=http://rgw1:80 --rgw-realm=ldc1 --master --default", "radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]", "radosgw-admin zone create --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z --master --default --endpoints=http://rgw.example.com", "radosgw-admin period update --commit", "ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw rgw --realm=ldc1 --zone=ldc1z --placement=\"1 host01\"", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc1z", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm create --rgw-realm= REALM_NAME --default", "radosgw-admin realm create --rgw-realm=ldc2 --default", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME --endpoints=http:// RGW_NODE_NAME :80 --rgw-realm= REALM_NAME --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=ldc2zg --endpoints=http://rgw2:80 --rgw-realm=ldc2 --master --default", "radosgw-admin zone create --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]", "radosgw-admin zone create --rgw-zonegroup=ldc2zg --rgw-zone=ldc2z --master --default --endpoints=http://rgw.example.com", "radosgw-admin period update --commit", "ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw rgw --realm=ldc2 --zone=ldc2z --placement=\"1 host01\"", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. 
SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm ldc2 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup ldc2zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone ldc2z", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm create --rgw-realm= REPLICATED_REALM_1 --default", "radosgw-admin realm create --rgw-realm=rdc1 --default", "radosgw-admin zonegroup create --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=http://_RGW_NODE_NAME :80 --rgw-realm=_RGW_REALM_NAME --master --default", "radosgw-admin zonegroup create --rgw-zonegroup=rdc1zg --endpoints=http://rgw1:80 --rgw-realm=rdc1 --master --default", "radosgw-admin zone create --rgw-zonegroup= RGW_ZONE_GROUP --rgw-zone=_MASTER_RGW_NODE_NAME --master --default --endpoints= HTTP_FQDN [, HTTP_FQDN ]", "radosgw-admin zone create --rgw-zonegroup=rdc1zg --rgw-zone=rdc1z --master --default --endpoints=http://rgw.example.com", "radosgw-admin user create --uid=\" SYNCHRONIZATION_USER \" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone= RGW_ZONE --access-key= ACCESS_KEY --secret= SECRET_KEY", "radosgw-admin user create --uid=\"synchronization-user\" --display-name=\"Synchronization User\" --system radosgw-admin zone modify --rgw-zone=rdc1zg --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8", "radosgw-admin period update --commit", "ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw rgw --realm=rdc1 --zone=rdc1z --placement=\"1 host01\"", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc1z", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin realm pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8", "radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key= ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin period pull --url=https://tower-osd1.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8", "radosgw-admin zone create --rgw-zone= RGW_ZONE --rgw-zonegroup= RGW_ZONE_GROUP --endpoints=https://tower-osd4.cephtips.com --access-key=_ACCESS_KEY --secret-key= SECRET_KEY", "radosgw-admin zone create --rgw-zone=rdc2z --rgw-zonegroup=rdc1zg --endpoints=https://tower-osd4.cephtips.com --access-key=3QV0D6ZMMCJZMSCXJ2QJ --secret-key=VpvQWcsfI9OPzUCpR4kynDLAbqa1OIKqRB6WEnH8", "radosgw-admin period update --commit", "ceph orch apply rgw SERVICE_NAME --realm= REALM_NAME --zone= ZONE_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 \"", "ceph orch apply rgw rgw --realm=rdc1 --zone=rdc2z --placement=\"1 host04\"", "ceph config set client.rgw. SERVICE_NAME rgw_realm REALM_NAME ceph config set client.rgw. SERVICE_NAME rgw_zonegroup ZONE_GROUP_NAME ceph config set client.rgw. SERVICE_NAME rgw_zone ZONE_NAME", "ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_realm rdc1 ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zonegroup rdc1zg ceph config set client.rgw.rgwsvcid.mons-1.jwgwwp rgw_zone rdc2z", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin sync status", "radosgw-admin sync status realm 59762f08-470c-46de-b2b1-d92c50986e67 (ldc2) zonegroup 7cf8daf8-d279-4d5c-b73e-c7fd2af65197 (ldc2zg) zone 034ae8d3-ae0c-4e35-8760-134782cb4196 (ldc2z) metadata sync no sync (zone is master)", "radosgw-admin sync status --rgw-realm RGW_REALM_NAME", "radosgw-admin sync status --rgw-realm rdc1 realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source realm 73c7b801-3736-4a89-aaf8-e23c96e6e29d (rdc1) zonegroup d67cc9c9-690a-4076-89b8-e8127d868398 (rdc1zg) zone 67584789-375b-4d61-8f12-d1cf71998b38 (rdc2z) metadata sync syncing full sync: 0/64 shards incremental sync: 64/64 shards metadata is caught up with master data sync source: 705ff9b0-68d5-4475-9017-452107cec9a0 (rdc1z) syncing full sync: 0/128 shards incremental sync: 128/128 shards data is caught up with source", "radosgw-admin user create --uid=\" LOCAL_USER\" --display-name=\"Local user\" --rgw-realm=_REALM_NAME --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME", "radosgw-admin user create --uid=\"local-user\" --display-name=\"Local user\" --rgw-realm=ldc1 --rgw-zonegroup=ldc1zg --rgw-zone=ldc1z", "radosgw-admin sync info --bucket=buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west-2\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }", "radosgw-admin sync policy get --bucket= BUCKET_NAME", "radosgw-admin sync policy get --bucket=mybucket", "radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden", "radosgw-admin sync group create --group-id=mygroup1 --status=enabled", "radosgw-admin bucket sync run", "radosgw-admin bucket sync run", "radosgw-admin sync group modify --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled | allowed | forbidden", "radosgw-admin sync group modify --group-id=mygroup1 --status=forbidden", "radosgw-admin bucket sync run", "radosgw-admin bucket sync run", "radosgw-admin sync group get --bucket= BUCKET_NAME --group-id= GROUP_ID", "radosgw-admin sync group get --group-id=mygroup", "radosgw-admin sync group remove --bucket= BUCKET_NAME --group-id= GROUP_ID", "radosgw-admin sync group remove --group-id=mygroup", "radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE", "radosgw-admin sync group flow create --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2", "radosgw-admin sync group flow 
remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE --dest-zone= DESTINATION_ZONE", "radosgw-admin sync group flow remove --bucket= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2", "radosgw-admin sync group flow remove --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=symmetrical --zones= ZONE_NAME1 , ZONE_NAME2", "radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET --dest-bucket-id= DESTINATION_BUCKET_ID --prefix= SOURCE_PREFIX --prefix-rm --tags-add= KEY1=VALUE1 , KEY2=VALUE2 ,.. --tags-rm= KEY1=VALUE1 , KEY2=VALUE2 , ... --dest-owner= OWNER_ID --storage-class= STORAGE_CLASS --mode= USER --uid= USER_ID", "radosgw-admin sync group pipe modify --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET1 --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... --dest-bucket= DESTINATION_BUCKET1 --dest-bucket-id=_DESTINATION_BUCKET-ID", "radosgw-admin sync group pipe modify --group-id=zonegroup --pipe-id=pipe --dest-zones='primary','secondary','tertiary' --source-zones='primary','secondary','tertiary' --source-bucket=pri-bkt-1 --dest-bucket=pri-bkt-1", "radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' ZONE_NAME ',' ZONE_NAME2 '... --source-bucket= SOURCE_BUCKET , --source-bucket-id= SOURCE_BUCKET_ID --dest-zones=' ZONE_NAME ',' ZONE_NAME2 '... 
--dest-bucket= DESTINATION_BUCKET --dest-bucket-id= DESTINATION_BUCKET-ID", "radosgw-admin sync group pipe remove --group-id=zonegroup --pipe-id=pipe --dest-zones='primary','secondary','tertiary' --source-zones='primary','secondary','tertiary' --source-bucket=pri-bkt-1 --dest-bucket=pri-bkt-1", "radosgw-admin sync group pipe remove --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID", "radosgw-admin sync group pipe remove -bucket-name=mybuck --group-id=zonegroup --pipe-id=pipe", "radosgw-admin sync info --bucket= BUCKET_NAME --effective-zone-name= ZONE_NAME", "radosgw-admin sync info", "radosgw-admin sync group create --group-id=group1 --status=allowed", "radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror --flow-type=symmetrical --zones=us-east,us-west", "radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'", "radosgw-admin sync group modify --group-id=group1 --status=enabled", "radosgw-admin period update --commit", "radosgw-admin sync info -bucket buck { \"sources\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck:115b12b3-....4409.1\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck:115b12b3-....4409.1\" }, } ], }", "radosgw-admin sync group create --group-id= GROUP_ID --status=allowed", "radosgw-admin sync group create --group-id=group1 --status=allowed", "radosgw-admin sync group flow create --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE_NAME --dest-zone= DESTINATION_ZONE_NAME", "radosgw-admin sync group flow create --group-id=group1 --flow-id=us-west-backup --flow-type=directional --source-zone=us-west --dest-zone=us-west-2", "radosgw-admin sync group pipe create --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones=' SOURCE_ZONE_NAME ' --dest-zones=' DESTINATION_ZONE_NAME '", "radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 --source-zones='us-west' --dest-zones='us-west-2'", "radosgw-admin period update --commit", "radosgw-admin sync info", "radosgw-admin sync group create --group-id= GROUP_ID --status=allowed --bucket= BUCKET_NAME", "radosgw-admin sync group create --group-id=group1 --status=allowed --bucket=buck", "radosgw-admin sync group flow create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --flow-id= FLOW_ID --flow-type=directional --source-zone= SOURCE_ZONE_NAME --dest-zone= DESTINATION_ZONE_NAME", "radosgw-admin sync group flow create --bucket-name=buck --group-id=group1 --flow-id=us-west-backup --flow-type=directional --source-zone=us-west --dest-zone=us-west-2", "radosgw-admin sync group pipe create --group-id= GROUP_ID --bucket-name= BUCKET_NAME --pipe-id= PIPE_ID --source-zones=' SOURCE_ZONE_NAME ' --dest-zones=' DESTINATION_ZONE_NAME '", "radosgw-admin sync group pipe create --group-id=group1 --bucket-name=buck --pipe-id=pipe1 --source-zones='us-west' --dest-zones='us-west-2'", "radosgw-admin sync info --bucket-name= BUCKET_NAME", "radosgw-admin sync group modify --group-id=group1 --status=allowed", "radosgw-admin period update --commit", "radosgw-admin sync group create --bucket=buck --group-id=buck-default --status=enabled", "radosgw-admin sync group pipe create --bucket=buck --group-id=buck-default --pipe-id=pipe1 --source-zones='*' 
--dest-zones='*'", "radosgw-admin bucket sync info --bucket buck realm 33157555-f387-44fc-b4b4-3f9c0b32cd66 (india) zonegroup 594f1f63-de6f-4e1e-90b6-105114d7ad55 (shared) zone ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5 (primary) bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1] source zone e0e75beb-4e28-45ff-8d48-9710de06dcd0 bucket :buck[ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1]", "radosgw-admin sync info --bucket buck { \"id\": \"pipe1\", \"source\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"primary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"dest\": { \"zone\": \"secondary\", \"bucket\": \"buck:ffaa5ba4-c1bd-4c17-b176-2fe34004b4c5.16191.1\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"\" } }", "radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled", "radosgw-admin sync group create --bucket=buck4 --group-id=buck4-default --status=enabled", "radosgw-admin sync group pipe create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones= SOURCE_ZONE_NAME --source-bucket= SOURCE_BUCKET_NAME --dest-zones= DESTINATION_ZONE_NAME", "radosgw-admin sync group pipe create --bucket=buck4 --group-id=buck4-default --pipe-id=pipe1 --source-zones='*' --source-bucket=buck5 --dest-zones='*'", "radosgw-admin sync group pipe modify --bucket=buck4 --group-id=buck4-default --pipe-id=pipe1 --source-zones=us-west --source-bucket=buck5 --dest-zones='*'", "radosgw-admin sync info --bucket-name= BUCKET_NAME", "radosgw-admin sync info --bucket=buck4 { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [ \"buck4:115b12b3-....14433.2\" ] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck5\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": \"buck4:115b12b3-....14433.2\" }, }, { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck5\" }, \"dest\": { \"zone\": \"us-west-2\", \"bucket\": \"buck4:115b12b3-....14433.2\" }, } ] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] }", "radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled", "radosgw-admin sync group create --bucket=buck6 --group-id=buck6-default --status=enabled", "radosgw-admin sync group pipe create --bucket-name= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --source-zones= SOURCE_ZONE_NAME --dest-zones= DESTINATION_ZONE_NAME --dest-bucket= DESTINATION_BUCKET_NAME", "radosgw-admin sync group pipe create --bucket=buck6 --group-id=buck6-default --pipe-id=pipe1 --source-zones='*' --dest-zones='*' --dest-bucket=buck5", "radosgw-admin sync group pipe modify --bucket=buck6 --group-id=buck6-default --pipe-id=pipe1 --source-zones='*' --dest-zones='us-west' --dest-bucket=buck5", "radosgw-admin sync info --bucket-name= BUCKET_NAME", "radosgw-admin sync info --bucket buck5 { \"sources\": [], \"dests\": [ { \"id\": \"pipe1\", \"source\": { \"zone\": \"us-west\", \"bucket\": \"buck6:c7887c5b-f6ff-4d5f-9736-aa5cdb4a15e8.20493.4\" }, \"dest\": { \"zone\": \"us-east\", \"bucket\": 
\"buck5\" }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", \"user\": \"s3cmd\" } }, ], \"hints\": { \"sources\": [], \"dests\": [ \"buck5\" ] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } }", "radosgw-admin sync group create --bucket= BUCKET_NAME --group-id= GROUP_ID --status=enabled", "radosgw-admin sync group create --bucket=buck1 --group-id=buck8-default --status=enabled", "radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --tags-add= KEY1 = VALUE1 , KEY2 = VALUE2 --source-zones=' ZONE_NAME1 ',' ZONE_NAME2 ' --dest-zones=' ZONE_NAME1 ',' ZONE_NAME2 '", "radosgw-admin sync group pipe create --bucket=buck1 --group-id=buck1-default --pipe-id=pipe-tags --tags-add=color=blue,color=red --source-zones='*' --dest-zones='*'", "radosgw-admin sync group pipe create --bucket= BUCKET_NAME --group-id= GROUP_ID --pipe-id= PIPE_ID --prefix= PREFIX --source-zones=' ZONE_NAME1 ',' ZONE_NAME2 ' --dest-zones=' ZONE_NAME1 ',' ZONE_NAME2 '", "radosgw-admin sync group pipe create --bucket=buck1 --group-id=buck1-default --pipe-id=pipe-prefix --prefix=foo/ --source-zones='*' --dest-zones='*' \\", "radosgw-admin sync info --bucket= BUCKET_NAME", "radosgw-admin sync info --bucket=buck1", "radosgw-admin sync group modify --group-id buck-default --status forbidden --bucket buck { \"groups\": [ { \"id\": \"buck-default\", \"data_flow\": {}, \"pipes\": [ { \"id\": \"pipe1\", \"source\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"dest\": { \"bucket\": \"*\", \"zones\": [ \"*\" ] }, \"params\": { \"source\": { \"filter\": { \"tags\": [] } }, \"dest\": {}, \"priority\": 0, \"mode\": \"system\", } } ], \"status\": \"forbidden\" } ] }", "radosgw-admin sync info --bucket buck { \"sources\": [], \"dests\": [], \"hints\": { \"sources\": [], \"dests\": [] }, \"resolved-hints-1\": { \"sources\": [], \"dests\": [] }, \"resolved-hints\": { \"sources\": [], \"dests\": [] } }", "radosgw-admin sync group pipe create --bucket-name=BUCKET_NAME --group-id=GROUP_ID --pipe-id=PIPE_ID --source-zones=SOURCE_ZONE_NAME --dest-zones=DESTINATION_ZONE_NAME --dest-bucket=DESTINATION_BUCKET_NAME --storage-class=STORAGE_CLASS", "radosgw-admin sync group create --bucket=buck10 --group-id=buck10-default --status=enabled radosgw-admin sync group pipe create --bucket=buck10 --group-id=buck10-default --pipe-id=pipe-storage-class --source-zones='*' --dest-zones=us-west-2 --storage-class=CHEAP_AND_SLOW", "radosgw-admin sync group pipe create --bucket-name=BUCKET_NAME --group-id=GROUP_ID --pipe-id=PIPE_ID --source-zones=SOURCE_ZONE_NAME --dest-zones=DESTINATION_ZONE_NAME --dest-owner=DESTINATION_OWNER --dest-bucket=DESTINATION_BUCKET_NAME", "radosgw-admin sync group create --bucket=buck11 --group-id=buck11-default --status=enabled radosgw-admin sync group pipe create --bucket=buck11 --group-id=buck11-default --pipe-id=pipe-dest-owner --source-zones='*' --dest-zones='*' --dest-bucket=buck12 --dest-owner=joe", "radosgw-admin sync group pipe create --bucket-name=BUCKET_NAME --group-id=GROUP_ID --pipe-id=PIPE_ID --source-zones=SOURCE_ZONE_NAME --dest-zones=DESTINATION_ZONE_NAME bucket=DESTINATION_BUCKET_NAME --mode=user --uid=UID_OF_USER", "radosgw-admin sync group pipe modify --bucket=buck11 --group-id=buck11-default --pipe-id=pipe-dest-owner --mode=user --uid=jenny", "radosgw-admin realm create --rgw-realm= REALM_NAME", "radosgw-admin realm create 
--rgw-realm=test_realm", "radosgw-admin realm default --rgw-realm= REALM_NAME", "radosgw-admin realm default --rgw-realm=test_realm1", "radosgw-admin realm default --rgw-realm=test_realm", "radosgw-admin realm delete --rgw-realm= REALM_NAME", "radosgw-admin realm delete --rgw-realm=test_realm", "radosgw-admin realm get --rgw-realm= REALM_NAME", "radosgw-admin realm get --rgw-realm=test_realm >filename.json", "{ \"id\": \"0a68d52e-a19c-4e8e-b012-a8f831cb3ebc\", \"name\": \"test_realm\", \"current_period\": \"b0c5bbef-4337-4edd-8184-5aeab2ec413b\", \"epoch\": 1 }", "radosgw-admin realm set --rgw-realm= REALM_NAME --infile= IN_FILENAME", "radosgw-admin realm set --rgw-realm=test_realm --infile=filename.json", "radosgw-admin realm list", "radosgw-admin realm list-periods", "radosgw-admin realm pull --url= URL_TO_MASTER_ZONE_GATEWAY --access-key= ACCESS_KEY --secret= SECRET_KEY", "radosgw-admin realm rename --rgw-realm= REALM_NAME --realm-new-name= NEW_REALM_NAME", "radosgw-admin realm rename --rgw-realm=test_realm --realm-new-name=test_realm2", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "radosgw-admin zonegroup create --rgw-zonegroup= ZONE_GROUP_NAME [--rgw-realm= REALM_NAME ] [--master]", "radosgw-admin zonegroup create --rgw-zonegroup=zonegroup1 --rgw-realm=test_realm --default", "zonegroup modify --rgw-zonegroup= ZONE_GROUP_NAME", "radosgw-admin zonegroup modify --rgw-zonegroup=zonegroup1", "radosgw-admin zonegroup default --rgw-zonegroup= ZONE_GROUP_NAME", "radosgw-admin zonegroup default --rgw-zonegroup=zonegroup2", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "radosgw-admin zonegroup default --rgw-zonegroup=us", "radosgw-admin period update --commit", "radosgw-admin zonegroup add --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME", "radosgw-admin period update --commit", "radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME", "radosgw-admin period update --commit", "radosgw-admin zonegroup rename --rgw-zonegroup= ZONE_GROUP_NAME --zonegroup-new-name= NEW_ZONE_GROUP_NAME", "radosgw-admin period update --commit", "radosgw-admin zonegroup delete --rgw-zonegroup= ZONE_GROUP_NAME", "radosgw-admin period update --commit", "radosgw-admin zonegroup list", "{ \"default_info\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"zonegroups\": [ \"us\" ] }", "radosgw-admin zonegroup get [--rgw-zonegroup= ZONE_GROUP_NAME ]", "radosgw-admin zonegroup set --infile zonegroup.json", "radosgw-admin period update --commit", "{ \"zonegroups\": [ { \"key\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"val\": { \"id\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"name\": \"us\", \"api_name\": \"us\", \"is_master\": \"true\", \"endpoints\": [ \"http:\\/\\/rgw1:80\" ], \"hostnames\": [], \"hostnames_s3website\": [], \"master_zone\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"zones\": [ { \"id\": \"9248cab2-afe7-43d8-a661-a40bf316665e\", \"name\": \"us-east\", \"endpoints\": [ \"http:\\/\\/rgw1\" ], \"log_meta\": \"true\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" }, { \"id\": \"d1024e59-7d28-49d1-8222-af101965a939\", \"name\": \"us-west\", \"endpoints\": [ \"http:\\/\\/rgw2:80\" ], \"log_meta\": \"false\", \"log_data\": \"true\", \"bucket_index_max_shards\": 11, \"read_only\": \"false\" } ], \"placement_targets\": [ { \"name\": \"default-placement\", \"tags\": [] } ], \"default_placement\": \"default-placement\", \"realm_id\": \"ae031368-8715-4e27-9a99-0c9468852cfe\" } } 
], \"master_zonegroup\": \"90b28698-e7c3-462c-a42d-4aa780d24eda\", \"bucket_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 }, \"user_quota\": { \"enabled\": false, \"max_size_kb\": -1, \"max_objects\": -1 } }", "radosgw-admin zonegroup-map set --infile zonegroupmap.json", "radosgw-admin period update --commit", "radosgw-admin zone create --rgw-zone= ZONE_NAME [--zonegroup= ZONE_GROUP_NAME ] [--endpoints= ENDPOINT_PORT [,<endpoint:port>] [--master] [--default] --access-key ACCESS_KEY --secret SECRET_KEY", "radosgw-admin period update --commit", "radosgw-admin period update --commit", "radosgw-admin zonegroup remove --rgw-zonegroup= ZONE_GROUP_NAME --rgw-zone= ZONE_NAME", "radosgw-admin period update --commit", "radosgw-admin zone delete --rgw-zone= ZONE_NAME", "radosgw-admin period update --commit", "ceph osd pool delete DELETED_ZONE_NAME .rgw.control DELETED_ZONE_NAME .rgw.control --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.data.root DELETED_ZONE_NAME .rgw.data.root --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.log DELETED_ZONE_NAME .rgw.log --yes-i-really-really-mean-it ceph osd pool delete DELETED_ZONE_NAME .rgw.users.uid DELETED_ZONE_NAME .rgw.users.uid --yes-i-really-really-mean-it", "radosgw-admin zone modify [options] --access-key=<key> --secret/--secret-key=<key> --master --default --endpoints=<list>", "radosgw-admin period update --commit", "radosgw-admin zone list", "radosgw-admin zone get [--rgw-zone= ZONE_NAME ]", "radosgw-admin zone set --rgw-zone=test-zone --infile zone.json", "radosgw-admin period update --commit", "radosgw-admin zone rename --rgw-zone= ZONE_NAME --zone-new-name= NEW_ZONE_NAME", "radosgw-admin period update --commit" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/object_gateway_guide/multisite-configuration-and-administration
Chapter 13. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
Chapter 13. Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates In OpenShift Container Platform version 4.13, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide. One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company's policies. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 13.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 13.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 13.3. 
Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 13.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 13.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 13.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 13.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . 
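Before you commit to a specific instance type in your CloudFormation parameters, you can check whether it satisfies the vCPU and memory minimums listed above by querying its specifications with the AWS CLI. The following is a minimal sketch rather than part of the official procedure; m6i.xlarge is only an example value, and the command assumes that the AWS CLI is already configured for your target region:

# Report vCPU and memory for a candidate worker instance type
aws ec2 describe-instance-types \
    --instance-types m6i.xlarge \
    --query 'InstanceTypes[0].{vCPUs:VCpuInfo.DefaultVCpus,MemoryMiB:MemoryInfo.SizeInMiB,Hypervisor:Hypervisor}' \
    --output table

Compare the reported DefaultVCpus and SizeInMiB values against the minimum resource requirements table. Keep in mind that when simultaneous multithreading is enabled, the vCPU count follows the (threads per core x cores) x sockets formula described above.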
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 13.3.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 13.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 13.3.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 13.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 13.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 13.4. Required AWS infrastructure components To install OpenShift Container Platform on user-provisioned infrastructure in Amazon Web Services (AWS), you must manually create both the machines and their supporting infrastructure. For more information about the integration testing for different platforms, see the OpenShift Container Platform 4.x Tested Integrations page. By using the provided CloudFormation templates, you can create stacks of AWS resources that represent the following components: An AWS Virtual Private Cloud (VPC) Networking and load balancing components Security groups and roles An OpenShift Container Platform bootstrap node OpenShift Container Platform control plane nodes An OpenShift Container Platform compute node Alternatively, you can manually create the components or you can reuse existing infrastructure that meets the cluster requirements. Review the CloudFormation templates for more details about how the components interrelate. 13.4.1. Other infrastructure components A VPC DNS entries Load balancers (classic or network) and listeners A public and a private Route 53 zone Security groups IAM roles S3 buckets If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. 
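The infrastructure components listed above are typically created as a series of CloudFormation stacks before the cluster machines themselves are launched. The following is a minimal sketch of launching and inspecting a single stack with the AWS CLI; it is not a substitute for the full procedure, and the template file name, stack name, and parameter key shown here are placeholders that depend on the template you actually use:

# Launch one infrastructure stack from a local template file
aws cloudformation create-stack \
    --stack-name <cluster_name>-vpc \
    --template-body file://<vpc_template>.yaml \
    --parameters ParameterKey=AvailabilityZoneCount,ParameterValue=3

# Wait for creation to finish, then record the outputs for use by later stacks
aws cloudformation wait stack-create-complete --stack-name <cluster_name>-vpc
aws cloudformation describe-stacks \
    --stack-name <cluster_name>-vpc \
    --query 'Stacks[0].Outputs'

Stacks that create IAM resources additionally require the --capabilities CAPABILITY_NAMED_IAM flag on the create-stack call. The outputs of each stack, such as VPC and subnet IDs, are passed as parameters to the stacks that follow.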
Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. Required DNS and load balancing components Your DNS and load balancer configuration needs to use a public hosted zone and can use a private hosted zone similar to the one that the installation program uses if it provisions the cluster's infrastructure. You must create a DNS entry that resolves to your load balancer. An entry for api.<cluster_name>.<domain> must point to the external load balancer, and an entry for api-int.<cluster_name>.<domain> must point to the internal load balancer. 
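If you are reusing existing Route 53 hosted zones instead of the DNS resources that the provided CloudFormation templates create, the api and api-int entries described above can be added with the AWS CLI. The following is a minimal sketch only; the zone ID, cluster name, domain, and load balancer DNS name are placeholders, and the provided templates create alias records rather than the simple CNAME shown here:

# Point api.<cluster_name>.<domain> at the external load balancer in the public zone
cat > api-record.json <<'EOF'
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.<cluster_name>.<domain>",
        "Type": "CNAME",
        "TTL": 60,
        "ResourceRecords": [ { "Value": "<external_lb_dns_name>" } ]
      }
    }
  ]
}
EOF
aws route53 change-resource-record-sets \
    --hosted-zone-id <public_zone_id> \
    --change-batch file://api-record.json

Create the api-int.<cluster_name>.<domain> entry the same way in the private hosted zone, pointing it at the internal load balancer instead.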
The cluster also requires load balancers and listeners for port 6443, which are required for the Kubernetes API and its extensions, and port 22623, which are required for the Ignition config files for new machines. The targets will be the control plane nodes. Port 6443 must be accessible to both clients external to the cluster and nodes within the cluster. Port 22623 must be accessible to nodes within the cluster. Component AWS type Description DNS AWS::Route53::HostedZone The hosted zone for your internal DNS. Public load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your public subnets. External API server record AWS::Route53::RecordSetGroup Alias records for the external API server. External listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the external load balancer. External target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the external load balancer. Private load balancer AWS::ElasticLoadBalancingV2::LoadBalancer The load balancer for your private subnets. Internal API server record AWS::Route53::RecordSetGroup Alias records for the internal API server. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 22623 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Internal listener AWS::ElasticLoadBalancingV2::Listener A listener on port 6443 for the internal load balancer. Internal target group AWS::ElasticLoadBalancingV2::TargetGroup The target group for the internal load balancer. Security groups The control plane and worker machines require access to the following ports: Group Type IP Protocol Port range MasterSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 tcp 6443 tcp 22623 WorkerSecurityGroup AWS::EC2::SecurityGroup icmp 0 tcp 22 BootstrapSecurityGroup AWS::EC2::SecurityGroup tcp 22 tcp 19531 Control plane Ingress The control plane machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. 
Ingress group Description IP protocol Port range MasterIngressEtcd etcd tcp 2379 - 2380 MasterIngressVxlan Vxlan packets udp 4789 MasterIngressWorkerVxlan Vxlan packets udp 4789 MasterIngressInternal Internal cluster communication and Kubernetes proxy metrics tcp 9000 - 9999 MasterIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 MasterIngressKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressWorkerKube Kubernetes kubelet, scheduler and controller manager tcp 10250 - 10259 MasterIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 MasterIngressGeneve Geneve packets udp 6081 MasterIngressWorkerGeneve Geneve packets udp 6081 MasterIngressIpsecIke IPsec IKE packets udp 500 MasterIngressWorkerIpsecIke IPsec IKE packets udp 500 MasterIngressIpsecNat IPsec NAT-T packets udp 4500 MasterIngressWorkerIpsecNat IPsec NAT-T packets udp 4500 MasterIngressIpsecEsp IPsec ESP packets 50 All MasterIngressWorkerIpsecEsp IPsec ESP packets 50 All MasterIngressInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressWorkerInternalUDP Internal cluster communication udp 9000 - 9999 MasterIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 MasterIngressWorkerIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Worker Ingress The worker machines require the following Ingress groups. Each Ingress group is a AWS::EC2::SecurityGroupIngress resource. Ingress group Description IP protocol Port range WorkerIngressVxlan Vxlan packets udp 4789 WorkerIngressWorkerVxlan Vxlan packets udp 4789 WorkerIngressInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressWorkerInternal Internal cluster communication tcp 9000 - 9999 WorkerIngressKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressWorkerKube Kubernetes kubelet, scheduler, and controller manager tcp 10250 WorkerIngressIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressWorkerIngressServices Kubernetes Ingress services tcp 30000 - 32767 WorkerIngressGeneve Geneve packets udp 6081 WorkerIngressMasterGeneve Geneve packets udp 6081 WorkerIngressIpsecIke IPsec IKE packets udp 500 WorkerIngressMasterIpsecIke IPsec IKE packets udp 500 WorkerIngressIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressMasterIpsecNat IPsec NAT-T packets udp 4500 WorkerIngressIpsecEsp IPsec ESP packets 50 All WorkerIngressMasterIpsecEsp IPsec ESP packets 50 All WorkerIngressInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressMasterInternalUDP Internal cluster communication udp 9000 - 9999 WorkerIngressIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 WorkerIngressMasterIngressServicesUDP Kubernetes Ingress services udp 30000 - 32767 Roles and instance profiles You must grant the machines permissions in AWS. The provided CloudFormation templates grant the machines Allow permissions for the following AWS::IAM::Role objects and provide a AWS::IAM::InstanceProfile for each set of roles. If you do not use the templates, you can grant the machines the following broad permissions or the following individual permissions. Role Effect Action Resource Master Allow ec2:* * Allow elasticloadbalancing:* * Allow iam:PassRole * Allow s3:GetObject * Worker Allow ec2:Describe* * Bootstrap Allow ec2:Describe* * Allow ec2:AttachVolume * Allow ec2:DetachVolume * 13.4.2. 
Cluster machines You need AWS::EC2::Instance objects for the following machines: A bootstrap machine. This machine is required during installation, but you can remove it after your cluster deploys. Three control plane machines. The control plane machines are not governed by a control plane machine set. Compute machines. You must create at least two compute machines, which are also known as worker machines, during installation. These machines are not governed by a compute machine set. 13.4.3. Required AWS permissions for the IAM user Note Your IAM user must have the permission tag:GetResources in the region us-east-1 to delete the base cluster resources. As part of the AWS API requirement, the OpenShift Container Platform installation program performs various actions in this region. When you attach the AdministratorAccess policy to the IAM user that you create in Amazon Web Services (AWS), you grant that user all of the required permissions. To deploy all components of an OpenShift Container Platform cluster, the IAM user requires the following permissions: Example 13.3. Required EC2 permissions for installation ec2:AuthorizeSecurityGroupEgress ec2:AuthorizeSecurityGroupIngress ec2:CopyImage ec2:CreateNetworkInterface ec2:AttachNetworkInterface ec2:CreateSecurityGroup ec2:CreateTags ec2:CreateVolume ec2:DeleteSecurityGroup ec2:DeleteSnapshot ec2:DeleteTags ec2:DeregisterImage ec2:DescribeAccountAttributes ec2:DescribeAddresses ec2:DescribeAvailabilityZones ec2:DescribeDhcpOptions ec2:DescribeImages ec2:DescribeInstanceAttribute ec2:DescribeInstanceCreditSpecifications ec2:DescribeInstances ec2:DescribeInstanceTypes ec2:DescribeInternetGateways ec2:DescribeKeyPairs ec2:DescribeNatGateways ec2:DescribeNetworkAcls ec2:DescribeNetworkInterfaces ec2:DescribePrefixLists ec2:DescribeRegions ec2:DescribeRouteTables ec2:DescribeSecurityGroups ec2:DescribeSubnets ec2:DescribeTags ec2:DescribeVolumes ec2:DescribeVpcAttribute ec2:DescribeVpcClassicLink ec2:DescribeVpcClassicLinkDnsSupport ec2:DescribeVpcEndpoints ec2:DescribeVpcs ec2:GetEbsDefaultKmsKeyId ec2:ModifyInstanceAttribute ec2:ModifyNetworkInterfaceAttribute ec2:RevokeSecurityGroupEgress ec2:RevokeSecurityGroupIngress ec2:RunInstances ec2:TerminateInstances Example 13.4. Required permissions for creating network resources during installation ec2:AllocateAddress ec2:AssociateAddress ec2:AssociateDhcpOptions ec2:AssociateRouteTable ec2:AttachInternetGateway ec2:CreateDhcpOptions ec2:CreateInternetGateway ec2:CreateNatGateway ec2:CreateRoute ec2:CreateRouteTable ec2:CreateSubnet ec2:CreateVpc ec2:CreateVpcEndpoint ec2:ModifySubnetAttribute ec2:ModifyVpcAttribute Note If you use an existing VPC, your account does not require these permissions for creating network resources. Example 13.5. 
Required Elastic Load Balancing permissions (ELB) for installation elasticloadbalancing:AddTags elasticloadbalancing:ApplySecurityGroupsToLoadBalancer elasticloadbalancing:AttachLoadBalancerToSubnets elasticloadbalancing:ConfigureHealthCheck elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateLoadBalancerListeners elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterInstancesFromLoadBalancer elasticloadbalancing:DescribeInstanceHealth elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTags elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:RegisterInstancesWithLoadBalancer elasticloadbalancing:SetLoadBalancerPoliciesOfListener Example 13.6. Required Elastic Load Balancing permissions (ELBv2) for installation elasticloadbalancing:AddTags elasticloadbalancing:CreateListener elasticloadbalancing:CreateLoadBalancer elasticloadbalancing:CreateTargetGroup elasticloadbalancing:DeleteLoadBalancer elasticloadbalancing:DeregisterTargets elasticloadbalancing:DescribeListeners elasticloadbalancing:DescribeLoadBalancerAttributes elasticloadbalancing:DescribeLoadBalancers elasticloadbalancing:DescribeTargetGroupAttributes elasticloadbalancing:DescribeTargetHealth elasticloadbalancing:ModifyLoadBalancerAttributes elasticloadbalancing:ModifyTargetGroup elasticloadbalancing:ModifyTargetGroupAttributes elasticloadbalancing:RegisterTargets Example 13.7. Required IAM permissions for installation iam:AddRoleToInstanceProfile iam:CreateInstanceProfile iam:CreateRole iam:DeleteInstanceProfile iam:DeleteRole iam:DeleteRolePolicy iam:GetInstanceProfile iam:GetRole iam:GetRolePolicy iam:GetUser iam:ListInstanceProfilesForRole iam:ListRoles iam:ListUsers iam:PassRole iam:PutRolePolicy iam:RemoveRoleFromInstanceProfile iam:SimulatePrincipalPolicy iam:TagRole Note If you have not created a load balancer in your AWS account, the IAM user also requires the iam:CreateServiceLinkedRole permission. Example 13.8. Required Route 53 permissions for installation route53:ChangeResourceRecordSets route53:ChangeTagsForResource route53:CreateHostedZone route53:DeleteHostedZone route53:GetChange route53:GetHostedZone route53:ListHostedZones route53:ListHostedZonesByName route53:ListResourceRecordSets route53:ListTagsForResource route53:UpdateHostedZoneComment Example 13.9. Required S3 permissions for installation s3:CreateBucket s3:DeleteBucket s3:GetAccelerateConfiguration s3:GetBucketAcl s3:GetBucketCors s3:GetBucketLocation s3:GetBucketLogging s3:GetBucketPolicy s3:GetBucketObjectLockConfiguration s3:GetBucketReplication s3:GetBucketRequestPayment s3:GetBucketTagging s3:GetBucketVersioning s3:GetBucketWebsite s3:GetEncryptionConfiguration s3:GetLifecycleConfiguration s3:GetReplicationConfiguration s3:ListBucket s3:PutBucketAcl s3:PutBucketTagging s3:PutEncryptionConfiguration Example 13.10. S3 permissions that cluster Operators require s3:DeleteObject s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:GetObjectVersion s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Example 13.11. 
Required permissions to delete base cluster resources autoscaling:DescribeAutoScalingGroups ec2:DeletePlacementGroup ec2:DeleteNetworkInterface ec2:DeleteVolume elasticloadbalancing:DeleteTargetGroup elasticloadbalancing:DescribeTargetGroups iam:DeleteAccessKey iam:DeleteUser iam:ListAttachedRolePolicies iam:ListInstanceProfiles iam:ListRolePolicies iam:ListUserPolicies s3:DeleteObject s3:ListBucketVersions tag:GetResources Example 13.12. Required permissions to delete network resources ec2:DeleteDhcpOptions ec2:DeleteInternetGateway ec2:DeleteNatGateway ec2:DeleteRoute ec2:DeleteRouteTable ec2:DeleteSubnet ec2:DeleteVpc ec2:DeleteVpcEndpoints ec2:DetachInternetGateway ec2:DisassociateRouteTable ec2:ReleaseAddress ec2:ReplaceRouteTableAssociation Note If you use an existing VPC, your account does not require these permissions to delete network resources. Instead, your account only requires the tag:UntagResources permission to delete network resources. Example 13.13. Required permissions to delete a cluster with shared instance roles iam:UntagRole Example 13.14. Additional IAM and S3 permissions that are required to create manifests iam:DeleteAccessKey iam:DeleteUser iam:DeleteUserPolicy iam:GetUserPolicy iam:ListAccessKeys iam:PutUserPolicy iam:TagUser s3:PutBucketPublicAccessBlock s3:GetBucketPublicAccessBlock s3:PutLifecycleConfiguration s3:ListBucket s3:ListBucketMultipartUploads s3:AbortMultipartUpload Note If you are managing your cloud provider credentials with mint mode, the IAM user also requires the iam:CreateAccessKey and iam:CreateUser permissions. Example 13.15. Optional permissions for instance and quota checks for installation ec2:DescribeInstanceTypeOfferings servicequotas:ListAWSDefaultServiceQuotas 13.5. Obtaining an AWS Marketplace image If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy worker nodes. Prerequisites You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster. Procedure Complete the OpenShift Container Platform subscription from the AWS Marketplace . Record the AMI ID for your specific region. If you use the CloudFormation template to deploy your worker nodes, you must update the worker0.type.properties.ImageID parameter with this value. 13.6. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. 
Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 13.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 
If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 13.8. Creating the installation files for AWS To install OpenShift Container Platform on Amazon Web Services (AWS) using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation. 13.8.1. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. 
This example places the /var directory on a separate partition: variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 13.8.2. Creating the installation configuration file Generate and customize the installation configuration file that the installation program needs to deploy your cluster. Prerequisites You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. You checked that you are deploying your cluster to a region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to a region that requires a custom AMI, such as an AWS GovCloud region, you must create the install-config.yaml file manually. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important Specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select aws as the platform to target. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Note The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on AWS". Optional: Back up the install-config.yaml file. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration. 13.8.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 13.8.4. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 
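Note Before you use the generated Ignition config files later in this procedure, you can check whether they are still inside the recommended 12-hour window. The following command is a minimal sketch, not part of the installation program; it assumes the files are in <installation_directory> and uses a 720-minute (12-hour) threshold: USD find <installation_directory> -maxdepth 1 -name '*.ign' -mmin +720 If the command prints any file names, those Ignition config files are older than 12 hours and you should regenerate them before using them. 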
Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the control plane machine set: USD rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. Optional: If you manually created a cloud identity and access management (IAM) role, locate any CredentialsRequest objects with the TechPreviewNoUpgrade annotation in the release image by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name> Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-set: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. 
If you are using any of these features, you must create secrets for the corresponding objects. Delete all CredentialsRequest objects that have the TechPreviewNoUpgrade annotation. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 13.9. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services (AWS). The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 13.10. Creating a VPC in AWS You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "VpcCidr", 1 "ParameterValue": "10.0.0.0/16" 2 }, { "ParameterKey": "AvailabilityZoneCount", 3 "ParameterValue": "1" 4 }, { "ParameterKey": "SubnetBits", 5 "ParameterValue": "12" 6 } ] 1 The CIDR block for the VPC. 2 Specify a CIDR block in the format x.x.x.x/16-24 . 3 The number of availability zones to deploy the VPC in. 4 Specify an integer between 1 and 3 . 5 The size of each subnet in each availability zone. 6 Specify an integer between 5 and 13 , where 5 is /27 and 13 is /19 . Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-vpc . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: VpcId The ID of your VPC. PublicSubnetIds The IDs of the new public subnets. PrivateSubnetIds The IDs of the new private subnets. 13.10.1. CloudFormation template for the VPC You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster. Example 13.16. CloudFormation template for the VPC AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" MinValue: 1 MaxValue: 3 Default: 1 Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: "Size of each subnet to create within the availability zones. 
(Min: 5 = /27, Max: 13 = /19)" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Network Configuration" Parameters: - VpcCidr - SubnetBits - Label: default: "Availability Zones" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: "Availability Zone Count" VpcCidr: default: "VPC CIDR" SubnetBits: default: "Bits Per Subnet" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: "AWS::EC2::VPC" Properties: EnableDnsSupport: "true" EnableDnsHostnames: "true" CidrBlock: !Ref VpcCidr PublicSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PublicSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" InternetGateway: Type: "AWS::EC2::InternetGateway" GatewayToInternet: Type: "AWS::EC2::VPCGatewayAttachment" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PublicRoute: Type: "AWS::EC2::Route" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: "AWS::EC2::Subnet" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable: Type: "AWS::EC2::RouteTable" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: "AWS::EC2::SubnetRouteTableAssociation" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Properties: AllocationId: "Fn::GetAtt": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: "AWS::EC2::EIP" Properties: Domain: vpc Route: Type: "AWS::EC2::Route" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: "AWS::EC2::Subnet" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable2: Type: "AWS::EC2::RouteTable" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: 
DoAz2 Properties: AllocationId: "Fn::GetAtt": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: "AWS::EC2::EIP" Condition: DoAz2 Properties: Domain: vpc Route2: Type: "AWS::EC2::Route" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: "AWS::EC2::Subnet" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref "AWS::Region" PrivateRouteTable3: Type: "AWS::EC2::RouteTable" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: "AWS::EC2::SubnetRouteTableAssociation" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: "AWS::EC2::NatGateway" Condition: DoAz3 Properties: AllocationId: "Fn::GetAtt": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: "AWS::EC2::EIP" Condition: DoAz3 Properties: Domain: vpc Route3: Type: "AWS::EC2::Route" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref "AWS::NoValue"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref "AWS::NoValue"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. Value: !Join [ ",", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PublicSubnet3, !Ref "AWS::NoValue"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ ",", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref "AWS::NoValue"], !If [DoAz3, !Ref PrivateSubnet3, !Ref "AWS::NoValue"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 13.11. Creating networking and load balancing components in AWS You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags. You can run the template multiple times within a single Virtual Private Cloud (VPC). Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
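Before you begin the procedure, it can help to have the output values of the VPC stack at hand, because the VpcId , PublicSubnetIds , and PrivateSubnetIds outputs are used as parameter values below. The following command is a sketch that assumes you named the VPC stack cluster-vpc ; substitute your own stack name: USD aws cloudformation describe-stacks --stack-name cluster-vpc --query 'Stacks[0].Outputs' --output table 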
Procedure Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command: USD aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1 1 For the <route53_domain> , specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Example output mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10 In the example output, the hosted zone ID is Z21IXYZABCZ2A4 . Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "ClusterName", 1 "ParameterValue": "mycluster" 2 }, { "ParameterKey": "InfrastructureName", 3 "ParameterValue": "mycluster-<random_string>" 4 }, { "ParameterKey": "HostedZoneId", 5 "ParameterValue": "<random_string>" 6 }, { "ParameterKey": "HostedZoneName", 7 "ParameterValue": "example.com" 8 }, { "ParameterKey": "PublicSubnets", 9 "ParameterValue": "subnet-<random_string>" 10 }, { "ParameterKey": "PrivateSubnets", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "VpcId", 13 "ParameterValue": "vpc-<random_string>" 14 } ] 1 A short, representative cluster name to use for hostnames, etc. 2 Specify the cluster name that you used when you generated the install-config.yaml file for the cluster. 3 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 4 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 5 The Route 53 public zone ID to register the targets with. 6 Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4 . You can obtain this value from the AWS console. 7 The Route 53 zone to register the targets with. 8 Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 9 The public subnets that you created for your VPC. 10 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 11 The private subnets that you created for your VPC. 12 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 13 The VPC that you created for the cluster. 14 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires. Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-dns . 
You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: PrivateHostedZoneId Hosted zone ID for the private DNS. ExternalApiLoadBalancerName Full name of the external API load balancer. InternalApiLoadBalancerName Full name of the internal API load balancer. ApiServerDnsName Full hostname of the API server. RegisterNlbIpTargetsLambda Lambda ARN useful to help register/deregister IP targets for these load balancers. ExternalApiTargetGroupArn ARN of external API target group. InternalApiTargetGroupArn ARN of internal API target group. InternalServiceTargetGroupArn ARN of internal service target group. 13.11.1. CloudFormation template for the network and load balancers You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster. Example 13.17. CloudFormation template for the network and load balancers AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: "example.com" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - ClusterName - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: "DNS" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: "Cluster Name" InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" PublicSubnets: default: "Public Subnets" PrivateSubnets: default: "Private Subnets" HostedZoneName: default: "Public Hosted Zone Name" HostedZoneId: default: "Public Hosted Zone ID" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "ext"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join ["-", [!Ref InfrastructureName, "int"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: "AWS::Route53::HostedZone" Properties: HostedZoneConfig: Comment: "Managed by CloudFormation" Name: !Join [".", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join ["-", [!Ref InfrastructureName, "int"]] - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "owned" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref "AWS::Region" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ ".", ["api", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ ".", ["api-int", !Ref ClusterName, !Join ["", [!Ref HostedZoneName, "."]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/readyz" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: 
Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: "/healthz" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "nlb", "lambda", "role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalApiTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref InternalServiceTargetGroup - Effect: "Allow" Action: [ "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:DeregisterTargets", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterTargetLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: "python3.8" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join ["-", [!Ref InfrastructureName, "subnet-tags-lambda-role"]] AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "lambda.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "subnet-tagging-policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: [ "ec2:DeleteTags", "ec2:CreateTags" ] Resource: "arn:aws:ec2:*:*:subnet/*" - Effect: "Allow" Action: [ "ec2:DescribeSubnets", "ec2:DescribeTags" ] Resource: "*" RegisterSubnetTags: Type: "AWS::Lambda::Function" Properties: Handler: "index.handler" Role: Fn::GetAtt: - "RegisterSubnetTagsLambdaIamRole" - "Arn" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], 
Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: "python3.8" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [".", ["api-int", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup Important If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example: Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . You can view details about your hosted zones by navigating to the AWS Route 53 console . See Listing public hosted zones in the AWS documentation for more information about listing public hosted zones. 13.12. Creating security group and roles in AWS You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires. Note If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. 
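As with the networking and load balancing stack, several of the parameter values in the following procedure come from the outputs of the VPC stack. The following command is a sketch, assuming the VPC stack is named cluster-vpc and using an illustrative shell variable, that extracts a single output value for reuse in the parameters JSON file: USD VPC_ID=USD(aws cloudformation describe-stacks --stack-name cluster-vpc --query "Stacks[0].Outputs[?OutputKey=='VpcId'].OutputValue" --output text) You can echo USDVPC_ID and paste the value into the VpcId parameter, or query the other output keys, such as PrivateSubnetIds , in the same way. 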
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "VpcCidr", 3 "ParameterValue": "10.0.0.0/16" 4 }, { "ParameterKey": "PrivateSubnets", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "VpcId", 7 "ParameterValue": "vpc-<random_string>" 8 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 The CIDR block for the VPC. 4 Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24 . 5 The private subnets that you created for your VPC. 6 Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC. 7 The VPC that you created for the cluster. 8 Specify the VpcId value from the output of the CloudFormation template for the VPC. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-sec . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: MasterSecurityGroupId Master Security Group ID WorkerSecurityGroupId Worker Security Group ID MasterInstanceProfile Master IAM Instance Profile WorkerInstanceProfile Worker IAM Instance Profile 13.12.1. CloudFormation template for security objects You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster. Example 13.18. CloudFormation template for security objects AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. 
Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Network Configuration" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" VpcCidr: default: "VPC CIDR" PrivateSubnets: default: "Private Subnets" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: 
Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "master", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:AttachVolume" - "ec2:AuthorizeSecurityGroupIngress" - "ec2:CreateSecurityGroup" - "ec2:CreateTags" - "ec2:CreateVolume" - "ec2:DeleteSecurityGroup" - "ec2:DeleteVolume" - "ec2:Describe*" - "ec2:DetachVolume" - "ec2:ModifyInstanceAttribute" - "ec2:ModifyVolume" - "ec2:RevokeSecurityGroupIngress" - "elasticloadbalancing:AddTags" - "elasticloadbalancing:AttachLoadBalancerToSubnets" - "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer" - "elasticloadbalancing:CreateListener" - "elasticloadbalancing:CreateLoadBalancer" - "elasticloadbalancing:CreateLoadBalancerPolicy" - "elasticloadbalancing:CreateLoadBalancerListeners" - "elasticloadbalancing:CreateTargetGroup" - "elasticloadbalancing:ConfigureHealthCheck" - "elasticloadbalancing:DeleteListener" - "elasticloadbalancing:DeleteLoadBalancer" - "elasticloadbalancing:DeleteLoadBalancerListeners" - "elasticloadbalancing:DeleteTargetGroup" - "elasticloadbalancing:DeregisterInstancesFromLoadBalancer" - "elasticloadbalancing:DeregisterTargets" - "elasticloadbalancing:Describe*" - "elasticloadbalancing:DetachLoadBalancerFromSubnets" - "elasticloadbalancing:ModifyListener" - "elasticloadbalancing:ModifyLoadBalancerAttributes" - "elasticloadbalancing:ModifyTargetGroup" - "elasticloadbalancing:ModifyTargetGroupAttributes" - "elasticloadbalancing:RegisterInstancesWithLoadBalancer" - "elasticloadbalancing:RegisterTargets" - "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer" - "elasticloadbalancing:SetLoadBalancerPoliciesOfListener" - "kms:DescribeKey" Resource: "*" MasterInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "MasterIamRole" 
WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "worker", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: - "ec2:DescribeInstances" - "ec2:DescribeRegions" Resource: "*" WorkerInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Roles: - Ref: "WorkerIamRole" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 13.13. Accessing RHCOS AMIs with stream metadata In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation. You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format. For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI. Procedure To parse the stream metadata, use one of the following methods: From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go . You can also view example code in the library. From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language. From a command-line utility that handles JSON data, such as jq : Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1 : For x86_64 USD openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image' Example output ami-0d3e625f84626bbda For aarch64 USD openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image' Example output ami-0af1d3b7fa5be2131 The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster. 13.14. RHCOS AMIs for the AWS infrastructure Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes. Note By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI. Table 13.3. 
x86_64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-052b3e6b060b5595d ap-east-1 ami-09c502968481ee218 ap-northeast-1 ami-06b1dbe049e3c1d23 ap-northeast-2 ami-08add6eb5aa1c8639 ap-northeast-3 ami-0af4dfc64506fe20e ap-south-1 ami-09b1532dd3d63fdc0 ap-south-2 ami-0a915cedf8558e600 ap-southeast-1 ami-0c914fd7a50130c9e ap-southeast-2 ami-04b54199f4be0ec9d ap-southeast-3 ami-0be3ee78b9a3fdf07 ap-southeast-4 ami-00a44d7d5054bb5f8 ca-central-1 ami-0bb1fd49820ea09ae eu-central-1 ami-03d9cb166a11c9b8a eu-central-2 ami-089865c640f876630 eu-north-1 ami-0e94d896e72eeae0d eu-south-1 ami-04df4e2850dce0721 eu-south-2 ami-0d80de3a5ba722545 eu-west-1 ami-066f2d86026ef97a8 eu-west-2 ami-0f1c0b26b1c99499d eu-west-3 ami-0f639505a9c74d9a2 me-central-1 ami-0fbb2ece8478f1402 me-south-1 ami-01507551558853852 sa-east-1 ami-097132aa0da53c981 us-east-1 ami-0624891c612b5eaa0 us-east-2 ami-0dc6c4d1bd5161f13 us-gov-east-1 ami-0bab20368b3b9b861 us-gov-west-1 ami-0fe8299f8e808e720 us-west-1 ami-0c03b7e5954f10f9b us-west-2 ami-0f4cdfd74e4a3fc29 Table 13.4. aarch64 RHCOS AMIs AWS zone AWS AMI af-south-1 ami-0d684ca7c09e6f5fc ap-east-1 ami-01b0e1c24d180fe5d ap-northeast-1 ami-06439c626e2663888 ap-northeast-2 ami-0a19d3bed3a2854e3 ap-northeast-3 ami-08b8fa76fd46b5c58 ap-south-1 ami-0ec6463b788929a6a ap-south-2 ami-0f5077b6d7e1b10a5 ap-southeast-1 ami-081a6c6a24e2ee453 ap-southeast-2 ami-0a70049ac02157a02 ap-southeast-3 ami-065fd6311a9d7e6a6 ap-southeast-4 ami-0105993dc2508c4f4 ca-central-1 ami-04582d73d5aad9a85 eu-central-1 ami-0f72c8b59213f628e eu-central-2 ami-0647f43516c31119c eu-north-1 ami-0d155ca6a531f5f72 eu-south-1 ami-02f8d2794a663dbd0 eu-south-2 ami-0427659985f520cae eu-west-1 ami-04e9944a8f9761c3e eu-west-2 ami-09c701f11d9a7b167 eu-west-3 ami-02cd8181243610e0d me-central-1 ami-03008d03f133e6ec0 me-south-1 ami-096bc3b4ec0faad76 sa-east-1 ami-01f9b5a4f7b8c50a1 us-east-1 ami-09ea6f8f7845792e1 us-east-2 ami-039cdb2bf3b5178da us-gov-east-1 ami-0fed54a5ab75baed0 us-gov-west-1 ami-0fc5be5af4bb1d79f us-west-1 ami-018e5407337da1062 us-west-2 ami-0c0c67ef81b80e8eb 13.14.1. AWS regions without a published RHCOS AMI You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster. If you are deploying to a region not supported by the AWS SDK and you do not specify a custom AMI, the installation program copies the us-east-1 AMI to the user account automatically. Then the installation program creates the control plane machines with encrypted EBS volumes using the default or user-specified Key Management Service (KMS) key. This allows the AMI to follow the same process workflow as published RHCOS AMIs. A region without native support for an RHCOS AMI is not available to select from the terminal during cluster creation because it is not published. However, you can install to this region by configuring the custom AMI in the install-config.yaml file. 13.14.2. Uploading a custom RHCOS AMI in AWS If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region. Prerequisites You configured an AWS account. You created an Amazon S3 bucket with the required IAM service role . You uploaded your RHCOS VMDK file to Amazon S3. 
The RHCOS VMDK file must be the highest version that is less than or equal to the OpenShift Container Platform version you are installing. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer . Procedure Export your AWS profile as an environment variable: USD export AWS_PROFILE=<aws_profile> 1 Export the region to associate with your custom AMI as an environment variable: USD export AWS_DEFAULT_REGION=<aws_region> 1 Export the version of RHCOS you uploaded to Amazon S3 as an environment variable: USD export RHCOS_VERSION=<version> 1 1 1 1 The RHCOS VMDK version, like 4.13.0 . Export the Amazon S3 bucket name as an environment variable: USD export VMIMPORT_BUCKET_NAME=<s3_bucket_name> Create the containers.json file and define your RHCOS VMDK file: USD cat <<EOF > containers.json { "Description": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64", "Format": "vmdk", "UserBucket": { "S3Bucket": "USD{VMIMPORT_BUCKET_NAME}", "S3Key": "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk" } } EOF Import the RHCOS disk as an Amazon EBS snapshot: USD aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} \ --description "<description>" \ 1 --disk-container "file://<file_path>/containers.json" 2 1 The description of your RHCOS disk being imported, like rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64 . 2 The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key. Check the status of the image import: USD watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION} Example output { "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] } Copy the SnapshotId to register the image. Create a custom RHCOS AMI from the RHCOS snapshot: USD aws ec2 register-image \ --region USD{AWS_DEFAULT_REGION} \ --architecture x86_64 \ 1 --description "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 2 --ena-support \ --name "rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64" \ 3 --virtualization-type hvm \ --root-device-name '/dev/xvda' \ --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4 1 The RHCOS VMDK architecture type, like x86_64 , aarch64 , s390x , or ppc64le . 2 The Description from the imported snapshot. 3 The name of the RHCOS AMI. 4 The SnapshotID from the imported snapshot. To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs . 13.15. Creating the bootstrap node in AWS You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by: Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates. Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. 
The stack represents the bootstrap node that your OpenShift Container Platform installation requires. Note If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. Procedure Create the bucket by running the following command: USD aws s3 mb s3://<cluster-name>-infra 1 1 <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster. You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are: Deploying to a region that has endpoints that differ from the AWS SDK. Deploying a proxy. Providing your own custom endpoints. Upload the bootstrap.ign Ignition config file to the bucket by running the following command: USD aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that the file uploaded by running the following command: USD aws s3 ls s3://<cluster-name>-infra/ Example output 2019-04-03 16:15:16 314878 bootstrap.ign Note The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach. 
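If one of the conditions listed earlier in this procedure applies and you must serve the bootstrap Ignition config file through a presigned URL instead of the s3:// schema, you can generate the URL with the standard AWS CLI presign command. The following is a minimal sketch, not part of the official procedure; the bucket name matches the bucket created in the preceding step and the one-hour expiry is an illustrative value:
$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600   # prints an HTTPS URL that is valid for one hour
You can then supply the printed HTTPS URL wherever this procedure expects the bootstrap Ignition config location, for example as the BootstrapIgnitionLocation parameter value in the next step, instead of the s3://<cluster-name>-infra/bootstrap.ign form.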
Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AllowedBootstrapSshCidr", 5 "ParameterValue": "0.0.0.0/0" 6 }, { "ParameterKey": "PublicSubnet", 7 "ParameterValue": "subnet-<random_string>" 8 }, { "ParameterKey": "MasterSecurityGroupId", 9 "ParameterValue": "sg-<random_string>" 10 }, { "ParameterKey": "VpcId", 11 "ParameterValue": "vpc-<random_string>" 12 }, { "ParameterKey": "BootstrapIgnitionLocation", 13 "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 14 }, { "ParameterKey": "AutoRegisterELB", 15 "ParameterValue": "yes" 16 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 17 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 18 }, { "ParameterKey": "ExternalApiTargetGroupArn", 19 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 20 }, { "ParameterKey": "InternalApiTargetGroupArn", 21 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 22 }, { "ParameterKey": "InternalServiceTargetGroupArn", 23 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 24 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture. 4 Specify a valid AWS::EC2::Image::Id value. 5 CIDR block to allow SSH access to the bootstrap node. 6 Specify a CIDR block in the format x.x.x.x/16-24 . 7 The public subnet that is associated with your VPC to launch the bootstrap node into. 8 Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC. 9 The master security group ID (for registering temporary rules) 10 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 11 The VPC created resources will belong to. 12 Specify the VpcId value from the output of the CloudFormation template for the VPC. 13 Location to fetch bootstrap Ignition config file from. 14 Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign . 15 Whether or not to register a network load balancer (NLB). 16 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 17 The ARN for NLB IP target registration lambda group. 18 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 19 The ARN for external API load balancer target group. 20 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 21 The ARN for internal API load balancer target group. 22 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. 
Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 23 The ARN for internal service load balancer target group. 24 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires. Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the ignition.config.proxy fields. Additionally, If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4 1 <name> is the name for the CloudFormation stack, such as cluster-bootstrap . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. 4 You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83 Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> After the StackStatus displays CREATE_COMPLETE , the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster: BootstrapInstanceId The bootstrap Instance ID. BootstrapPublicIp The bootstrap node public IP address. BootstrapPrivateIp The bootstrap node private IP address. 13.15.1. CloudFormation template for the bootstrap machine You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster. Example 13.19. CloudFormation template for the bootstrap machine AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. 
Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: "i3.large" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" AllowedBootstrapSshCidr: default: "Allowed SSH Source" PublicSubnet: default: "Public Subnet" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Bootstrap Ignition Source" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Principal: Service: - "ec2.amazonaws.com" Action: - "sts:AssumeRole" Path: "/" Policies: - PolicyName: !Join ["-", [!Ref InfrastructureName, "bootstrap", "policy"]] PolicyDocument: Version: "2012-10-17" Statement: - Effect: "Allow" Action: "ec2:Describe*" Resource: "*" - Effect: "Allow" Action: "ec2:AttachVolume" Resource: "*" - Effect: "Allow" Action: "ec2:DetachVolume" Resource: "*" - Effect: "Allow" Action: "s3:GetObject" Resource: "*" BootstrapInstanceProfile: Type: "AWS::IAM::InstanceProfile" Properties: Path: "/" Roles: - Ref: "BootstrapIamRole" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "true" DeviceIndex: "0" GroupSet: - !Ref "BootstrapSecurityGroup" - !Ref "MasterSecurityGroupId" SubnetId: !Ref "PublicSubnet" UserData: 
Fn::Base64: !Sub - '{"ignition":{"config":{"replace":{"source":"USD{S3Loc}"}},"version":"3.1.0"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . See RHCOS AMIs for the AWS infrastructure for details about the Red Hat Enterprise Linux CoreOS (RHCOS) AMIs for the AWS zones. 13.16. Creating the control plane machines in AWS You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes. Important The CloudFormation template creates a stack that represents three control plane nodes. Note If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. 
Procedure Create a JSON file that contains the parameter values that the template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "AutoRegisterDNS", 5 "ParameterValue": "yes" 6 }, { "ParameterKey": "PrivateHostedZoneId", 7 "ParameterValue": "<random_string>" 8 }, { "ParameterKey": "PrivateHostedZoneName", 9 "ParameterValue": "mycluster.example.com" 10 }, { "ParameterKey": "Master0Subnet", 11 "ParameterValue": "subnet-<random_string>" 12 }, { "ParameterKey": "Master1Subnet", 13 "ParameterValue": "subnet-<random_string>" 14 }, { "ParameterKey": "Master2Subnet", 15 "ParameterValue": "subnet-<random_string>" 16 }, { "ParameterKey": "MasterSecurityGroupId", 17 "ParameterValue": "sg-<random_string>" 18 }, { "ParameterKey": "IgnitionLocation", 19 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 20 }, { "ParameterKey": "CertificateAuthorities", 21 "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 22 }, { "ParameterKey": "MasterInstanceProfileName", 23 "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 24 }, { "ParameterKey": "MasterInstanceType", 25 "ParameterValue": "" 26 }, { "ParameterKey": "AutoRegisterELB", 27 "ParameterValue": "yes" 28 }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 29 "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 30 }, { "ParameterKey": "ExternalApiTargetGroupArn", 31 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 32 }, { "ParameterKey": "InternalApiTargetGroupArn", 33 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 34 }, { "ParameterKey": "InternalServiceTargetGroupArn", 35 "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 36 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 Whether or not to perform DNS etcd registration. 6 Specify yes or no . If you specify yes , you must provide hosted zone information. 7 The Route 53 private zone ID to register the etcd targets with. 8 Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing. 9 The Route 53 zone to register the targets with. 10 Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. 11 13 15 A subnet, preferably private, to launch the control plane machines on. 12 14 16 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 17 The master security group ID to associate with control plane nodes. 
18 Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 19 The location to fetch control plane Ignition config file from. 20 Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master . 21 The base64 encoded certificate authority string to use. 22 Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 23 The IAM profile to associate with control plane nodes. 24 Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 25 The type of AWS instance to use for the control plane machines based on your selected architecture. 26 The instance type value corresponds to the minimum resource requirements for control plane machines. For example m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64. 27 Whether or not to register a network load balancer (NLB). 28 Specify yes or no . If you specify yes , you must provide a Lambda Amazon Resource Name (ARN) value. 29 The ARN for NLB IP target registration lambda group. 30 Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 31 The ARN for external API load balancer target group. 32 Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 33 The ARN for internal API load balancer target group. 34 Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. 35 The ARN for internal service load balancer target group. 36 Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires. If you specified an m5 instance type as the value for MasterInstanceType , add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes: Important You must enter the command on a single line. USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-control-plane . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b Note The CloudFormation template creates a stack that represents three control plane nodes. 
Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> 13.16.1. CloudFormation template for control plane machines You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster. Example 13.20. CloudFormation template for control plane machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: "" Description: unused Type: String PrivateHostedZoneId: Default: "" Description: unused Type: String PrivateHostedZoneName: Default: "" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: "yes" AllowedValues: - "yes" - "no" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select "no" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: "Network Configuration" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: "Load Balancer Automation" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: "Infrastructure Name" VpcId: default: "VPC ID" Master0Subnet: default: "Master-0 Subnet" Master1Subnet: default: "Master-1 Subnet" Master2Subnet: default: "Master-2 Subnet" MasterInstanceType: default: "Master Instance Type" MasterInstanceProfileName: default: "Master Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" BootstrapIgnitionLocation: default: "Master Ignition Source" CertificateAuthorities: default: "Ignition CA String" MasterSecurityGroupId: default: "Master Security Group ID" AutoRegisterELB: default: "Use Provided ELB Automation" Conditions: DoRegistration: !Equals ["yes", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master0Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master1Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster1: Condition: DoRegistration 
Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "MasterSecurityGroupId" SubnetId: !Ref "Master2Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. Value: !Join [ ",", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ] Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 13.17. Creating the worker nodes in AWS You can create worker nodes in Amazon Web Services (AWS) for your cluster to use. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node. Important The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node. Note If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. 
You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. Procedure Create a JSON file that contains the parameter values that the CloudFormation template requires: [ { "ParameterKey": "InfrastructureName", 1 "ParameterValue": "mycluster-<random_string>" 2 }, { "ParameterKey": "RhcosAmi", 3 "ParameterValue": "ami-<random_string>" 4 }, { "ParameterKey": "Subnet", 5 "ParameterValue": "subnet-<random_string>" 6 }, { "ParameterKey": "WorkerSecurityGroupId", 7 "ParameterValue": "sg-<random_string>" 8 }, { "ParameterKey": "IgnitionLocation", 9 "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 10 }, { "ParameterKey": "CertificateAuthorities", 11 "ParameterValue": "" 12 }, { "ParameterKey": "WorkerInstanceProfileName", 13 "ParameterValue": "" 14 }, { "ParameterKey": "WorkerInstanceType", 15 "ParameterValue": "" 16 } ] 1 The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster. 2 Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string> . 3 Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture. 4 Specify an AWS::EC2::Image::Id value. 5 A subnet, preferably private, to launch the worker nodes on. 6 Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing. 7 The worker security group ID to associate with worker nodes. 8 Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. 9 The location to fetch the worker Ignition config file from. 10 Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker . 11 Base64 encoded certificate authority string to use. 12 Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC... xYz== . 13 The IAM profile to associate with worker nodes. 14 Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles. 15 The type of AWS instance to use for the compute machines based on your selected architecture. 16 The instance type value corresponds to the minimum resource requirements for compute machines. For example m6i.large is a type for AMD64 and m6g.large is a type for ARM64. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires. Optional: If you specified an m5 instance type as the value for WorkerInstanceType , add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription. Use the CloudFormation template to create a stack of AWS resources that represent a worker node: Important You must enter the command on a single line. 
USD aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \ 2 --parameters file://<parameters>.json 3 1 <name> is the name for the CloudFormation stack, such as cluster-worker-1 . You need the name of this stack if you remove the cluster. 2 <template> is the relative path to and name of the CloudFormation template YAML file that you saved. 3 <parameters> is the relative path to and name of the CloudFormation parameters JSON file. Example output arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59 Note The CloudFormation template creates a stack that represents one worker node. Confirm that the template components exist: USD aws cloudformation describe-stacks --stack-name <name> Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name. Important You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template. 13.17.1. CloudFormation template for worker machines You can use the following CloudFormation template to deploy the worker machines that you need for your OpenShift Container Platform cluster. Example 13.21. CloudFormation template for worker machines AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: "Cluster Information" Parameters: - InfrastructureName - Label: default: "Host Information" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: "Network Configuration" Parameters: - Subnet ParameterLabels: Subnet: default: "Subnet" InfrastructureName: default: "Infrastructure Name" WorkerInstanceType: default: "Worker Instance Type" WorkerInstanceProfileName: default: "Worker Instance Profile Name" RhcosAmi: default: "Red Hat Enterprise Linux CoreOS AMI ID" IgnitionLocation: default: "Worker Ignition Source" CertificateAuthorities: default: "Ignition CA String" WorkerSecurityGroupId: default: "Worker Security Group ID" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: "120" VolumeType: "gp2" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: "false" DeviceIndex: "0" GroupSet: - !Ref "WorkerSecurityGroupId" SubnetId: !Ref "Subnet" UserData: Fn::Base64: !Sub - '{"ignition":{"config":{"merge":[{"source":"USD{SOURCE}"}]},"security":{"tls":{"certificateAuthorities":[{"source":"USD{CA_BUNDLE}"}]}},"version":"3.1.0"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join ["", ["kubernetes.io/cluster/", !Ref InfrastructureName]] Value: "shared" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp Additional resources You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console . 13.18. Initializing the bootstrap sequence on AWS with user-provisioned infrastructure After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane. Prerequisites You configured an AWS account. You added your AWS keys and region to your local AWS profile by running aws configure . You generated the Ignition config files for your cluster. You created and configured a VPC and associated subnets in AWS. You created and configured DNS, load balancers, and listeners in AWS. You created the security groups and roles required for your cluster in AWS. You created the bootstrap machine. You created the control plane machines. You created the worker nodes. Procedure Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized. 
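After the installation program reports that it is now safe to remove the bootstrap resources, the bootstrap stack is no longer needed and can be deleted with a single CLI call. The following is a minimal sketch, assuming the example stack name cluster-bootstrap used earlier in this chapter:
$ aws cloudformation delete-stack --stack-name cluster-bootstrap   # removes the bootstrap instance and the other resources created by that stack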
Note After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators. Additional resources See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses. See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process. You can view details about the running instances that are created by using the AWS EC2 console . 13.19. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 13.20. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
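Before exporting the kubeconfig, you can confirm that the oc binary installed in the previous section is picked up from your PATH. This check is a suggestion rather than a required step, and the path and version shown are only illustrative values:

$ which oc
/usr/local/bin/oc
$ oc version --client
Client Version: 4.13.0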
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 13.21. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 13.22. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 13.22.1. 
Image registry storage configuration Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information. 13.22.1.1. Configuring registry storage for AWS with user-provisioned infrastructure During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage. If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure. Prerequisites You have a cluster on AWS with user-provisioned infrastructure. For Amazon S3 storage, the secret is expected to contain two keys: REGISTRY_STORAGE_S3_ACCESSKEY REGISTRY_STORAGE_S3_SECRETKEY Procedure Use the following procedure if the Registry Operator cannot create an S3 bucket and automatically configure storage. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster : USD oc edit configs.imageregistry.operator.openshift.io/cluster Example configuration storage: s3: bucket: <bucket-name> region: <region-name> Warning To secure your registry images in AWS, block public access to the S3 bucket. 13.22.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 13.23. Deleting the bootstrap resources After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS). Prerequisites You completed the initial Operator configuration for your cluster. Procedure Delete the bootstrap resources. If you used the CloudFormation template, delete its stack : Delete the stack by using the AWS CLI: USD aws cloudformation delete-stack --stack-name <name> 1 1 <name> is the name of your bootstrap stack. Delete the stack by using the AWS CloudFormation console . 13.24. 
Creating the Ingress DNS Records If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias. Prerequisites You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned. You installed the OpenShift CLI ( oc ). You installed the jq package. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) . Procedure Determine the routes to create. To create a wildcard record, use *.apps.<cluster_name>.<domain_name> , where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster. To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name> Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m Locate the hosted zone ID for the load balancer: USD aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 1 1 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer that you obtained. Example output Z3AADJGX6KTTL2 The output of this command is the load balancer hosted zone ID. Obtain the public hosted zone ID for your cluster's domain: USD aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ 1 --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text 1 2 For <domain_name> , specify the Route 53 base domain for your OpenShift Container Platform cluster. Example output /hostedzone/Z3URY6TWQ91KVV The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV . Add the alias records to your private zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }' 1 For <private_hosted_zone_id> , specify the value from the output of the CloudFormation template for DNS and load balancing. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 
3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value.

Add the records to your public zone: USD aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 1 > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", 2 > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", 3 > "DNSName": "<external_ip>.", 4 > "EvaluateTargetHealth": false > } > } > } > ] > }'

1 For <public_hosted_zone_id> , specify the public hosted zone for your domain. 2 For <cluster_domain> , specify the domain or subdomain that you use with your OpenShift Container Platform cluster. 3 For <hosted_zone_id> , specify the public hosted zone ID for the load balancer that you obtained. 4 For <external_ip> , specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period ( . ) in this parameter value.

13.25. Completing an AWS installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure. You installed the oc CLI.

Procedure From the directory that contains the installation program, complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.

Example output INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1s

Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

13.26. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation.
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available.

Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console.

13.27. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console. After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service.

13.28. Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks.

13.29. Next steps Validating an installation. Customize your cluster. If necessary, you can opt out of remote health reporting. If necessary, you can remove cloud provider credentials.
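Before moving on to these next steps, you might run a quick health check assembled from the verification commands used earlier in this chapter. This is an informal sketch, not an official validation procedure; the installation directory path is an assumed example:

#!/bin/bash
# Sketch: quick post-install sanity check using commands from this chapter.
set -euo pipefail

export KUBECONFIG="${KUBECONFIG:-$HOME/ocp-install/auth/kubeconfig}"  # assumed example path

oc whoami                              # expect: system:admin
oc get nodes                           # expect: all control plane and worker nodes Ready
oc get csr | grep -c Pending || true   # any pending CSRs still need approval
oc get clusteroperators                # expect: AVAILABLE=True, DEGRADED=False for all Operators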
[ "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name>", "0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "jq -r .infraID <installation_directory>/metadata.json 1", "openshift-vw9j6 1", "[ { \"ParameterKey\": \"VpcCidr\", 1 \"ParameterValue\": \"10.0.0.0/16\" 2 }, { \"ParameterKey\": \"AvailabilityZoneCount\", 3 \"ParameterValue\": \"1\" 4 }, { \"ParameterKey\": \"SubnetBits\", 5 \"ParameterValue\": \"12\" 6 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for Best Practice VPC with 1-3 AZs Parameters: VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. Type: String AvailabilityZoneCount: ConstraintDescription: \"The number of availability zones. (Min: 1, Max: 3)\" MinValue: 1 MaxValue: 3 Default: 1 Description: \"How many AZs to create VPC subnets for. (Min: 1, Max: 3)\" Type: Number SubnetBits: ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. MinValue: 5 MaxValue: 13 Default: 12 Description: \"Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)\" Type: Number Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Network Configuration\" Parameters: - VpcCidr - SubnetBits - Label: default: \"Availability Zones\" Parameters: - AvailabilityZoneCount ParameterLabels: AvailabilityZoneCount: default: \"Availability Zone Count\" VpcCidr: default: \"VPC CIDR\" SubnetBits: default: \"Bits Per Subnet\" Conditions: DoAz3: !Equals [3, !Ref AvailabilityZoneCount] DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] Resources: VPC: Type: \"AWS::EC2::VPC\" Properties: EnableDnsSupport: \"true\" EnableDnsHostnames: \"true\" CidrBlock: !Ref VpcCidr PublicSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PublicSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" InternetGateway: Type: \"AWS::EC2::InternetGateway\" GatewayToInternet: Type: \"AWS::EC2::VPCGatewayAttachment\" Properties: VpcId: !Ref VPC InternetGatewayId: !Ref InternetGateway PublicRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PublicRoute: Type: \"AWS::EC2::Route\" DependsOn: GatewayToInternet Properties: RouteTableId: !Ref PublicRouteTable DestinationCidrBlock: 0.0.0.0/0 GatewayId: !Ref InternetGateway PublicSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" 
Condition: DoAz2 Properties: SubnetId: !Ref PublicSubnet2 RouteTableId: !Ref PublicRouteTable PublicSubnetRouteTableAssociation3: Condition: DoAz3 Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PublicSubnet3 RouteTableId: !Ref PublicRouteTable PrivateSubnet: Type: \"AWS::EC2::Subnet\" Properties: VpcId: !Ref VPC CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 0 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable: Type: \"AWS::EC2::RouteTable\" Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Properties: SubnetId: !Ref PrivateSubnet RouteTableId: !Ref PrivateRouteTable NAT: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Properties: AllocationId: \"Fn::GetAtt\": - EIP - AllocationId SubnetId: !Ref PublicSubnet EIP: Type: \"AWS::EC2::EIP\" Properties: Domain: vpc Route: Type: \"AWS::EC2::Route\" Properties: RouteTableId: Ref: PrivateRouteTable DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT PrivateSubnet2: Type: \"AWS::EC2::Subnet\" Condition: DoAz2 Properties: VpcId: !Ref VPC CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 1 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable2: Type: \"AWS::EC2::RouteTable\" Condition: DoAz2 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation2: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz2 Properties: SubnetId: !Ref PrivateSubnet2 RouteTableId: !Ref PrivateRouteTable2 NAT2: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz2 Properties: AllocationId: \"Fn::GetAtt\": - EIP2 - AllocationId SubnetId: !Ref PublicSubnet2 EIP2: Type: \"AWS::EC2::EIP\" Condition: DoAz2 Properties: Domain: vpc Route2: Type: \"AWS::EC2::Route\" Condition: DoAz2 Properties: RouteTableId: Ref: PrivateRouteTable2 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT2 PrivateSubnet3: Type: \"AWS::EC2::Subnet\" Condition: DoAz3 Properties: VpcId: !Ref VPC CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] AvailabilityZone: !Select - 2 - Fn::GetAZs: !Ref \"AWS::Region\" PrivateRouteTable3: Type: \"AWS::EC2::RouteTable\" Condition: DoAz3 Properties: VpcId: !Ref VPC PrivateSubnetRouteTableAssociation3: Type: \"AWS::EC2::SubnetRouteTableAssociation\" Condition: DoAz3 Properties: SubnetId: !Ref PrivateSubnet3 RouteTableId: !Ref PrivateRouteTable3 NAT3: DependsOn: - GatewayToInternet Type: \"AWS::EC2::NatGateway\" Condition: DoAz3 Properties: AllocationId: \"Fn::GetAtt\": - EIP3 - AllocationId SubnetId: !Ref PublicSubnet3 EIP3: Type: \"AWS::EC2::EIP\" Condition: DoAz3 Properties: Domain: vpc Route3: Type: \"AWS::EC2::Route\" Condition: DoAz3 Properties: RouteTableId: Ref: PrivateRouteTable3 DestinationCidrBlock: 0.0.0.0/0 NatGatewayId: Ref: NAT3 S3Endpoint: Type: AWS::EC2::VPCEndpoint Properties: PolicyDocument: Version: 2012-10-17 Statement: - Effect: Allow Principal: '*' Action: - '*' Resource: - '*' RouteTableIds: - !Ref PublicRouteTable - !Ref PrivateRouteTable - !If [DoAz2, !Ref PrivateRouteTable2, !Ref \"AWS::NoValue\"] - !If [DoAz3, !Ref PrivateRouteTable3, !Ref \"AWS::NoValue\"] ServiceName: !Join - '' - - com.amazonaws. - !Ref 'AWS::Region' - .s3 VpcId: !Ref VPC Outputs: VpcId: Description: ID of the new VPC. Value: !Ref VPC PublicSubnetIds: Description: Subnet IDs of the public subnets. 
Value: !Join [ \",\", [!Ref PublicSubnet, !If [DoAz2, !Ref PublicSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PublicSubnet3, !Ref \"AWS::NoValue\"]] ] PrivateSubnetIds: Description: Subnet IDs of the private subnets. Value: !Join [ \",\", [!Ref PrivateSubnet, !If [DoAz2, !Ref PrivateSubnet2, !Ref \"AWS::NoValue\"], !If [DoAz3, !Ref PrivateSubnet3, !Ref \"AWS::NoValue\"]] ] PublicRouteTableId: Description: Public Route table ID Value: !Ref PublicRouteTable", "aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 1", "mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10", "[ { \"ParameterKey\": \"ClusterName\", 1 \"ParameterValue\": \"mycluster\" 2 }, { \"ParameterKey\": \"InfrastructureName\", 3 \"ParameterValue\": \"mycluster-<random_string>\" 4 }, { \"ParameterKey\": \"HostedZoneId\", 5 \"ParameterValue\": \"<random_string>\" 6 }, { \"ParameterKey\": \"HostedZoneName\", 7 \"ParameterValue\": \"example.com\" 8 }, { \"ParameterKey\": \"PublicSubnets\", 9 \"ParameterValue\": \"subnet-<random_string>\" 10 }, { \"ParameterKey\": \"PrivateSubnets\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"VpcId\", 13 \"ParameterValue\": \"vpc-<random_string>\" 14 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Network Elements (Route53 & LBs) Parameters: ClusterName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Cluster name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, representative cluster name to use for host names and other identifying names. Type: String InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String HostedZoneId: Description: The Route53 public zone ID to register the targets with, such as Z21IXYZABCZ2A4. Type: String HostedZoneName: Description: The Route53 zone to register the targets with, such as example.com. Omit the trailing period. Type: String Default: \"example.com\" PublicSubnets: Description: The internet-facing subnets. Type: List<AWS::EC2::Subnet::Id> PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> VpcId: Description: The VPC-scoped resources will belong to this VPC. 
Type: AWS::EC2::VPC::Id Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - ClusterName - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - PublicSubnets - PrivateSubnets - Label: default: \"DNS\" Parameters: - HostedZoneName - HostedZoneId ParameterLabels: ClusterName: default: \"Cluster Name\" InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" PublicSubnets: default: \"Public Subnets\" PrivateSubnets: default: \"Private Subnets\" HostedZoneName: default: \"Public Hosted Zone Name\" HostedZoneId: default: \"Public Hosted Zone ID\" Resources: ExtApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"ext\"]] IpAddressType: ipv4 Subnets: !Ref PublicSubnets Type: network IntApiElb: Type: AWS::ElasticLoadBalancingV2::LoadBalancer Properties: Name: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] Scheme: internal IpAddressType: ipv4 Subnets: !Ref PrivateSubnets Type: network IntDns: Type: \"AWS::Route53::HostedZone\" Properties: HostedZoneConfig: Comment: \"Managed by CloudFormation\" Name: !Join [\".\", [!Ref ClusterName, !Ref HostedZoneName]] HostedZoneTags: - Key: Name Value: !Join [\"-\", [!Ref InfrastructureName, \"int\"]] - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"owned\" VPCs: - VPCId: !Ref VpcId VPCRegion: !Ref \"AWS::Region\" ExternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref HostedZoneId RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt ExtApiElb.CanonicalHostedZoneID DNSName: !GetAtt ExtApiElb.DNSName InternalApiServerRecord: Type: AWS::Route53::RecordSetGroup Properties: Comment: Alias record for the API server HostedZoneId: !Ref IntDns RecordSets: - Name: !Join [ \".\", [\"api\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName - Name: !Join [ \".\", [\"api-int\", !Ref ClusterName, !Join [\"\", [!Ref HostedZoneName, \".\"]]], ] Type: A AliasTarget: HostedZoneId: !GetAtt IntApiElb.CanonicalHostedZoneID DNSName: !GetAtt IntApiElb.DNSName ExternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: ExternalApiTargetGroup LoadBalancerArn: Ref: ExtApiElb Port: 6443 Protocol: TCP ExternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 InternalApiListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalApiTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 6443 Protocol: TCP InternalApiTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/readyz\" HealthCheckPort: 6443 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 6443 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: 
deregistration_delay.timeout_seconds Value: 60 InternalServiceInternalListener: Type: AWS::ElasticLoadBalancingV2::Listener Properties: DefaultActions: - Type: forward TargetGroupArn: Ref: InternalServiceTargetGroup LoadBalancerArn: Ref: IntApiElb Port: 22623 Protocol: TCP InternalServiceTargetGroup: Type: AWS::ElasticLoadBalancingV2::TargetGroup Properties: HealthCheckIntervalSeconds: 10 HealthCheckPath: \"/healthz\" HealthCheckPort: 22623 HealthCheckProtocol: HTTPS HealthyThresholdCount: 2 UnhealthyThresholdCount: 2 Port: 22623 Protocol: TCP TargetType: ip VpcId: Ref: VpcId TargetGroupAttributes: - Key: deregistration_delay.timeout_seconds Value: 60 RegisterTargetLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"nlb\", \"lambda\", \"role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalApiTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref InternalServiceTargetGroup - Effect: \"Allow\" Action: [ \"elasticloadbalancing:RegisterTargets\", \"elasticloadbalancing:DeregisterTargets\", ] Resource: !Ref ExternalApiTargetGroup RegisterNlbIpTargets: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterTargetLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): elb = boto3.client('elbv2') if event['RequestType'] == 'Delete': elb.deregister_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) elif event['RequestType'] == 'Create': elb.register_targets(TargetGroupArn=event['ResourceProperties']['TargetArn'],Targets=[{'Id': event['ResourceProperties']['TargetIp']}]) responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['TargetArn']+event['ResourceProperties']['TargetIp']) Runtime: \"python3.8\" Timeout: 120 RegisterSubnetTagsLambdaIamRole: Type: AWS::IAM::Role Properties: RoleName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tags-lambda-role\"]] AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"lambda.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"subnet-tagging-policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: [ \"ec2:DeleteTags\", \"ec2:CreateTags\" ] Resource: \"arn:aws:ec2:*:*:subnet/*\" - Effect: \"Allow\" Action: [ \"ec2:DescribeSubnets\", \"ec2:DescribeTags\" ] Resource: \"*\" RegisterSubnetTags: Type: \"AWS::Lambda::Function\" Properties: Handler: \"index.handler\" Role: Fn::GetAtt: - \"RegisterSubnetTagsLambdaIamRole\" - \"Arn\" Code: ZipFile: | import json import boto3 import cfnresponse def handler(event, context): ec2_client = boto3.client('ec2') if event['RequestType'] == 'Delete': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.delete_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + 
event['ResourceProperties']['InfrastructureName']}]); elif event['RequestType'] == 'Create': for subnet_id in event['ResourceProperties']['Subnets']: ec2_client.create_tags(Resources=[subnet_id], Tags=[{'Key': 'kubernetes.io/cluster/' + event['ResourceProperties']['InfrastructureName'], 'Value': 'shared'}]); responseData = {} cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, event['ResourceProperties']['InfrastructureName']+event['ResourceProperties']['Subnets'][0]) Runtime: \"python3.8\" Timeout: 120 RegisterPublicSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PublicSubnets RegisterPrivateSubnetTags: Type: Custom::SubnetRegister Properties: ServiceToken: !GetAtt RegisterSubnetTags.Arn InfrastructureName: !Ref InfrastructureName Subnets: !Ref PrivateSubnets Outputs: PrivateHostedZoneId: Description: Hosted zone ID for the private DNS, which is required for private records. Value: !Ref IntDns ExternalApiLoadBalancerName: Description: Full name of the external API load balancer. Value: !GetAtt ExtApiElb.LoadBalancerFullName InternalApiLoadBalancerName: Description: Full name of the internal API load balancer. Value: !GetAtt IntApiElb.LoadBalancerFullName ApiServerDnsName: Description: Full hostname of the API server, which is required for the Ignition config files. Value: !Join [\".\", [\"api-int\", !Ref ClusterName, !Ref HostedZoneName]] RegisterNlbIpTargetsLambda: Description: Lambda ARN useful to help register or deregister IP targets for these load balancers. Value: !GetAtt RegisterNlbIpTargets.Arn ExternalApiTargetGroupArn: Description: ARN of the external API target group. Value: !Ref ExternalApiTargetGroup InternalApiTargetGroupArn: Description: ARN of the internal API target group. Value: !Ref InternalApiTargetGroup InternalServiceTargetGroupArn: Description: ARN of the internal service target group. Value: !Ref InternalServiceTargetGroup", "Type: CNAME TTL: 10 ResourceRecords: - !GetAtt IntApiElb.DNSName", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"VpcCidr\", 3 \"ParameterValue\": \"10.0.0.0/16\" 4 }, { \"ParameterKey\": \"PrivateSubnets\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"VpcId\", 7 \"ParameterValue\": \"vpc-<random_string>\" 8 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Security Elements (Security Groups & IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String VpcCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(1[6-9]|2[0-4]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. Default: 10.0.0.0/16 Description: CIDR block for VPC. 
Type: String VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id PrivateSubnets: Description: The internal subnets. Type: List<AWS::EC2::Subnet::Id> Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Network Configuration\" Parameters: - VpcId - VpcCidr - PrivateSubnets ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" VpcCidr: default: \"VPC CIDR\" PrivateSubnets: default: \"Private Subnets\" Resources: MasterSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Master Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr - IpProtocol: tcp ToPort: 6443 FromPort: 6443 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22623 ToPort: 22623 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId WorkerSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Worker Security Group SecurityGroupIngress: - IpProtocol: icmp FromPort: 0 ToPort: 0 CidrIp: !Ref VpcCidr - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref VpcCidr VpcId: !Ref VpcId MasterIngressEtcd: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: etcd FromPort: 2379 ToPort: 2380 IpProtocol: tcp MasterIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressWorkerVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp MasterIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressWorkerGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp MasterIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressWorkerIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp MasterIngressWorkerIpsecNat: Type: 
AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp MasterIngressWorkerIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 MasterIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressWorkerInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp MasterIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressWorkerInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp MasterIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes kubelet, scheduler and controller manager FromPort: 10250 ToPort: 10259 IpProtocol: tcp MasterIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressWorkerIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp MasterIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIngressWorkerIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt MasterSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressMasterVxlan: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Vxlan packets FromPort: 4789 ToPort: 4789 IpProtocol: udp WorkerIngressGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressMasterGeneve: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Geneve packets FromPort: 6081 ToPort: 6081 IpProtocol: udp WorkerIngressIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressMasterIpsecIke: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec IKE packets FromPort: 500 ToPort: 500 IpProtocol: udp WorkerIngressMasterIpsecNat: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec NAT-T packets FromPort: 4500 ToPort: 4500 IpProtocol: udp WorkerIngressMasterIpsecEsp: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: IPsec ESP packets IpProtocol: 50 WorkerIngressInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressMasterInternal: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: tcp WorkerIngressInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressMasterInternalUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal cluster communication FromPort: 9000 ToPort: 9999 IpProtocol: udp WorkerIngressKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes secure kubelet port FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressWorkerKube: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt 
WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Internal Kubernetes communication FromPort: 10250 ToPort: 10250 IpProtocol: tcp WorkerIngressIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressMasterIngressServices: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: tcp WorkerIngressIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt WorkerSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp WorkerIngressMasterIngressServicesUDP: Type: AWS::EC2::SecurityGroupIngress Properties: GroupId: !GetAtt WorkerSecurityGroup.GroupId SourceSecurityGroupId: !GetAtt MasterSecurityGroup.GroupId Description: Kubernetes ingress services FromPort: 30000 ToPort: 32767 IpProtocol: udp MasterIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"master\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:AttachVolume\" - \"ec2:AuthorizeSecurityGroupIngress\" - \"ec2:CreateSecurityGroup\" - \"ec2:CreateTags\" - \"ec2:CreateVolume\" - \"ec2:DeleteSecurityGroup\" - \"ec2:DeleteVolume\" - \"ec2:Describe*\" - \"ec2:DetachVolume\" - \"ec2:ModifyInstanceAttribute\" - \"ec2:ModifyVolume\" - \"ec2:RevokeSecurityGroupIngress\" - \"elasticloadbalancing:AddTags\" - \"elasticloadbalancing:AttachLoadBalancerToSubnets\" - \"elasticloadbalancing:ApplySecurityGroupsToLoadBalancer\" - \"elasticloadbalancing:CreateListener\" - \"elasticloadbalancing:CreateLoadBalancer\" - \"elasticloadbalancing:CreateLoadBalancerPolicy\" - \"elasticloadbalancing:CreateLoadBalancerListeners\" - \"elasticloadbalancing:CreateTargetGroup\" - \"elasticloadbalancing:ConfigureHealthCheck\" - \"elasticloadbalancing:DeleteListener\" - \"elasticloadbalancing:DeleteLoadBalancer\" - \"elasticloadbalancing:DeleteLoadBalancerListeners\" - \"elasticloadbalancing:DeleteTargetGroup\" - \"elasticloadbalancing:DeregisterInstancesFromLoadBalancer\" - \"elasticloadbalancing:DeregisterTargets\" - \"elasticloadbalancing:Describe*\" - \"elasticloadbalancing:DetachLoadBalancerFromSubnets\" - \"elasticloadbalancing:ModifyListener\" - \"elasticloadbalancing:ModifyLoadBalancerAttributes\" - \"elasticloadbalancing:ModifyTargetGroup\" - \"elasticloadbalancing:ModifyTargetGroupAttributes\" - \"elasticloadbalancing:RegisterInstancesWithLoadBalancer\" - \"elasticloadbalancing:RegisterTargets\" - \"elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer\" - \"elasticloadbalancing:SetLoadBalancerPoliciesOfListener\" - \"kms:DescribeKey\" Resource: \"*\" MasterInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"MasterIamRole\" WorkerIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" 
Action: - \"sts:AssumeRole\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"worker\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: - \"ec2:DescribeInstances\" - \"ec2:DescribeRegions\" Resource: \"*\" WorkerInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Roles: - Ref: \"WorkerIamRole\" Outputs: MasterSecurityGroupId: Description: Master Security Group ID Value: !GetAtt MasterSecurityGroup.GroupId WorkerSecurityGroupId: Description: Worker Security Group ID Value: !GetAtt WorkerSecurityGroup.GroupId MasterInstanceProfile: Description: Master IAM Instance Profile Value: !Ref MasterInstanceProfile WorkerInstanceProfile: Description: Worker IAM Instance Profile Value: !Ref WorkerInstanceProfile", "openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions[\"us-west-1\"].image'", "ami-0d3e625f84626bbda", "openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions[\"us-west-1\"].image'", "ami-0af1d3b7fa5be2131", "export AWS_PROFILE=<aws_profile> 1", "export AWS_DEFAULT_REGION=<aws_region> 1", "export RHCOS_VERSION=<version> 1", "export VMIMPORT_BUCKET_NAME=<s3_bucket_name>", "cat <<EOF > containers.json { \"Description\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\", \"Format\": \"vmdk\", \"UserBucket\": { \"S3Bucket\": \"USD{VMIMPORT_BUCKET_NAME}\", \"S3Key\": \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64.vmdk\" } } EOF", "aws ec2 import-snapshot --region USD{AWS_DEFAULT_REGION} --description \"<description>\" \\ 1 --disk-container \"file://<file_path>/containers.json\" 2", "watch -n 5 aws ec2 describe-import-snapshot-tasks --region USD{AWS_DEFAULT_REGION}", "{ \"ImportSnapshotTasks\": [ { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"ImportTaskId\": \"import-snap-fh6i8uil\", \"SnapshotTaskDetail\": { \"Description\": \"rhcos-4.7.0-x86_64-aws.x86_64\", \"DiskImageSize\": 819056640.0, \"Format\": \"VMDK\", \"SnapshotId\": \"snap-06331325870076318\", \"Status\": \"completed\", \"UserBucket\": { \"S3Bucket\": \"external-images\", \"S3Key\": \"rhcos-4.7.0-x86_64-aws.x86_64.vmdk\" } } } ] }", "aws ec2 register-image --region USD{AWS_DEFAULT_REGION} --architecture x86_64 \\ 1 --description \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 2 --ena-support --name \"rhcos-USD{RHCOS_VERSION}-x86_64-aws.x86_64\" \\ 3 --virtualization-type hvm --root-device-name '/dev/xvda' --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}' 4", "aws s3 mb s3://<cluster-name>-infra 1", "aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 1", "aws s3 ls s3://<cluster-name>-infra/", "2019-04-03 16:15:16 314878 bootstrap.ign", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AllowedBootstrapSshCidr\", 5 \"ParameterValue\": \"0.0.0.0/0\" 6 }, { \"ParameterKey\": \"PublicSubnet\", 7 \"ParameterValue\": \"subnet-<random_string>\" 8 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 9 \"ParameterValue\": \"sg-<random_string>\" 10 }, { \"ParameterKey\": \"VpcId\", 11 \"ParameterValue\": \"vpc-<random_string>\" 12 }, { \"ParameterKey\": \"BootstrapIgnitionLocation\", 13 \"ParameterValue\": \"s3://<bucket_name>/bootstrap.ign\" 14 }, { \"ParameterKey\": \"AutoRegisterELB\", 15 \"ParameterValue\": \"yes\" 16 }, { \"ParameterKey\": 
\"RegisterNlbIpTargetsLambdaArn\", 17 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 18 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 19 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 20 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 21 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 22 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 23 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 24 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3 --capabilities CAPABILITY_NAMED_IAM 4", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Bootstrap (EC2 Instance, Security Groups and IAM) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag cloud resources and identify items owned or used by the cluster. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id AllowedBootstrapSshCidr: AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/([0-9]|1[0-9]|2[0-9]|3[0-2]))USD ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/0-32. Default: 0.0.0.0/0 Description: CIDR block to allow SSH access to the bootstrap node. Type: String PublicSubnet: Description: The public subnet to launch the bootstrap node into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID for registering temporary rules. Type: AWS::EC2::SecurityGroup::Id VpcId: Description: The VPC-scoped resources will belong to this VPC. Type: AWS::EC2::VPC::Id BootstrapIgnitionLocation: Default: s3://my-s3-bucket/bootstrap.ign Description: Ignition config file location. Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. 
Type: String BootstrapInstanceType: Description: Instance type for the bootstrap EC2 instance Default: \"i3.large\" Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - RhcosAmi - BootstrapIgnitionLocation - MasterSecurityGroupId - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - PublicSubnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" AllowedBootstrapSshCidr: default: \"Allowed SSH Source\" PublicSubnet: default: \"Public Subnet\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Bootstrap Ignition Source\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: BootstrapIamRole: Type: AWS::IAM::Role Properties: AssumeRolePolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Principal: Service: - \"ec2.amazonaws.com\" Action: - \"sts:AssumeRole\" Path: \"/\" Policies: - PolicyName: !Join [\"-\", [!Ref InfrastructureName, \"bootstrap\", \"policy\"]] PolicyDocument: Version: \"2012-10-17\" Statement: - Effect: \"Allow\" Action: \"ec2:Describe*\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:AttachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"ec2:DetachVolume\" Resource: \"*\" - Effect: \"Allow\" Action: \"s3:GetObject\" Resource: \"*\" BootstrapInstanceProfile: Type: \"AWS::IAM::InstanceProfile\" Properties: Path: \"/\" Roles: - Ref: \"BootstrapIamRole\" BootstrapSecurityGroup: Type: AWS::EC2::SecurityGroup Properties: GroupDescription: Cluster Bootstrap Security Group SecurityGroupIngress: - IpProtocol: tcp FromPort: 22 ToPort: 22 CidrIp: !Ref AllowedBootstrapSshCidr - IpProtocol: tcp ToPort: 19531 FromPort: 19531 CidrIp: 0.0.0.0/0 VpcId: !Ref VpcId BootstrapInstance: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi IamInstanceProfile: !Ref BootstrapInstanceProfile InstanceType: !Ref BootstrapInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"true\" DeviceIndex: \"0\" GroupSet: - !Ref \"BootstrapSecurityGroup\" - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"PublicSubnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"replace\":{\"source\":\"USD{S3Loc}\"}},\"version\":\"3.1.0\"}}' - { S3Loc: !Ref BootstrapIgnitionLocation } RegisterBootstrapApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp RegisterBootstrapInternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt BootstrapInstance.PrivateIp Outputs: BootstrapInstanceId: Description: Bootstrap Instance ID. 
Value: !Ref BootstrapInstance BootstrapPublicIp: Description: The bootstrap node public IP address. Value: !GetAtt BootstrapInstance.PublicIp BootstrapPrivateIp: Description: The bootstrap node private IP address. Value: !GetAtt BootstrapInstance.PrivateIp", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"AutoRegisterDNS\", 5 \"ParameterValue\": \"yes\" 6 }, { \"ParameterKey\": \"PrivateHostedZoneId\", 7 \"ParameterValue\": \"<random_string>\" 8 }, { \"ParameterKey\": \"PrivateHostedZoneName\", 9 \"ParameterValue\": \"mycluster.example.com\" 10 }, { \"ParameterKey\": \"Master0Subnet\", 11 \"ParameterValue\": \"subnet-<random_string>\" 12 }, { \"ParameterKey\": \"Master1Subnet\", 13 \"ParameterValue\": \"subnet-<random_string>\" 14 }, { \"ParameterKey\": \"Master2Subnet\", 15 \"ParameterValue\": \"subnet-<random_string>\" 16 }, { \"ParameterKey\": \"MasterSecurityGroupId\", 17 \"ParameterValue\": \"sg-<random_string>\" 18 }, { \"ParameterKey\": \"IgnitionLocation\", 19 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/master\" 20 }, { \"ParameterKey\": \"CertificateAuthorities\", 21 \"ParameterValue\": \"data:text/plain;charset=utf-8;base64,ABC...xYz==\" 22 }, { \"ParameterKey\": \"MasterInstanceProfileName\", 23 \"ParameterValue\": \"<roles_stack>-MasterInstanceProfile-<random_string>\" 24 }, { \"ParameterKey\": \"MasterInstanceType\", 25 \"ParameterValue\": \"\" 26 }, { \"ParameterKey\": \"AutoRegisterELB\", 27 \"ParameterValue\": \"yes\" 28 }, { \"ParameterKey\": \"RegisterNlbIpTargetsLambdaArn\", 29 \"ParameterValue\": \"arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>\" 30 }, { \"ParameterKey\": \"ExternalApiTargetGroupArn\", 31 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>\" 32 }, { \"ParameterKey\": \"InternalApiTargetGroupArn\", 33 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 34 }, { \"ParameterKey\": \"InternalServiceTargetGroupArn\", 35 \"ParameterValue\": \"arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>\" 36 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 master instances) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. 
Type: AWS::EC2::Image::Id AutoRegisterDNS: Default: \"\" Description: unused Type: String PrivateHostedZoneId: Default: \"\" Description: unused Type: String PrivateHostedZoneName: Default: \"\" Description: unused Type: String Master0Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master1Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id Master2Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id MasterSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/master Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String MasterInstanceProfileName: Description: IAM profile to associate with master nodes. Type: String MasterInstanceType: Default: m5.xlarge Type: String AutoRegisterELB: Default: \"yes\" AllowedValues: - \"yes\" - \"no\" Description: Do you want to invoke NLB registration, which requires a Lambda ARN parameter? Type: String RegisterNlbIpTargetsLambdaArn: Description: ARN for NLB IP target registration lambda. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String ExternalApiTargetGroupArn: Description: ARN for external API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalApiTargetGroupArn: Description: ARN for internal API load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. Type: String InternalServiceTargetGroupArn: Description: ARN for internal service load balancer target group. Supply the value from the cluster infrastructure or select \"no\" for AutoRegisterELB. 
Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - MasterInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - MasterSecurityGroupId - MasterInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - VpcId - AllowedBootstrapSshCidr - Master0Subnet - Master1Subnet - Master2Subnet - Label: default: \"Load Balancer Automation\" Parameters: - AutoRegisterELB - RegisterNlbIpTargetsLambdaArn - ExternalApiTargetGroupArn - InternalApiTargetGroupArn - InternalServiceTargetGroupArn ParameterLabels: InfrastructureName: default: \"Infrastructure Name\" VpcId: default: \"VPC ID\" Master0Subnet: default: \"Master-0 Subnet\" Master1Subnet: default: \"Master-1 Subnet\" Master2Subnet: default: \"Master-2 Subnet\" MasterInstanceType: default: \"Master Instance Type\" MasterInstanceProfileName: default: \"Master Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" BootstrapIgnitionLocation: default: \"Master Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" MasterSecurityGroupId: default: \"Master Security Group ID\" AutoRegisterELB: default: \"Use Provided ELB Automation\" Conditions: DoRegistration: !Equals [\"yes\", !Ref AutoRegisterELB] Resources: Master0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master0Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster0: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp RegisterMaster0InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master0.PrivateIp Master1: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master1Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join 
[\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster1: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp RegisterMaster1InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master1.PrivateIp Master2: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref MasterInstanceProfileName InstanceType: !Ref MasterInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"MasterSecurityGroupId\" SubnetId: !Ref \"Master2Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" RegisterMaster2: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref ExternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalApiTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalApiTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp RegisterMaster2InternalServiceTarget: Condition: DoRegistration Type: Custom::NLBRegister Properties: ServiceToken: !Ref RegisterNlbIpTargetsLambdaArn TargetArn: !Ref InternalServiceTargetGroupArn TargetIp: !GetAtt Master2.PrivateIp Outputs: PrivateIPs: Description: The control-plane node private IP addresses. 
Value: !Join [ \",\", [!GetAtt Master0.PrivateIp, !GetAtt Master1.PrivateIp, !GetAtt Master2.PrivateIp] ]", "[ { \"ParameterKey\": \"InfrastructureName\", 1 \"ParameterValue\": \"mycluster-<random_string>\" 2 }, { \"ParameterKey\": \"RhcosAmi\", 3 \"ParameterValue\": \"ami-<random_string>\" 4 }, { \"ParameterKey\": \"Subnet\", 5 \"ParameterValue\": \"subnet-<random_string>\" 6 }, { \"ParameterKey\": \"WorkerSecurityGroupId\", 7 \"ParameterValue\": \"sg-<random_string>\" 8 }, { \"ParameterKey\": \"IgnitionLocation\", 9 \"ParameterValue\": \"https://api-int.<cluster_name>.<domain_name>:22623/config/worker\" 10 }, { \"ParameterKey\": \"CertificateAuthorities\", 11 \"ParameterValue\": \"\" 12 }, { \"ParameterKey\": \"WorkerInstanceProfileName\", 13 \"ParameterValue\": \"\" 14 }, { \"ParameterKey\": \"WorkerInstanceType\", 15 \"ParameterValue\": \"\" 16 } ]", "aws cloudformation create-stack --stack-name <name> 1 --template-body file://<template>.yaml \\ 2 --parameters file://<parameters>.json 3", "arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59", "aws cloudformation describe-stacks --stack-name <name>", "AWSTemplateFormatVersion: 2010-09-09 Description: Template for OpenShift Cluster Node Launch (EC2 worker instance) Parameters: InfrastructureName: AllowedPattern: ^([a-zA-Z][a-zA-Z0-9\\-]{0,26})USD MaxLength: 27 MinLength: 1 ConstraintDescription: Infrastructure name must be alphanumeric, start with a letter, and have a maximum of 27 characters. Description: A short, unique cluster ID used to tag nodes for the kubelet cloud provider. Type: String RhcosAmi: Description: Current Red Hat Enterprise Linux CoreOS AMI to use for bootstrap. Type: AWS::EC2::Image::Id Subnet: Description: The subnets, recommend private, to launch the master nodes into. Type: AWS::EC2::Subnet::Id WorkerSecurityGroupId: Description: The master security group ID to associate with master nodes. Type: AWS::EC2::SecurityGroup::Id IgnitionLocation: Default: https://api-int.USDCLUSTER_NAME.USDDOMAIN:22623/config/worker Description: Ignition config file location. Type: String CertificateAuthorities: Default: data:text/plain;charset=utf-8;base64,ABC...xYz== Description: Base64 encoded certificate authority string to use. Type: String WorkerInstanceProfileName: Description: IAM profile to associate with master nodes. 
Type: String WorkerInstanceType: Default: m5.large Type: String Metadata: AWS::CloudFormation::Interface: ParameterGroups: - Label: default: \"Cluster Information\" Parameters: - InfrastructureName - Label: default: \"Host Information\" Parameters: - WorkerInstanceType - RhcosAmi - IgnitionLocation - CertificateAuthorities - WorkerSecurityGroupId - WorkerInstanceProfileName - Label: default: \"Network Configuration\" Parameters: - Subnet ParameterLabels: Subnet: default: \"Subnet\" InfrastructureName: default: \"Infrastructure Name\" WorkerInstanceType: default: \"Worker Instance Type\" WorkerInstanceProfileName: default: \"Worker Instance Profile Name\" RhcosAmi: default: \"Red Hat Enterprise Linux CoreOS AMI ID\" IgnitionLocation: default: \"Worker Ignition Source\" CertificateAuthorities: default: \"Ignition CA String\" WorkerSecurityGroupId: default: \"Worker Security Group ID\" Resources: Worker0: Type: AWS::EC2::Instance Properties: ImageId: !Ref RhcosAmi BlockDeviceMappings: - DeviceName: /dev/xvda Ebs: VolumeSize: \"120\" VolumeType: \"gp2\" IamInstanceProfile: !Ref WorkerInstanceProfileName InstanceType: !Ref WorkerInstanceType NetworkInterfaces: - AssociatePublicIpAddress: \"false\" DeviceIndex: \"0\" GroupSet: - !Ref \"WorkerSecurityGroupId\" SubnetId: !Ref \"Subnet\" UserData: Fn::Base64: !Sub - '{\"ignition\":{\"config\":{\"merge\":[{\"source\":\"USD{SOURCE}\"}]},\"security\":{\"tls\":{\"certificateAuthorities\":[{\"source\":\"USD{CA_BUNDLE}\"}]}},\"version\":\"3.1.0\"}}' - { SOURCE: !Ref IgnitionLocation, CA_BUNDLE: !Ref CertificateAuthorities, } Tags: - Key: !Join [\"\", [\"kubernetes.io/cluster/\", !Ref InfrastructureName]] Value: \"shared\" Outputs: PrivateIP: Description: The compute node private IP address. Value: !GetAtt Worker0.PrivateIp", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0", 
"watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "aws cloudformation delete-stack --stack-name <name> 1", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m", "aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == \"<external_ip>\").CanonicalHostedZoneNameID' 1", "Z3AADJGX6KTTL2", "aws route53 list-hosted-zones-by-name --dns-name \"<domain_name>\" \\ 1 --query 'HostedZones[? 
Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' 2 --output text", "/hostedzone/Z3URY6TWQ91KVV", "aws route53 change-resource-record-sets --hosted-zone-id \"<private_hosted_zone_id>\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "aws route53 change-resource-record-sets --hosted-zone-id \"<public_hosted_zone_id>\"\" --change-batch '{ 1 > \"Changes\": [ > { > \"Action\": \"CREATE\", > \"ResourceRecordSet\": { > \"Name\": \"\\\\052.apps.<cluster_domain>\", 2 > \"Type\": \"A\", > \"AliasTarget\":{ > \"HostedZoneId\": \"<hosted_zone_id>\", 3 > \"DNSName\": \"<external_ip>.\", 4 > \"EvaluateTargetHealth\": false > } > } > } > ] > }'", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize INFO Waiting up to 10m0s for the openshift-console route to be created INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 1s", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
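The create-stack and describe-stacks pattern above is repeated for the bootstrap, control plane, and worker stacks. As a convenience, the AWS CLI also ships a built-in waiter that blocks until a stack reaches CREATE_COMPLETE; the following is only a sketch, with <name> standing in for the stack names used in the documented create-stack calls.

# Sketch: wait for a stack created with "aws cloudformation create-stack" to finish,
# then print its outputs for use as parameters to the next template. <name> is a placeholder.
aws cloudformation wait stack-create-complete --stack-name <name>
aws cloudformation describe-stacks --stack-name <name> \
  --query 'Stacks[0].Outputs[].{Key:OutputKey,Value:OutputValue}' --output table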
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_aws/installing-aws-user-infra
Chapter 3. Creating and building an application using the web console
Chapter 3. Creating and building an application using the web console 3.1. Before you begin Review Accessing the web console . You must be able to access a running instance of OpenShift Container Platform. If you do not have access, contact your cluster administrator. 3.2. Logging in to the web console You can log in to the OpenShift Container Platform web console to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. You are redirected to the Projects page. For non-administrative users, the default view is the Developer perspective. For cluster administrators, the default view is the Administrator perspective. If you do not have cluster-admin privileges, you will not see the Administrator perspective in your web console. The web console provides two perspectives: the Administrator perspective and Developer perspective. The Developer perspective provides workflows specific to the developer use cases. Figure 3.1. Perspective switcher Use the perspective switcher to switch to the Developer perspective. The Topology view with options to create an application is displayed. 3.3. Creating a new project A project enables a community of users to organize and manage their content in isolation. Projects are OpenShift Container Platform extensions to Kubernetes namespaces. Projects have additional features that enable user self-provisioning. Users must receive access to projects from administrators. Cluster administrators can allow developers to create their own projects. In most cases, users automatically have access to their own projects. Each project has its own set of objects, policies, constraints, and service accounts. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure In the +Add view, select Project Create Project . In the Name field, enter user-getting-started . Optional: In the Display name field, enter Getting Started with OpenShift . Note Display name and Description fields are optional. Click Create . You have created your first project on OpenShift Container Platform. Additional resources Default cluster roles Viewing a project using the web console Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.4. Granting view permissions OpenShift Container Platform automatically creates a few special service accounts in every project. The default service account takes responsibility for running the pods. OpenShift Container Platform uses and injects this service account into every pod that launches. The following procedure creates a RoleBinding object for the default ServiceAccount object. The service account communicates with the OpenShift Container Platform API to learn about pods, services, and resources within the project. Prerequisites You are logged in to the OpenShift Container Platform web console. You have a deployed image. You are in the Administrator perspective. Procedure Navigate to User Management and then click RoleBindings . Click Create binding . Select Namespace role binding (RoleBinding) . In the Name field, enter sa-user-account . In the Namespace field, search for and select user-getting-started . 
In the Role name field, search for view and select view . In the Subject field, select ServiceAccount . In the Subject namespace field, search for and select user-getting-started . In the Subject name field, enter default . Click Create . Additional resources Understanding authentication RBAC overview 3.5. Deploying your first image The simplest way to deploy an application in OpenShift Container Platform is to run an existing container image. The following procedure deploys a front-end component of an application called national-parks-app . The web application displays an interactive map. The map displays the location of major national parks across the world. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have the appropriate roles and permissions in a project to create applications and other workloads in OpenShift Container Platform. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter the following: quay.io/openshiftroadshow/parksmap:latest Ensure that you have the current values for the following: Application: national-parks-app Name: parksmap Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=parksmap role=frontend Click Create . You are redirected to the Topology page where you can see the parksmap deployment in the national-parks-app application. Additional resources Creating applications using the Developer perspective Viewing a project using the web console Viewing the topology of your application Deleting a project using the web console 3.5.1. Examining the pod OpenShift Container Platform leverages the Kubernetes concept of a pod, which is one or more containers deployed together on one host, and the smallest compute unit that can be defined, deployed, and managed. Pods are the rough equivalent of a machine instance, physical or virtual, to a container. The Overview panel enables you to access many features of the parksmap deployment. The Details and Resources tabs enable you to scale application pods, check build status, services, and routes. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure Click the parksmap deployment in the Topology view to open the Overview panel. Figure 3.2. Parksmap deployment The Overview panel includes tabs for Details , Resources , and Observe . The Details tab might be displayed by default. Table 3.1. Overview panel tab definitions Tab Definition Details Enables you to scale your application and view pod configuration such as labels, annotations, and the status of the application. Resources Displays the resources that are associated with the deployment. Pods are the basic units of OpenShift Container Platform applications. You can see how many pods are being used, what their status is, and you can view the logs. Services that are created for your pod and assigned ports are listed under the Services heading. Routes enable external access to the pods and a URL is used to access them. Observe View various Events and Metrics information as it relates to your pod.
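If you prefer to inspect the same deployment from the command line, roughly equivalent checks can be run with oc. This is only a sketch; it assumes the user-getting-started project and the parksmap deployment created earlier in this chapter, and that the labels added in the procedure were applied to the generated resources.

# Sketch: CLI counterparts of the Details, Resources, and Observe tabs.
oc project user-getting-started                       # switch to the project
oc get deployment parksmap                            # deployment status (Details)
oc get pods,svc,routes -l component=parksmap          # associated resources (Resources)
oc logs deployment/parksmap                           # container logs
oc get events --sort-by=.metadata.creationTimestamp   # recent events (Observe)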
Additional resources Interacting with applications and components Scaling application pods and checking builds and routes Labels and annotations used for the Topology view 3.5.2. Scaling the application In Kubernetes, a Deployment object defines how an application deploys. In most cases, users use Pod , Service , ReplicaSets , and Deployment resources together. In most cases, OpenShift Container Platform creates the resources for you. When you deploy the national-parks-app image, a deployment resource is created. In this example, only one Pod is deployed. The following procedure scales the parksmap deployment to use two instances. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure In the Topology view, click the national-parks-app application. Click the Details tab. Use the up arrow to scale the pod to two instances. Figure 3.3. Scaling application Note Application scaling can happen quickly because OpenShift Container Platform is launching a new instance of an existing image. Use the down arrow to scale the pod down to one instance. Additional resources Recommended practices for scaling the cluster Understanding horizontal pod autoscalers About the Vertical Pod Autoscaler Operator 3.6. Deploying a Python application The following procedure deploys a back-end service for the parksmap application. The Python application performs 2D geo-spatial queries against a MongoDB database to locate and return map coordinates of all national parks in the world. The deployed back-end service is nationalparks . Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Import from Git to open a dialog. Enter the following URL in the Git Repo URL field: https://github.com/openshift-roadshow/nationalparks-py.git A builder image is automatically detected. Note If the detected builder image is Dockerfile, select Edit Import Strategy . Select Builder Image and then click Python . Scroll to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: nationalparks Select Deployment as the Resource . Select Create route to the application . In the Advanced Options section, click Labels and add labels to better identify this deployment later. Labels help identify and filter components in the web console and in the command line. Add the following labels: app=national-parks-app component=nationalparks role=backend type=parksmap-backend Click Create . From the Topology view, select the nationalparks application. Note Click the Resources tab. In the Builds section, you can see your build running. Additional resources Adding services to your application Importing a codebase from Git to create an application Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7. Connecting to a database Deploy and connect a MongoDB database where the national-parks-app application stores location information. After you mark the national-parks-app application as a backend for the map visualization tool, the parksmap deployment uses the OpenShift Container Platform discovery mechanism to display the map automatically. Prerequisites You are logged in to the OpenShift Container Platform web console.
You are in the Developer perspective. You have a deployed image. Procedure From the +Add view in the Developer perspective, click Container images to open a dialog. In the Image Name field, enter quay.io/centos7/mongodb-36-centos7 . In the Runtime icon field, search for mongodb . Scroll down to the General section. Ensure that you have the current values for the following: Application: national-parks-app Name: mongodb-nationalparks Select Deployment as the Resource . Unselect the Create route to the application checkbox. In the Advanced Options section, click Deployment and add the following environment variables: Table 3.2. Environment variable names and values Name Value MONGODB_USER mongodb MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Additional resources Adding services to your application Viewing a project using the web console Viewing the topology of your application Providing access permissions to your project using the Developer perspective Deleting a project using the web console 3.7.1. Creating a secret The Secret object provides a mechanism to hold sensitive information such as passwords, OpenShift Container Platform client configuration files, private source repository credentials, and so on. Secrets decouple sensitive content from the pods. You can mount secrets into containers using a volume plugin, or the system can use secrets to perform actions on behalf of a pod. The following procedure adds the secret nationalparks-mongodb-parameters and mounts it to the nationalparks workload. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Developer perspective, click Secrets in the left-hand navigation. Click Create Key/value secret . In the Secret name field, enter nationalparks-mongodb-parameters . Enter the following values for Key and Value : Table 3.3. Secret keys and values Key Value MONGODB_USER mongodb DATABASE_SERVICE_NAME mongodb-nationalparks MONGODB_PASSWORD mongodb MONGODB_DATABASE mongodb MONGODB_ADMIN_PASSWORD mongodb Click Create . Click Add Secret to workload . From the drop-down menu, select nationalparks as the workload to add. Click Save . This configuration change triggers a new rollout of the nationalparks deployment with the environment variables properly injected. Additional resources Understanding secrets 3.7.2. Loading data and displaying the national parks map You deployed the parksmap and nationalparks applications and then deployed the mongodb-nationalparks database. However, no data has been loaded into the database. Before loading the data, add the proper labels to the mongodb-nationalparks and nationalparks deployments. Prerequisites You are logged in to the OpenShift Container Platform web console. You are in the Developer perspective. You have a deployed image. Procedure From the Topology view, navigate to the nationalparks deployment, click Resources , and retrieve your route information. Copy and paste the URL into your web browser and add the following at the end of the URL: /ws/data/load Example output Items inserted in database: 2893 From the Topology view, navigate to the parksmap deployment, click Resources , and retrieve your route information. Copy and paste the URL into your web browser to view your national parks across the world map. Figure 3.4.
National parks across the world Additional resources Providing access permissions to your project using the Developer perspective Labels and annotations used for the Topology view
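The secret and workload binding from Section 3.7.1 can also be created from the command line. The following is a sketch, not part of the documented procedure; the secret name, keys, and deployment name follow that section.

# Sketch: CLI equivalent of Section 3.7.1.
oc create secret generic nationalparks-mongodb-parameters \
  --from-literal=MONGODB_USER=mongodb \
  --from-literal=DATABASE_SERVICE_NAME=mongodb-nationalparks \
  --from-literal=MONGODB_PASSWORD=mongodb \
  --from-literal=MONGODB_DATABASE=mongodb \
  --from-literal=MONGODB_ADMIN_PASSWORD=mongodb

# Inject every key of the secret as environment variables on the deployment;
# as in the web console flow, this triggers a new rollout of nationalparks.
oc set env deployment/nationalparks --from=secret/nationalparks-mongodb-parameters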
[ "/ws/data/load", "Items inserted in database: 2893" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/getting_started/openshift-web-console
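Similarly, the route URLs used in Section 3.7.2 can be retrieved with oc instead of the web console. This sketch assumes routes named after the deployments and an insecure (HTTP) route; adjust the scheme if the routes use TLS.

# Sketch: retrieve the route hosts and trigger the data load from the CLI
# (route names and the http:// scheme are assumptions based on Section 3.7.2).
NATIONALPARKS_HOST=$(oc get route nationalparks -o jsonpath='{.spec.host}')
PARKSMAP_HOST=$(oc get route parksmap -o jsonpath='{.spec.host}')

curl "http://${NATIONALPARKS_HOST}/ws/data/load"   # expected: Items inserted in database: 2893
echo "Open the map at: http://${PARKSMAP_HOST}"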
Chapter 4. Support for FIPS cryptography
Chapter 4. Support for FIPS cryptography You can install an OpenShift Container Platform cluster in FIPS mode. OpenShift Container Platform is designed for FIPS. When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. For more information about the NIST validation program, see Cryptographic Module Validation Program . For the latest NIST status for the individual versions of RHEL cryptographic libraries that have been submitted for validation, see Compliance Activities and Government Standards . Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL 8 computer that is configured to operate in FIPS mode. Running RHEL 9 with FIPS mode enabled to install an OpenShift Container Platform cluster is not possible. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . For the Red Hat Enterprise Linux CoreOS (RHCOS) machines in your cluster, this change is applied when the machines are deployed based on the status of an option in the install-config.yaml file, which governs the cluster options that a user can change during cluster deployment. With Red Hat Enterprise Linux (RHEL) machines, you must enable FIPS mode when you install the operating system on the machines that you plan to use as worker machines. Because FIPS must be enabled before the operating system that your cluster uses boots for the first time, you cannot enable FIPS after you deploy a cluster. 4.1. FIPS validation in OpenShift Container Platform OpenShift Container Platform uses certain FIPS validated or Modules In Process modules within RHEL and RHCOS for the operating system components that it uses. See RHEL8 core crypto components . For example, when users use SSH to connect to OpenShift Container Platform clusters and containers, those connections are properly encrypted. OpenShift Container Platform components are written in Go and built with Red Hat's golang compiler. When you enable FIPS mode for your cluster, all OpenShift Container Platform components that require cryptographic signing call RHEL and RHCOS cryptographic libraries. Table 4.1. FIPS mode attributes and limitations in OpenShift Container Platform 4.14 Attributes Limitations FIPS support in RHEL 8 and RHCOS operating systems. The FIPS implementation does not offer a single function that both computes hash functions and validates the keys that are based on that hash. This limitation will continue to be evaluated and improved in future OpenShift Container Platform releases. FIPS support in CRI-O runtimes. FIPS support in OpenShift Container Platform services. FIPS validated or Modules In Process cryptographic module and algorithms that are obtained from RHEL 8 and RHCOS binaries and images. Use of FIPS compatible golang compiler. TLS FIPS support is not complete but is planned for future OpenShift Container Platform releases. FIPS support across multiple architectures. FIPS is currently only supported on OpenShift Container Platform deployments using x86_64 , ppc64le , and s390x architectures. 4.2. 
FIPS support in components that the cluster uses Although the OpenShift Container Platform cluster itself uses FIPS validated or Modules In Process modules, ensure that the systems that support your OpenShift Container Platform cluster use FIPS validated or Modules In Process modules for cryptography. 4.2.1. etcd To ensure that the secrets that are stored in etcd use FIPS validated or Modules In Process encryption, boot the node in FIPS mode. After you install the cluster in FIPS mode, you can encrypt the etcd data by using the FIPS-approved aes cbc cryptographic algorithm. 4.2.2. Storage For local storage, use RHEL-provided disk encryption or Container Native Storage that uses RHEL-provided disk encryption. By storing all data in volumes that use RHEL-provided disk encryption and enabling FIPS mode for your cluster, both data at rest and data in motion, or network data, are protected by FIPS validated or Modules In Process encryption. You can configure your cluster to encrypt the root filesystem of each node, as described in Customizing nodes . 4.2.3. Runtimes To ensure that containers know that they are running on a host that is using FIPS validated or Modules In Process cryptography modules, use CRI-O to manage your runtimes. 4.3. Installing a cluster in FIPS mode To install a cluster in FIPS mode, follow the instructions to install a customized cluster on your preferred infrastructure. Ensure that you set fips: true in the install-config.yaml file before you deploy your cluster. Important To enable FIPS mode for your cluster, you must run the installation program from a RHEL computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . Amazon Web Services Alibaba Cloud Microsoft Azure Bare metal Google Cloud Platform IBM Cloud(R) IBM Power(R) IBM Z(R) and IBM(R) LinuxONE IBM Z(R) and IBM(R) LinuxONE with RHEL KVM Red Hat OpenStack Platform (RHOSP) VMware vSphere Note If you are using Azure File storage, you cannot enable FIPS mode. To apply AES CBC encryption to your etcd data store, follow the Encrypting etcd data process after you install your cluster. If you add RHEL nodes to your cluster, ensure that you enable FIPS mode on the machines before their initial boot. See Adding RHEL compute machines to an OpenShift Container Platform cluster and Enabling FIPS Mode in the RHEL 8 documentation.
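As noted above, fips: true must be present in the install-config.yaml file before the cluster is deployed. The commands below are only an illustrative sketch of one way to add and verify the setting in a previously generated install-config.yaml; they are not a substitute for reviewing the file.

# Sketch: add fips: true to install-config.yaml if it is not already set, then
# confirm it before creating the cluster. FIPS mode cannot be enabled after deployment.
grep -q '^fips:' install-config.yaml || cat <<EOF >> install-config.yaml
fips: true
EOF
grep '^fips:' install-config.yaml   # expected output: fips: true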
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installation_overview/installing-fips
Chapter 7. Installing and Configuring Certificate System
Chapter 7. Installing and Configuring Certificate System Red Hat Certificate System provides different subsystems that can be installed individually. For example, you can install multiple subsystem instances on a single server or you can run them independently on different hosts. This enables you to adapt the installation to your environment to provide a higher availability, scalability, and fail-over support. This chapter describes the package installation and how to set up the individual subsystems. The Certificate System includes the following subsystems: Certificate Authority (CA) Key Recovery Authority (KRA) Online Certificate Status Protocol (OCSP) Responder Token Key Service (TKS) Token Processing System (TPS) Each subsystem is installed and configured individually as a standalone Tomcat web server instance. However, Red Hat Certificate System additionally supports running a single shared Tomcat web server instance that can contain up to one of each subsystem. 7.1. Subsystem Configuration Order The order in which the individual subsystems are set up is important because of relationships between the different subsystems: At least one CA running as a security domain is required before any of the other public key infrastructure (PKI) subsystems can be installed. Install the OCSP after the CA has been configured. The KRA, and TKS subsystems can be installed in any order, after the CA and OCSP have been configured. The TPS subsystem depends on the CA and TKS, and optionally on the KRA and OCSP subsystem. Note In certain situations, administrators want to install a standalone KRA or OCSP which do not require a CA running as a security domain. For details, see Section 7.9, "Setting up a Standalone KRA or OCSP" .
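Expressed as commands, the configuration order above typically maps to one pkispawn run per subsystem. The following is only a sketch; the deployment configuration files (ca.cfg, ocsp.cfg, and so on) are placeholders for files you prepare as described later in this guide.

# Sketch of the subsystem order as pkispawn invocations (configuration file
# names are placeholders; see the subsystem-specific sections for real options).
pkispawn -s CA   -f ca.cfg     # 1. CA first; it provides the security domain
pkispawn -s OCSP -f ocsp.cfg   # 2. OCSP after the CA is configured
pkispawn -s KRA  -f kra.cfg    # 3. KRA and TKS in either order
pkispawn -s TKS  -f tks.cfg
pkispawn -s TPS  -f tps.cfg    # 4. TPS last; it depends on the CA and TKS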
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/installation_and_configuration
Appendix F. Pools, placement groups, and CRUSH configuration options
Appendix F. Pools, placement groups, and CRUSH configuration options The Ceph options that govern pools, placement groups, and the CRUSH algorithm. mon_allow_pool_delete Description Allows a monitor to delete a pool. In RHCS 3 and later releases, the monitor cannot delete the pool by default as an added measure to protect data. Type Boolean Default false mon_max_pool_pg_num Description The maximum number of placement groups per pool. Type Integer Default 65536 mon_pg_create_interval Description Number of seconds between PG creation in the same Ceph OSD Daemon. Type Float Default 30.0 mon_pg_stuck_threshold Description Number of seconds after which PGs can be considered as being stuck. Type 32-bit Integer Default 300 mon_pg_min_inactive Description Ceph issues a HEALTH_ERR status in the cluster log if the number of PGs that remain inactive longer than the mon_pg_stuck_threshold exceeds this setting. The default setting is one PG. A non-positive number disables this setting. Type Integer Default 1 mon_pg_warn_min_per_osd Description Ceph issues a HEALTH_WARN status in the cluster log if the average number of PGs per OSD in the cluster is less than this setting. A non-positive number disables this setting. Type Integer Default 30 mon_pg_warn_max_per_osd Description Ceph issues a HEALTH_WARN status in the cluster log if the average number of PGs per OSD in the cluster is greater than this setting. A non-positive number disables this setting. Type Integer Default 300 mon_pg_warn_min_objects Description Do not warn if the total number of objects in the cluster is below this number. Type Integer Default 1000 mon_pg_warn_min_pool_objects Description Do not warn on pools whose object number is below this number. Type Integer Default 1000 mon_pg_check_down_all_threshold Description The threshold of down OSDs by percentage after which Ceph checks all PGs to ensure they are not stuck or stale. Type Float Default 0.5 mon_pg_warn_max_object_skew Description Ceph issue a HEALTH_WARN status in the cluster log if the average number of objects in a pool is greater than mon pg warn max object skew times the average number of objects for all pools. A non-positive number disables this setting. Type Float Default 10 mon_delta_reset_interval Description The number of seconds of inactivity before Ceph resets the PG delta to zero. Ceph keeps track of the delta of the used space for each pool to aid administrators in evaluating the progress of recovery and performance. Type Integer Default 10 mon_osd_max_op_age Description The maximum age in seconds for an operation to complete before issuing a HEALTH_WARN status. Type Float Default 32.0 osd_pg_bits Description Placement group bits per Ceph OSD Daemon. Type 32-bit Integer Default 6 osd_pgp_bits Description The number of bits per Ceph OSD Daemon for Placement Groups for Placement purpose (PGPs). Type 32-bit Integer Default 6 osd_crush_chooseleaf_type Description The bucket type to use for chooseleaf in a CRUSH rule. Uses ordinal rank rather than name. Type 32-bit Integer Default 1 . Typically a host containing one or more Ceph OSD Daemons. osd_pool_default_crush_replicated_ruleset Description The default CRUSH ruleset to use when creating a replicated pool. Type 8-bit Integer Default 0 osd_pool_erasure_code_stripe_unit Description Sets the default size, in bytes, of a chunk of an object stripe for erasure coded pools. Every object of size S will be stored as N stripes, with each data chunk receiving stripe unit bytes. 
Each stripe of N * stripe unit bytes will be encoded/decoded individually. This option can be overridden by the stripe_unit setting in an erasure code profile. Type Unsigned 32-bit Integer Default 4096 osd_pool_default_size Description Sets the number of replicas for objects in the pool. The default value is the same as ceph osd pool set {pool-name} size {size} . Type 32-bit Integer Default 3 osd_pool_default_min_size Description Sets the minimum number of written replicas for objects in the pool in order to acknowledge a write operation to the client. If the minimum is not met, Ceph will not acknowledge the write to the client. This setting ensures a minimum number of replicas when operating in degraded mode. Type 32-bit Integer Default 0 , which means no particular minimum. If 0 , minimum is size - (size / 2) . osd_pool_default_pg_num Description The default number of placement groups for a pool. The default value is the same as pg_num with mkpool . Type 32-bit Integer Default 32 osd_pool_default_pgp_num Description The default number of placement groups for placement for a pool. The default value is the same as pgp_num with mkpool . PG and PGP should be equal. Type 32-bit Integer Default 0 osd_pool_default_flags Description The default flags for new pools. Type 32-bit Integer Default 0 osd_max_pgls Description The maximum number of placement groups to list. A client requesting a large number can tie up the Ceph OSD Daemon. Type Unsigned 64-bit Integer Default 1024 Note Default should be fine. osd_min_pg_log_entries Description The minimum number of placement group logs to maintain when trimming log files. Type 32-bit Int Unsigned Default 250 osd_default_data_pool_replay_window Description The time, in seconds, for an OSD to wait for a client to replay a request. Type 32-bit Integer Default 45
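As an illustration of how such options are commonly applied on a running cluster, the following sketch uses the ceph config set command. The values are examples only, not sizing recommendations, and most of these options can also be set in the [global], [mon], or [osd] sections of the Ceph configuration file:
ceph config set mon mon_allow_pool_delete true      # allow monitors to delete pools
ceph config set global osd_pool_default_size 3      # three replicas for new replicated pools
ceph config set global osd_pool_default_min_size 2  # acknowledge writes once two replicas exist
ceph config set global osd_pool_default_pg_num 128  # default pg_num for new pools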
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/configuration_guide/pools-placement-groups-and-crush-configuration-options_conf
Chapter 3. Deploying OpenShift sandboxed containers workloads
Chapter 3. Deploying OpenShift sandboxed containers workloads You can install the OpenShift sandboxed containers Operator using either the web console or OpenShift CLI ( oc ). Before installing the OpenShift sandboxed containers Operator, you must prepare your OpenShift Container Platform cluster. 3.1. Prerequisites Before you install OpenShift sandboxed containers, ensure that your OpenShift Container Platform cluster meets the following requirements: Your cluster must be installed on on-premise bare-metal infrastructure with Red Hat Enterprise Linux CoreOS (RHCOS) workers. You can use any installation method including user-provisioned, installer-provisioned, or assisted installer to deploy your cluster. Note OpenShift sandboxed containers only supports RHCOS worker nodes. RHEL nodes are not supported. Nested virtualization is not supported. You can install OpenShift sandboxed containers on Amazon Web Services (AWS) bare-metal instances. Bare-metal instances offered by other cloud providers are not supported. Important Installing OpenShift sandboxed containers on AWS bare-metal instances is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.1.1. Resource requirements for OpenShift sandboxed containers OpenShift sandboxed containers lets users run workloads on their OpenShift Container Platform clusters inside a sandboxed runtime (Kata). Each pod is represented by a virtual machine (VM). Each VM runs in a QEMU process and hosts a kata-agent process that acts as a supervisor for managing container workloads, and the processes running in those containers. Two additional processes add more overhead: containerd-shim-kata-v2 is used to communicate with the pod. virtiofsd handles host file system access on behalf of the guest. Each VM is configured with a default amount of memory. Additional memory is hot-plugged into the VM for containers that explicitly request memory. A container running without a memory resource consumes free memory until the total memory used by the VM reaches the default allocation. The guest and its I/O buffers also consume memory. If a container is given a specific amount of memory, then that memory is hot-plugged into the VM before the container starts. When a memory limit is specified, the workload is terminated if it consumes more memory than the limit. If no memory limit is specified, the kernel running on the VM might run out of memory. If the kernel runs out of memory, it might terminate other processes on the VM. Default memory sizes The following table lists some the default values for resource allocation. Resource Value Memory allocated by default to a virtual machine 2Gi Guest Linux kernel memory usage at boot ~110Mi Memory used by the QEMU process (excluding VM memory) ~30Mi Memory used by the virtiofsd process (excluding VM I/O buffers) ~10Mi Memory used by the containerd-shim-kata-v2 process ~20Mi File buffer cache data after running dnf install on Fedora ~300Mi* [1] File buffers appear and are accounted for in multiple locations: In the guest where it appears as file buffer cache. 
In the virtiofsd daemon that maps allowed user-space file I/O operations. In the QEMU process as guest memory. Note Total memory usage is properly accounted for by the memory utilization metrics, which only count that memory once. Pod overhead describes the amount of system resources that a pod on a node uses. You can get the current pod overhead for the Kata runtime by using oc describe runtimeclass kata as shown below. Example USD oc describe runtimeclass kata Example output kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata overhead: podFixed: memory: "500Mi" cpu: "500m" You can change the pod overhead by changing the spec.overhead field for a RuntimeClass . For example, if the configuration that you run for your containers consumes more than 350Mi of memory for the QEMU process and guest kernel data, you can alter the RuntimeClass overhead to suit your needs. Note The specified default overhead values are supported by Red Hat. Changing default overhead values is not supported and can result in technical issues. When performing any kind of file system I/O in the guest, file buffers are allocated in the guest kernel. The file buffers are also mapped in the QEMU process on the host, as well as in the virtiofsd process. For example, if you use 300Mi of file buffer cache in the guest, both QEMU and virtiofsd appear to use 300Mi additional memory. However, the same memory is being used in all three cases. In other words, the total memory usage is only 300Mi, mapped in three different places. This is correctly accounted for when reporting the memory utilization metrics. Additional resources Installing a user-provisioned cluster on bare metal 3.1.2. Checking whether cluster nodes are eligible to run OpenShift sandboxed containers Before running OpenShift sandboxed containers, you can check whether the nodes in your cluster are eligible to run Kata containers. Some cluster nodes might not comply with sandboxed containers' minimum requirements. The most common reason for node ineligibility is the lack of virtualization support on the node. If you attempt to run sandboxed workloads on ineligible nodes, you will experience errors. You can use the Node Feature Discovery (NFD) Operator and a NodeFeatureDiscovery resource to automatically check node eligibility. Note If you want to install the Kata runtime on only selected worker nodes that you know are eligible, apply the feature.node.kubernetes.io/runtime.kata=true label to the selected nodes and set checkNodeEligibility: true in the KataConfig resource. Alternatively, to install the Kata runtime on all worker nodes, set checkNodeEligibility: false in the KataConfig resource. In both these scenarios, you do not need to create the NodeFeatureDiscovery resource. You should only apply the feature.node.kubernetes.io/runtime.kata=true label manually if you are sure that the node is eligible to run Kata containers. The following procedure applies the feature.node.kubernetes.io/runtime.kata=true label to all eligible nodes and configures the KataConfig resource to check for node eligibility. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the Node Feature Discovery (NFD) Operator. 
Procedure Create a NodeFeatureDiscovery resource to detect node capabilities suitable for running Kata containers: Save the following YAML in the nfd.yaml file: apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-kata namespace: openshift-nfd spec: operand: image: quay.io/openshift/origin-node-feature-discovery:4.10 imagePullPolicy: Always servicePort: 12000 workerConfig: configData: | sources: custom: - name: "feature.node.kubernetes.io/runtime.kata" matchOn: - cpuId: ["SSE4", "VMX"] loadedKMod: ["kvm", "kvm_intel"] - cpuId: ["SSE4", "SVM"] loadedKMod: ["kvm", "kvm_amd"] Create the NodeFeatureDiscovery custom resource (CR): USD oc create -f nfd.yaml Example output nodefeaturediscovery.nfd.openshift.io/nfd-kata created A feature.node.kubernetes.io/runtime.kata=true label is applied to all qualifying worker nodes. Set the checkNodeEligibility field to true in the KataConfig resource to enable the feature, for example: Save the following YAML in the kata-config.yaml file: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: checkNodeEligibility: true Create the KataConfig CR: USD oc create -f kata-config.yaml Example output kataconfig.kataconfiguration.openshift.io/example-kataconfig created Verification Verify that qualifying nodes in the cluster have the correct label applied: USD oc get nodes --selector='feature.node.kubernetes.io/runtime.kata=true' Example output NAME STATUS ROLES AGE VERSION compute-3.example.com Ready worker 4h38m v1.23.3+e419edf compute-2.example.com Ready worker 4h35m v1.23.3+e419edf Additional resources For more information about installing the Node Feature Discovery (NFD) Operator, see Installing NFD . 3.2. Deploying OpenShift sandboxed containers workloads using the web console You can deploy OpenShift sandboxed containers workloads from the web console. First, you must install the OpenShift sandboxed containers Operator, then create the KataConfig custom resource (CR). Once you are ready to deploy a workload in a sandboxed container, you must manually add kata as the runtimeClassName to the workload YAML file. 3.2.1. Installing the OpenShift sandboxed containers Operator using the web console You can install the OpenShift sandboxed containers Operator from the OpenShift Container Platform web console. Prerequisites You have OpenShift Container Platform 4.10 installed. You have access to the cluster as a user with the cluster-admin role. Procedure From the Administrator perspective in the web console, navigate to Operators OperatorHub . In the Filter by keyword field, type OpenShift sandboxed containers . Select the OpenShift sandboxed containers tile. Read the information about the Operator and click Install . On the Install Operator page: Select stable-1.2 from the list of available Update Channel options. Verify that Operator recommended Namespace is selected for Installed Namespace . This installs the Operator in the mandatory openshift-sandboxed-containers-operator namespace. If this namespace does not yet exist, it is automatically created. Note Attempting to install the OpenShift sandboxed containers Operator in a namespace other than openshift-sandboxed-containers-operator causes the installation to fail. Verify that Automatic is selected for Approval Strategy . Automatic is the default value, and enables automatic updates to OpenShift sandboxed containers when a new z-stream release is available. Click Install . 
The OpenShift sandboxed containers Operator is now installed on your cluster. Verification From the Administrator perspective in the web console, navigate to Operators Installed Operators . Verify that the OpenShift sandboxed containers Operator is listed in the Operators list. 3.2.2. Creating the KataConfig custom resource in the web console You must create one KataConfig custom resource (CR) to enable installing kata as a RuntimeClass on your cluster nodes. Important Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. Factors that impede reboot time are as follows: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard drive rather than on an SSD. Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU or network. Prerequisites You have installed OpenShift Container Platform 4.10 on your cluster. You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift sandboxed containers Operator. Note Kata is installed on all worker nodes by default. If you want to install kata as a RuntimeClass only on specific nodes, you can add labels to those nodes, then define the label in the KataConfig CR when you create it. Procedure From the Administrator perspective in the web console, navigate to Operators Installed Operators . Select the OpenShift sandboxed containers Operator from the list of operators. In the KataConfig tab, click Create KataConfig . In the Create KataConfig page, select to configure the KataConfig CR via YAML view . Copy and paste the following manifest into the YAML view : apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataMonitorImage: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.2.0 If you want to install kata as a RuntimeClass only on selected nodes, include the label in the manifest: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataMonitorImage: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.2.0 kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1 1 Labels in kataConfigPoolSelector only support single values; nodeSelector syntax is not supported. Click Create . The new KataConfig CR is created and begins to install kata as a RuntimeClass on the worker nodes. Wait for the kata installation to complete and the worker nodes to reboot before continuing to the next step. Important OpenShift sandboxed containers installs Kata only as a secondary, optional runtime on the cluster and not as the primary runtime. Verification In the KataConfig tab, select the new KataConfig CR. In the KataConfig page, select the YAML tab. Monitor the installationStatus field in the status. A message appears each time there is an update. Click Reload to view the updated KataConfig CR. Once the value of Completed nodes equals the number of worker or labeled nodes, the installation is complete. The status also contains a list of nodes where the installation is completed. 3.2.3. Deploying a workload in a sandboxed container using the web console OpenShift sandboxed containers installs Kata as a secondary, optional runtime on your cluster, and not as the primary runtime.
To deploy a pod-templated workload in a sandboxed container, you must manually add kata as the runtimeClassName to the workload YAML file. Prerequisites You have installed OpenShift Container Platform 4.10 on your cluster. You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift sandboxed containers Operator. You have created a KataConfig custom resource (CR). Procedure From the Administrator perspective in the web console, expand Workloads and select the type of workload you want to create. In the workload page, click to create the workload. In the YAML file for the workload, in the spec field where the container is listed, add runtimeClassName: kata . Example for Pod object apiVersion: v1 kind: Pod metadata: name: hello-openshift labels: app: hello-openshift spec: runtimeClassName: kata containers: - name: hello-openshift image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8888 securityContext: privileged: false allowPrivilegeEscalation: false runAsNonRoot: true runAsUser: 1001 capabilities: drop: - ALL seccompProfile: type: RuntimeDefault Click Save . OpenShift Container Platform creates the workload and begins scheduling it. 3.3. Deploying OpenShift sandboxed containers workloads using the CLI You can deploy OpenShift sandboxed containers workloads using the CLI. First, you must install the OpenShift sandboxed containers Operator, then create the KataConfig custom resource. Once you are ready to deploy a workload in a sandboxed container, you must add kata as the runtimeClassName to the workload YAML file. 3.3.1. Installing the OpenShift sandboxed containers Operator using the CLI You can install the OpenShift sandboxed containers Operator using the OpenShift Container Platform CLI. Prerequisites You have OpenShift Container Platform 4.10 installed on your cluster. You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have subscribed to the OpenShift sandboxed containers catalog. Note Subscribing to the OpenShift sandboxed containers catalog provides openshift-sandboxed-containers-operator namespace access to the OpenShift sandboxed containers Operator. Procedure Create the Namespace object for the OpenShift sandboxed containers Operator. Create a Namespace object YAML file that contains the following manifest: apiVersion: v1 kind: Namespace metadata: name: openshift-sandboxed-containers-operator Create the Namespace object: USD oc create -f Namespace.yaml Create the OperatorGroup object for the OpenShift sandboxed containers Operator. Create an OperatorGroup object YAML file that contains the following manifest: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: targetNamespaces: - openshift-sandboxed-containers-operator Create the OperatorGroup object: USD oc create -f OperatorGroup.yaml Create the Subscription object to subscribe the Namespace to the OpenShift sandboxed containers Operator. 
Create a Subscription object YAML file that contains the following manifest: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: channel: "stable-1.2" installPlanApproval: Automatic name: sandboxed-containers-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: sandboxed-containers-operator.v1.2.2 Create the Subscription object: USD oc create -f Subscription.yaml The OpenShift sandboxed containers Operator is now installed on your cluster. Note All the object file names listed above are suggestions. You can create the object YAML files using other names. Verification Ensure that the Operator is correctly installed: USD oc get csv -n openshift-sandboxed-containers-operator Example output NAME DISPLAY VERSION REPLACES PHASE openshift-sandboxed-containers openshift-sandboxed-containers-operator 1.2.2 1.2.1 Succeeded Additional resources Installing from OperatorHub using the CLI 3.3.2. Creating the KataConfig custom resource using the CLI You must create one KataConfig custom resource (CR) to install kata as a RuntimeClass on your nodes. Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following: Install the needed RHCOS extensions, such as QEMU and kata-containers , on your RHCOS node. Ensure that the CRI-O runtime is configured with the correct kata runtime handlers. Create a RuntimeClass CR named kata with a default configuration. This enables users to configure workloads to use kata as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime. Note Kata is installed on all worker nodes by default. If you want to install kata as a RuntimeClass only on specific nodes, you can add labels to those nodes, then define the label in the KataConfig CR when you create it. Prerequisites You have installed OpenShift Container Platform 4.10 on your cluster. You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift sandboxed containers Operator. Important Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. Factors that impede reboot time are as follows: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard drive rather than on an SSD. Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU or network. Procedure Create a YAML file with the following manifest: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataMonitorImage: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.2.0 (Optional) If you want to install kata as a RuntimeClass only on selected nodes, create a YAML file that includes the label in the manifest: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataMonitorImage: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.2.0 kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1 1 Labels in kataConfigPoolSelector only support single values; nodeSelector syntax is not supported. 
Create the KataConfig resource: USD oc create -f <file name>.yaml The new KataConfig CR is created and begins to install kata as a RuntimeClass on the worker nodes. Wait for the kata installation to complete and the worker nodes to reboot before continuing to the next step. Important OpenShift sandboxed containers installs Kata only as a secondary, optional runtime on the cluster and not as the primary runtime. Verification Monitor the installation progress: USD watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p" Once the value of Is In Progress appears as false , the installation is complete. Additional resources Understanding how to update labels on nodes 3.3.3. Deploying a workload in a sandboxed container using the CLI OpenShift sandboxed containers installs Kata as a secondary, optional runtime on your cluster, and not as the primary runtime. To deploy a pod-templated workload in a sandboxed container, you must add kata as the runtimeClassName to the workload YAML file. Prerequisites You have installed OpenShift Container Platform 4.10 on your cluster. You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift sandboxed containers Operator. You have created a KataConfig custom resource (CR). Procedure Add runtimeClassName: kata to any pod-templated object: Pod objects ReplicaSet objects ReplicationController objects StatefulSet objects Deployment objects DeploymentConfig objects Example for Pod objects apiVersion: v1 kind: Pod metadata: name: hello-openshift labels: app: hello-openshift spec: runtimeClassName: kata containers: - name: hello-openshift image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8888 securityContext: privileged: false allowPrivilegeEscalation: false runAsNonRoot: true runAsUser: 1001 capabilities: drop: - ALL seccompProfile: type: RuntimeDefault OpenShift Container Platform creates the workload and begins scheduling it. Verification Inspect the runtimeClassName field on a pod-templated object. If the runtimeClassName is kata , then the workload is running on OpenShift sandboxed containers. 3.4. Additional resources The OpenShift sandboxed containers Operator is supported in a restricted network environment. For more information, see Using Operator Lifecycle Manager on restricted networks . When using a disconnected cluster on a restricted network, you must configure proxy support in Operator Lifecycle Manager to access the OperatorHub. Using a proxy allows the cluster to fetch the OpenShift sandboxed containers Operator.
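As a complement to the Pod example in Section 3.3.3, the runtimeClassName field is set in the pod template of the other pod-templated objects in the same way. The following is a minimal sketch of a Deployment that runs its pods in sandboxed containers; the object name, label, and replica count are illustrative only:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-openshift-kata
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-openshift-kata
  template:
    metadata:
      labels:
        app: hello-openshift-kata
    spec:
      runtimeClassName: kata            # each replica pod runs in a sandboxed container
      containers:
      - name: hello-openshift
        image: quay.io/openshift/origin-hello-openshift
        ports:
        - containerPort: 8888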
[ "oc describe runtimeclass kata", "kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: kata overhead: podFixed: memory: \"500Mi\" cpu: \"500m\"", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-kata namespace: openshift-nfd spec: operand: image: quay.io/openshift/origin-node-feature-discovery:4.10 imagePullPolicy: Always servicePort: 12000 workerConfig: configData: | sources: custom: - name: \"feature.node.kubernetes.io/runtime.kata\" matchOn: - cpuId: [\"SSE4\", \"VMX\"] loadedKMod: [\"kvm\", \"kvm_intel\"] - cpuId: [\"SSE4\", \"SVM\"] loadedKMod: [\"kvm\", \"kvm_amd\"]", "oc create -f nfd.yaml", "nodefeaturediscovery.nfd.openshift.io/nfd-kata created", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: checkNodeEligibility: true", "oc create -f kata-config.yaml", "kataconfig.kataconfiguration.openshift.io/example-kataconfig created", "oc get nodes --selector='feature.node.kubernetes.io/runtime.kata=true'", "NAME STATUS ROLES AGE VERSION compute-3.example.com Ready worker 4h38m v1.23.3+e419edf compute-2.example.com Ready worker 4h35m v1.23.3+e419edf", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataMonitorImage: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.2.0", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataMonitorImage: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.2.0 kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1", "apiVersion: v1 kind: Pod metadata: name: hello-openshift labels: app: hello-openshift spec: runtimeClassName: kata containers: - name: hello-openshift image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8888 securityContext: privileged: false allowPrivilegeEscalation: false runAsNonRoot: true runAsUser: 1001 capabilities: drop: - ALL seccompProfile: type: RuntimeDefault", "apiVersion: v1 kind: Namespace metadata: name: openshift-sandboxed-containers-operator", "oc create -f Namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: targetNamespaces: - openshift-sandboxed-containers-operator", "oc create -f OperatorGroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-sandboxed-containers-operator namespace: openshift-sandboxed-containers-operator spec: channel: \"stable-1.2\" installPlanApproval: Automatic name: sandboxed-containers-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: sandboxed-containers-operator.v1.2.2", "oc create -f Subscription.yaml", "oc get csv -n openshift-sandboxed-containers-operator", "NAME DISPLAY VERSION REPLACES PHASE openshift-sandboxed-containers openshift-sandboxed-containers-operator 1.2.2 1.2.1 Succeeded", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataMonitorImage: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.2.0", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: cluster-kataconfig spec: kataMonitorImage: registry.redhat.io/openshift-sandboxed-containers/osc-monitor-rhel8:1.2.0 kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1", "oc create -f <file name>.yaml", "watch \"oc describe kataconfig | 
sed -n /^Status:/,/^Events/p\"", "apiVersion: v1 kind: Pod metadata: name: hello-openshift labels: app: hello-openshift spec: runtimeClassName: kata containers: - name: hello-openshift image: quay.io/openshift/origin-hello-openshift ports: - containerPort: 8888 securityContext: privileged: false allowPrivilegeEscalation: false runAsNonRoot: true runAsUser: 1001 capabilities: drop: - ALL seccompProfile: type: RuntimeDefault" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/sandboxed_containers_support_for_openshift/deploying-sandboxed-containers-workloads
Chapter 16. Configuring OpenShift connection timeout
Chapter 16. Configuring OpenShift connection timeout By default, the OpenShift route is configured to time out HTTP requests that are longer than 30 seconds. This may cause session timeout issues in Business Central resulting in the following behaviors: "Unable to complete your request. The following exception occurred: (TypeError) : Cannot read property 'indexOf' of null." "Unable to complete your request. The following exception occurred: (TypeError) : b is null." A blank page is displayed when clicking the Project or Server links in Business Central. All Business Central templates already include extended timeout configuration. To configure longer timeout on Business Central OpenShift routes, add the haproxy.router.openshift.io/timeout: 60s annotation on the target route: - kind: Route apiVersion: v1 id: "USDAPPLICATION_NAME-rhpamcentr-http" metadata: name: "USDAPPLICATION_NAME-rhpamcentr" labels: application: "USDAPPLICATION_NAME" annotations: description: Route for Business Central's http service. haproxy.router.openshift.io/timeout: 60s spec: host: "USDBUSINESS_CENTRAL_HOSTNAME_HTTP" to: name: "USDAPPLICATION_NAME-rhpamcentr" For a full list of global route-specific timeout annotations, see the OpenShift Documentation .
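If the route already exists, for example because it was created from a template that did not include the annotation, the same timeout can be applied to the live route object. This is a sketch and the route name is a placeholder:
oc annotate route <application_name>-rhpamcentr \
    haproxy.router.openshift.io/timeout=60s --overwrite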
[ "- kind: Route apiVersion: v1 id: \"USDAPPLICATION_NAME-rhpamcentr-http\" metadata: name: \"USDAPPLICATION_NAME-rhpamcentr\" labels: application: \"USDAPPLICATION_NAME\" annotations: description: Route for Business Central's http service. haproxy.router.openshift.io/timeout: 60s spec: host: \"USDBUSINESS_CENTRAL_HOSTNAME_HTTP\" to: name: \"USDAPPLICATION_NAME-rhpamcentr\"" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/configuring-openshift-connection-timeout-proc
4.17. Cron
4.17. Cron 4.17.1. Vixie cron and Cronie Red Hat Enterprise Linux 6 includes the cronie package as a replacement for vixie-cron . The main difference between these packages is how the regular jobs (daily, weekly, monthly) are scheduled and run. Cronie uses the /etc/anacrontab file, which by default looks like the following: These regular jobs will be executed once a day in the 03:00-22:00 time interval, including a random delay. For example, cron.daily will have a 5-minute forced delay plus a random delay of 0-45 minutes. You can also run the jobs with no delay, for example between 04:00 and 05:00: Features of cronie include: Random delay for starting the job in /etc/anacrontab . Time range of regular jobs can be defined in /etc/anacrontab . Each cron table can have its own defined time zone with the CRON_TZ variable. By default, the cron daemon checks for changes in tables with inotify. For further details about cronie and cronie-anacron , see the Red Hat Enterprise Linux Deployment Guide.
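As a short sketch of the CRON_TZ feature mentioned above, a cron table can declare its own time zone before the job lines; the script path here is hypothetical:
# Run the job at 06:30 Europe/Prague time, regardless of the system time zone
CRON_TZ=Europe/Prague
30 6 * * * /usr/local/bin/nightly-report.sh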
[ "the maximal random delay added to the base delay of the jobs RANDOM_DELAY=45 the jobs will be started during the following hours only START_HOURS_RANGE=3-22 period in days delay in minutes job-identifier command 1 5 cron.daily nice run-parts /etc/cron.daily 7 25 cron.weekly nice run-parts /etc/cron.weekly @monthly 45 cron.monthly nice run-parts /etc/cron.monthly", "RANDOM_DELAY=0 # or do not use this option at all START_HOURS_RANGE=4-5 period in days delay in minutes job-identifier command 1 0 cron.daily nice run-parts /etc/cron.daily 7 0 cron.weekly nice run-parts /etc/cron.weekly @monthly 0 cron.monthly nice run-parts /etc/cron.monthly" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-networking-cron
Chapter 13. Using the Stream Control Transmission Protocol (SCTP)
Chapter 13. Using the Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can use the Stream Control Transmission Protocol (SCTP) on a bare-metal cluster. 13.1. Support for SCTP on OpenShift Container Platform As a cluster administrator, you can enable SCTP on the hosts in the cluster. On Red Hat Enterprise Linux CoreOS (RHCOS), the SCTP module is disabled by default. SCTP is a reliable message based protocol that runs on top of an IP network. When enabled, you can use SCTP as a protocol with pods, services, and network policy. A Service object must be defined with the type parameter set to either the ClusterIP or NodePort value. 13.1.1. Example configurations using SCTP protocol You can configure a pod or service to use SCTP by setting the protocol parameter to the SCTP value in the pod or service object. In the following example, a pod is configured to use SCTP: apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ... ports: - containerPort: 30100 name: sctpserver protocol: SCTP In the following example, a service is configured to use SCTP: apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ... ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP In the following example, a NetworkPolicy object is configured to apply to SCTP network traffic on port 80 from any pods with a specific label: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80 13.2. Enabling Stream Control Transmission Protocol (SCTP) As a cluster administrator, you can load and enable the blacklisted SCTP kernel module on worker nodes in your cluster. Prerequisites Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. Procedure Create a file named load-sctp-module.yaml that contains the following YAML definition: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp To create the MachineConfig object, enter the following command: USD oc create -f load-sctp-module.yaml Optional: To watch the status of the nodes while the MachineConfig Operator applies the configuration change, enter the following command. When the status of a node transitions to Ready , the configuration update is applied. USD oc get nodes 13.3. Verifying Stream Control Transmission Protocol (SCTP) is enabled You can verify that SCTP is working on a cluster by creating a pod with an application that listens for SCTP traffic, associating it with a service, and then connecting to the exposed service. Prerequisites Access to the internet from the cluster to install the nc package. Install the OpenShift CLI ( oc ). Access to the cluster as a user with the cluster-admin role. 
Procedure Create a pod that starts an SCTP listener: Create a file named sctp-server.yaml that defines a pod with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP Create the pod by entering the following command: USD oc create -f sctp-server.yaml Create a service for the SCTP listener pod. Create a file named sctp-service.yaml that defines a service with the following YAML: apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102 To create the service, enter the following command: USD oc create -f sctp-service.yaml Create a pod for the SCTP client. Create a file named sctp-client.yaml with the following YAML: apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: ["/bin/sh", "-c"] args: ["dnf install -y nc && sleep inf"] To create the Pod object, enter the following command: USD oc apply -f sctp-client.yaml Run an SCTP listener on the server. To connect to the server pod, enter the following command: USD oc rsh sctpserver To start the SCTP listener, enter the following command: USD nc -l 30102 --sctp Connect to the SCTP listener on the server. Open a new terminal window or tab in your terminal program. Obtain the IP address of the sctpservice service. Enter the following command: USD oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{"\n"}}' To connect to the client pod, enter the following command: USD oc rsh sctpclient To start the SCTP client, enter the following command. Replace <cluster_IP> with the cluster IP address of the sctpservice service. # nc <cluster_IP> 30102 --sctp
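As an additional spot check that the MachineConfig from the section on enabling SCTP took effect, you can confirm that the sctp module is loaded on a worker node. This is a sketch; the node name is a placeholder:
oc debug node/<node_name>      # start a debug pod on a worker node
chroot /host                   # inside the debug shell, switch to the host root
lsmod | grep sctp              # the sctp module should be listed once the MachineConfig is applied and the node has rebooted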
[ "apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP", "apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp", "oc create -f load-sctp-module.yaml", "oc get nodes", "apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP", "oc create -f sctp-server.yaml", "apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102", "oc create -f sctp-service.yaml", "apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi9/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]", "oc apply -f sctp-client.yaml", "oc rsh sctpserver", "nc -l 30102 --sctp", "oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'", "oc rsh sctpclient", "nc <cluster_IP> 30102 --sctp" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/networking/using-sctp
Chapter 6. Control plane architecture
Chapter 6. Control plane architecture The control plane , which is composed of control plane machines, manages the OpenShift Container Platform cluster. The control plane machines manage workloads on the compute machines, which are also known as worker machines. The cluster itself manages all upgrades to the machines by the actions of the Cluster Version Operator (CVO), the Machine Config Operator, and a set of individual Operators. 6.1. Node configuration management with machine config pools Machines that run control plane components or user workloads are divided into groups based on the types of resources they handle. These groups of machines are called machine config pools (MCP). Each MCP manages a set of nodes and its corresponding machine configs. The role of the node determines which MCP it belongs to; the MCP governs nodes based on its assigned node role label. Nodes in an MCP have the same configuration; this means nodes can be scaled up and torn down in response to increased or decreased workloads. By default, there are two MCPs created by the cluster when it is installed: master and worker . Each default MCP has a defined configuration applied by the Machine Config Operator (MCO), which is responsible for managing MCPs and facilitating MCP updates. For worker nodes, you can create additional MCPs, or custom pools, to manage nodes with custom use cases that extend outside of the default node types. Custom MCPs for the control plane nodes are not supported. Custom pools are pools that inherit their configurations from the worker pool. They use any machine config targeted for the worker pool, but add the ability to deploy changes only targeted at the custom pool. Since a custom pool inherits its configuration from the worker pool, any change to the worker pool is applied to the custom pool as well. Custom pools that do not inherit their configurations from the worker pool are not supported by the MCO. Note A node can only be included in one MCP. If a node has multiple labels that correspond to several MCPs, like worker,infra , it is managed by the infra custom pool, not the worker pool. Custom pools take priority on selecting nodes to manage based on node labels; nodes that do not belong to a custom pool are managed by the worker pool. It is recommended to have a custom pool for every node role you want to manage in your cluster. For example, if you create infra nodes to handle infra workloads, it is recommended to create a custom infra MCP to group those nodes together. If you apply an infra role label to a worker node so it has the worker,infra dual label, but do not have a custom infra MCP, the MCO considers it a worker node. If you remove the worker label from a node and apply the infra label without grouping it in a custom pool, the node is not recognized by the MCO and is unmanaged by the cluster. Important Any node labeled with the infra role that is only running infra workloads is not counted toward the total number of subscriptions. The MCP managing an infra node is mutually exclusive from how the cluster determines subscription charges; tagging a node with the appropriate infra role and using taints to prevent user workloads from being scheduled on that node are the only requirements for avoiding subscription charges for infra workloads. The MCO applies updates for pools independently; for example, if there is an update that affects all pools, nodes from each pool update in parallel with each other. 
If you add a custom pool, nodes from that pool also attempt to update concurrently with the master and worker nodes. There might be situations where the configuration on a node does not fully match what the currently-applied machine config specifies. This state is called configuration drift . The Machine Config Daemon (MCD) regularly checks the nodes for configuration drift. If the MCD detects configuration drift, the MCO marks the node degraded until an administrator corrects the node configuration. A degraded node is online and operational, but, it cannot be updated. Additional resources Understanding configuration drift detection 6.2. Machine roles in OpenShift Container Platform OpenShift Container Platform assigns hosts different roles. These roles define the function of the machine within the cluster. The cluster contains definitions for the standard master and worker role types. Note The cluster also contains the definition for the bootstrap role. Because the bootstrap machine is used only during cluster installation, its function is explained in the cluster installation documentation. 6.2.1. Control plane and node host compatibility The OpenShift Container Platform version must match between control plane host and node host. For example, in a 4.18 cluster, all control plane hosts must be 4.18 and all nodes must be 4.18. Temporary mismatches during cluster upgrades are acceptable. For example, when upgrading from the OpenShift Container Platform version to 4.18, some nodes will upgrade to 4.18 before others. Prolonged skewing of control plane hosts and node hosts might expose older compute machines to bugs and missing features. Users should resolve skewed control plane hosts and node hosts as soon as possible. The kubelet service must not be newer than kube-apiserver , and can be up to two minor versions older depending on whether your OpenShift Container Platform version is odd or even. The table below shows the appropriate version compatibility: OpenShift Container Platform version Supported kubelet skew Odd OpenShift Container Platform minor versions [1] Up to one version older Even OpenShift Container Platform minor versions [2] Up to two versions older For example, OpenShift Container Platform 4.11, 4.13. For example, OpenShift Container Platform 4.10, 4.12. 6.2.2. Cluster workers In a Kubernetes cluster, worker nodes run and manage the actual workloads requested by Kubernetes users. The worker nodes advertise their capacity and the scheduler, which is a control plane service, determines on which nodes to start pods and containers. The following important services run on each worker node: CRI-O, which is the container engine. kubelet, which is the service that accepts and fulfills requests for running and stopping container workloads. A service proxy, which manages communication for pods across workers. The crun or runC low-level container runtime, which creates and runs containers. Note For information about how to enable runC instead of the default crun, see the documentation for creating a ContainerRuntimeConfig CR. In OpenShift Container Platform, compute machine sets control the compute machines, which are assigned the worker machine role. Machines with the worker role drive compute workloads that are governed by a specific machine pool that autoscales them. Because OpenShift Container Platform has the capacity to support multiple machine types, the machines with the worker role are classed as compute machines. 
In this release, the terms worker machine and compute machine are used interchangeably because the only default type of compute machine is the worker machine. In future versions of OpenShift Container Platform, different types of compute machines, such as infrastructure machines, might be used by default. Note Compute machine sets are groupings of compute machine resources under the machine-api namespace. Compute machine sets are configurations that are designed to start new compute machines on a specific cloud provider. Conversely, machine config pools (MCPs) are part of the Machine Config Operator (MCO) namespace. An MCP is used to group machines together so the MCO can manage their configurations and facilitate their upgrades. 6.2.3. Cluster control planes In a Kubernetes cluster, the master nodes run services that are required to control the Kubernetes cluster. In OpenShift Container Platform, the control plane is comprised of control plane machines that have a master machine role. They contain more than just the Kubernetes services for managing the OpenShift Container Platform cluster. For most OpenShift Container Platform clusters, control plane machines are defined by a series of standalone machine API resources. For supported cloud provider and OpenShift Container Platform version combinations, control planes can be managed with control plane machine sets. Extra controls apply to control plane machines to prevent you from deleting all of the control plane machines and breaking your cluster. Note Exactly three control plane nodes must be used for all production deployments. However, on bare metal platforms, clusters can be scaled up to five control plane nodes. Services that fall under the Kubernetes category on the control plane include the Kubernetes API server, etcd, the Kubernetes controller manager, and the Kubernetes scheduler. Table 6.1. Kubernetes services that run on the control plane Component Description Kubernetes API server The Kubernetes API server validates and configures the data for pods, services, and replication controllers. It also provides a focal point for the shared state of the cluster. etcd etcd stores the persistent control plane state while other components watch etcd for changes to bring themselves into the specified state. Kubernetes controller manager The Kubernetes controller manager watches etcd for changes to objects such as replication, namespace, and service account controller objects, and then uses the API to enforce the specified state. Several such processes create a cluster with one active leader at a time. Kubernetes scheduler The Kubernetes scheduler watches for newly created pods without an assigned node and selects the best node to host the pod. There are also OpenShift services that run on the control plane, which include the OpenShift API server, OpenShift controller manager, OpenShift OAuth API server, and OpenShift OAuth server. Table 6.2. OpenShift services that run on the control plane Component Description OpenShift API server The OpenShift API server validates and configures the data for OpenShift resources, such as projects, routes, and templates. The OpenShift API server is managed by the OpenShift API Server Operator. OpenShift controller manager The OpenShift controller manager watches etcd for changes to OpenShift objects, such as project, route, and template controller objects, and then uses the API to enforce the specified state. The OpenShift controller manager is managed by the OpenShift Controller Manager Operator. 
OpenShift OAuth API server The OpenShift OAuth API server validates and configures the data to authenticate to OpenShift Container Platform, such as users, groups, and OAuth tokens. The OpenShift OAuth API server is managed by the Cluster Authentication Operator. OpenShift OAuth server Users request tokens from the OpenShift OAuth server to authenticate themselves to the API. The OpenShift OAuth server is managed by the Cluster Authentication Operator. Some of these services on the control plane machines run as systemd services, while others run as static pods. Systemd services are appropriate for services that you need to always come up on that particular system shortly after it starts. For control plane machines, those include sshd, which allows remote login. It also includes services such as: The CRI-O container engine (crio), which runs and manages the containers. OpenShift Container Platform 4.18 uses CRI-O instead of the Docker Container Engine. Kubelet (kubelet), which accepts requests for managing containers on the machine from control plane services. CRI-O and Kubelet must run directly on the host as systemd services because they need to be running before you can run other containers. The installer-* and revision-pruner-* control plane pods must run with root permissions because they write to the /etc/kubernetes directory, which is owned by the root user. These pods are in the following namespaces: openshift-etcd openshift-kube-apiserver openshift-kube-controller-manager openshift-kube-scheduler Additional resources Hosted control planes overview 6.3. Operators in OpenShift Container Platform Operators are among the most important components of OpenShift Container Platform. They are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and the OpenShift CLI ( oc ). They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. Operators also offer a more granular configuration experience. You configure each component by modifying the API that the Operator exposes instead of modifying a global configuration file. Because CRI-O and the Kubelet run on every node, almost every other cluster function can be managed on the control plane by using Operators. Components that are added to the control plane by using Operators include critical networking and credential services. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators Managed by the Cluster Version Operator (CVO) and installed by default to perform cluster functions. Optional add-on Operators Managed by Operator Lifecycle Manager (OLM) and can be made accessible for users to run in their applications. Also known as OLM-based Operators . 6.3.1. Cluster Operators In OpenShift Container Platform, all cluster functions are divided into a series of default cluster Operators . Cluster Operators manage a particular area of cluster functionality, such as cluster-wide application logging, management of the Kubernetes control plane, or the machine provisioning system. 
Cluster Operators are represented by a ClusterOperator object, which cluster administrators can view in the OpenShift Container Platform web console from the Administration Cluster Settings page. Each cluster Operator provides a simple API for determining cluster functionality. The Operator hides the details of managing the lifecycle of that component. Operators can manage a single component or tens of components, but the end goal is always to reduce operational burden by automating common actions. Additional resources Cluster Operators reference 6.3.2. Add-on Operators Operator Lifecycle Manager (OLM) and OperatorHub are default components in OpenShift Container Platform that help manage Kubernetes-native applications as Operators. Together they provide the system for discovering, installing, and managing the optional add-on Operators available on the cluster. Using OperatorHub in the OpenShift Container Platform web console, cluster administrators and authorized users can select Operators to install from catalogs of Operators. After installing an Operator from OperatorHub, it can be made available globally or in specific namespaces to run in user applications. Default catalog sources are available that include Red Hat Operators, certified Operators, and community Operators. Cluster administrators can also add their own custom catalog sources, which can contain a custom set of Operators. Developers can use the Operator SDK to help author custom Operators that take advantage of OLM features, as well. Their Operator can then be bundled and added to a custom catalog source, which can be added to a cluster and made available to users. Note OLM does not manage the cluster Operators that comprise the OpenShift Container Platform architecture. Additional resources For more details on running add-on Operators in OpenShift Container Platform, see the Operators guide sections on Operator Lifecycle Manager (OLM) and OperatorHub . For more details on the Operator SDK, see Developing Operators . 6.4. Overview of etcd etcd is a consistent, distributed key-value store that holds small amounts of data that can fit entirely in memory. Although etcd is a core component of many projects, it is the primary data store for Kubernetes, which is the standard system for container orchestration. 6.4.1. Benefits of using etcd By using etcd, you can benefit in several ways: Maintain consistent uptime for your cloud-native applications, and keep them working even if individual servers fail Store and replicate all cluster states for Kubernetes Distribute configuration data to provide redundancy and resiliency for the configuration of nodes 6.4.2. How etcd works To ensure a reliable approach to cluster configuration and management, etcd uses the etcd Operator. The Operator simplifies the use of etcd on a Kubernetes container platform like OpenShift Container Platform. With the etcd Operator, you can create or delete etcd members, resize clusters, perform backups, and upgrade etcd. The etcd Operator observes, analyzes, and acts: It observes the cluster state by using the Kubernetes API. It analyzes differences between the current state and the state that you want. It fixes the differences through the etcd cluster management APIs, the Kubernetes API, or both. etcd holds the cluster state, which is constantly updated. This state is continuously persisted, which leads to a high number of small changes at high frequency. As a result, it is critical to back the etcd cluster member with fast, low-latency I/O. 
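As a rough illustration of how etcd health surfaces in a running cluster, assuming cluster-admin access, you can inspect the etcd cluster Operator and the pods in the openshift-etcd namespace:
oc get clusteroperator etcd
oc get pods -n openshift-etcd
On a healthy cluster, the etcd Operator reports Available and not Degraded, and the namespace contains one etcd member pod per control plane node alongside related installer and pruner pods.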
For more information about best practices for etcd, see "Recommended etcd practices". Additional resources Recommended etcd practices Backing up etcd
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/architecture/control-plane
Chapter 2. Installing the Virtualization Packages
Chapter 2. Installing the Virtualization Packages To use virtualization, Red Hat virtualization packages must be installed on your computer. Virtualization packages can be installed when installing Red Hat Enterprise Linux or after installation using the yum command and the Subscription Manager application. The KVM hypervisor uses the default Red Hat Enterprise Linux kernel with the kvm kernel module. 2.1. Installing Virtualization Packages During a Red Hat Enterprise Linux Installation This section provides information about installing virtualization packages while installing Red Hat Enterprise Linux. Note For detailed information about installing Red Hat Enterprise Linux, see the Red Hat Enterprise Linux 7 Installation Guide . Important The Anaconda interface only offers the option to install Red Hat virtualization packages during the installation of Red Hat Enterprise Linux Server. When installing a Red Hat Enterprise Linux Workstation, the Red Hat virtualization packages can only be installed after the workstation installation is complete. See Section 2.2, "Installing Virtualization Packages on an Existing Red Hat Enterprise Linux System" Procedure 2.1. Installing virtualization packages Select software Follow the installation procedure until the Installation Summary screen. Figure 2.1. The Installation Summary screen In the Installation Summary screen, click Software Selection . The Software Selection screen opens. Select the server type and package groups You can install Red Hat Enterprise Linux 7 with only the basic virtualization packages or with packages that allow management of guests through a graphical user interface. Do one of the following: Install a minimal virtualization host Select the Virtualization Host radio button in the Base Environment pane and the Virtualization Platform check box in the Add-Ons for Selected Environment pane. This installs a basic virtualization environment which can be run with virsh or remotely over the network. Figure 2.2. Virtualization Host selected in the Software Selection screen Install a virtualization host with a graphical user interface Select the Server with GUI radio button in the Base Environment pane and the Virtualization Client , Virtualization Hypervisor , and Virtualization Tools check boxes in the Add-Ons for Selected Environment pane. This installs a virtualization environment along with graphical tools for installing and managing guest virtual machines. Figure 2.3. Server with GUI selected in the software selection screen Finalize installation Click Done and continue with the installation. Important You need a valid Red Hat Enterprise Linux subscription to receive updates for the virtualization packages. 2.1.1. Installing KVM Packages with Kickstart Files To use a Kickstart file to install Red Hat Enterprise Linux with the virtualization packages, append the following package groups in the %packages section of your Kickstart file: For more information about installing with Kickstart files, see the Red Hat Enterprise Linux 7 Installation Guide .
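For reference, a minimal %packages section that selects the virtualization package groups described in this chapter might look like the following sketch; adjust the list of groups, for example by omitting @virtualization-client on a headless host, to match your environment:
%packages
@virtualization-hypervisor
@virtualization-client
@virtualization-platform
@virtualization-tools
%end
The remainder of the Kickstart file, such as partitioning and network configuration, is unaffected by this addition.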
[ "@virtualization-hypervisor @virtualization-client @virtualization-platform @virtualization-tools" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-Installing_the_virtualization_packages
Chapter 9. Using the Node Tuning Operator
Chapter 9. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. 9.1. About the Node Tuning Operator The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 9.2. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for OpenShift Container Platform, and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is strongly discouraged, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 9.3. Default profiles set on a cluster The following are the default profiles set on a cluster. 
apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 9.4. Verifying that the TuneD profiles are applied Verify the TuneD profiles that are applied to your cluster node. USD oc get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator Example output NAME TUNED APPLIED DEGRADED AGE master-0 openshift-control-plane True False 6h33m master-1 openshift-control-plane True False 6h33m master-2 openshift-control-plane True False 6h33m worker-a openshift-node True False 6h28m worker-b openshift-node True False 6h28m NAME : Name of the Profile object. There is one Profile object per node and their names match. TUNED : Name of the desired TuneD profile to apply. APPLIED : True if the TuneD daemon applied the desired profile. ( True/False/Unknown ). DEGRADED : True if any errors were reported during application of the TuneD profile ( True/False/Unknown ). AGE : Time elapsed since the creation of Profile object. The ClusterOperator/node-tuning object also contains useful information about the Operator and its node agents' health. For example, Operator misconfiguration is reported by ClusterOperator/node-tuning status messages. To get status information about the ClusterOperator/node-tuning object, run the following command: USD oc get co/node-tuning -n openshift-cluster-node-tuning-operator Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE node-tuning 4.18.1 True False True 60m 1/5 Profiles with bootcmdline conflict If either the ClusterOperator/node-tuning or a profile object's status is DEGRADED , additional information is provided in the Operator or operand logs. 9.5. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. 
profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. 
Example: Node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: Machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/ocp-tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. 
However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 9.6. Custom tuning examples Using TuneD profiles from the default CR The following CR applies custom node-level tuning for OpenShift Container Platform nodes with label tuned.openshift.io/ingress-node-label set to any value. Example: custom tuning using the openshift-control-plane TuneD profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range="1024 65535" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress Important Custom profile writers are strongly encouraged to include the default TuneD daemon profiles shipped within the default Tuned CR. The example above uses the default openshift-control-plane profile to accomplish this. Using built-in TuneD profiles Given the successful rollout of the NTO-managed daemon set, the TuneD operands all manage the same version of the TuneD daemon. To list the built-in TuneD profiles supported by the daemon, query any TuneD pod in the following way: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\n' | sed 's|^.*/||' You can use the profile names retrieved by this in your custom tuning specification. Example: using built-in hpc-compute TuneD profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-hpc-compute namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile for HPC compute workloads include=openshift-node,hpc-compute name: openshift-node-hpc-compute recommend: - match: - label: tuned.openshift.io/openshift-node-hpc-compute priority: 20 profile: openshift-node-hpc-compute In addition to the built-in hpc-compute profile, the example above includes the openshift-node TuneD daemon profile shipped within the default Tuned CR to use OpenShift-specific tuning for compute nodes. Overriding host-level sysctls Various kernel parameters can be changed at runtime by using /run/sysctl.d/ , /etc/sysctl.d/ , and /etc/sysctl.conf host configuration files. OpenShift Container Platform adds several host configuration files which set kernel parameters at runtime; for example, net.ipv[4-6]. , fs.inotify. , and vm.max_map_count . These runtime parameters provide basic functional tuning for the system prior to the kubelet and the Operator start. The Operator does not override these settings unless the reapply_sysctl option is set to false . Setting this option to false results in TuneD not applying the settings from the host configuration files after it applies its custom profile. 
Example: overriding host-level sysctls apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-no-reapply-sysctl namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.max_map_count=>524288 name: openshift-no-reapply-sysctl recommend: - match: - label: tuned.openshift.io/openshift-no-reapply-sysctl priority: 15 profile: openshift-no-reapply-sysctl operand: tunedConfig: reapply_sysctl: false 9.7. Deferring application of tuning changes As an administrator, use the Node Tuning Operator (NTO) to update custom resources (CRs) on a running system and make tuning changes. For example, they can update or add a sysctl parameter to the [sysctl] section of the tuned object. When administrators apply a tuning change, the NTO prompts TuneD to reprocess all configurations, causing the tuned process to roll back all tuning and then reapply it. Latency-sensitive applications may not tolerate the removal and reapplication of the tuned profile, as it can briefly disrupt performance. This is particularly critical for configurations that partition CPUs and manage process or interrupt affinity using the performance profile. To avoid this issue, OpenShift Container Platform introduced new methods for applying tuning changes. Before OpenShift Container Platform 4.17, the only available method, immediate, applied changes instantly, often triggering a tuned restart. The following additional methods are supported: always : Every change is applied at the node restart. update : When a tuning change modifies a tuned profile, it is applied immediately by default and takes effect as soon as possible. When a tuning change does not cause a tuned profile to change and its values are modified in place, it is treated as always. Enable this feature by adding the annotation tuned.openshift.io/deferred . The following table summarizes the possible values for the annotation: Annotation value Description missing The change is applied immediately. always The change is applied at the node restart. update The change is applied immediately if it causes a profile change, otherwise at the node restart. The following example demonstrates how to apply a change to the kernel.shmmni sysctl parameter by using the always method: Example apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: tuned.openshift.io/deferred: "always" spec: profile: - name: performance-patch data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-performance 1 [sysctl] kernel.shmmni=8192 2 recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: worker-cnf 3 priority: 19 profile: performance-patch 1 The include directive is used to inherit the openshift-node-performance-performance profile. This is a best practice to ensure that the profile is not missing any required settings. 2 The kernel.shmmni sysctl parameter is being changed to 8192 . 3 The machineConfigLabels field is used to target the worker-cnf role. Configure a MachineConfigPool resource to ensure the profile is applied only to the correct nodes. 9.7.1. Deferring application of tuning changes: An example The following worked example describes how to defer the application of tuning changes by using the Node Tuning Operator. Prerequisites You have cluster-admin role access. 
You have applied a performance profile to your cluster. A MachineConfigPool resource, for example, worker-cnf , is configured to ensure that the profile is only applied to the designated nodes. Procedure Check what profiles are currently applied to your cluster by running the following command: USD oc -n openshift-cluster-node-tuning-operator get tuned Example output NAME AGE default 63m openshift-node-performance-performance 21m Check the machine config pools in your cluster by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-79a26af9f78ced61fa8ccd309d3c859c True False False 3 3 3 0 157m worker rendered-worker-d9352e91a1b14de7ef453fa54480ce0e True False False 2 2 2 0 157m worker-cnf rendered-worker-cnf-f398fc4fcb2b20104a51e744b8247272 True False False 1 1 1 0 92m Describe the current applied performance profile by running the following command: USD oc describe performanceprofile performance | grep Tuned Example output Tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-performance Verify the existing value of the kernel.shmmni sysctl parameter: Run the following command to display the node names: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-26-151.ec2.internal Ready worker,worker-cnf 116m v1.30.6 ip-10-0-46-60.ec2.internal Ready worker 115m v1.30.6 ip-10-0-52-141.ec2.internal Ready control-plane,master 123m v1.30.6 ip-10-0-6-97.ec2.internal Ready control-plane,master 121m v1.30.6 ip-10-0-86-145.ec2.internal Ready worker 117m v1.30.6 ip-10-0-92-228.ec2.internal Ready control-plane,master 123m v1.30.6 Run the following command to display the current value of the kernel.shmmni sysctl parameter on the node ip-10-0-26-151.ec2.internal : USD oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot host sysctl kernel.shmmni Example output kernel.shmmni = 4096 Create a profile patch, for example, perf-patch.yaml , that changes the kernel.shmmni sysctl parameter to 8192 . Defer the application of the change until the next manual restart by applying the following configuration, which uses the always method: apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: tuned.openshift.io/deferred: "always" spec: profile: - name: performance-patch data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-performance 1 [sysctl] kernel.shmmni=8192 2 recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: worker-cnf 3 priority: 19 profile: performance-patch 1 The include directive is used to inherit the openshift-node-performance-performance profile. This is a best practice to ensure that the profile is not missing any required settings. 2 The kernel.shmmni sysctl parameter is being changed to 8192 . 3 The machineConfigLabels field is used to target the worker-cnf role. 
Apply the profile patch by running the following command: USD oc apply -f perf-patch.yaml Run the following command to verify that the profile patch is waiting for the node restart: USD oc -n openshift-cluster-node-tuning-operator get profile Example output NAME TUNED APPLIED DEGRADED MESSAGE AGE ip-10-0-26-151.ec2.internal performance-patch False True The TuneD daemon profile is waiting for the node restart: performance-patch 126m ip-10-0-46-60.ec2.internal openshift-node True False TuneD profile applied. 125m ip-10-0-52-141.ec2.internal openshift-control-plane True False TuneD profile applied. 130m ip-10-0-6-97.ec2.internal openshift-control-plane True False TuneD profile applied. 130m ip-10-0-86-145.ec2.internal openshift-node True False TuneD profile applied. 126m ip-10-0-92-228.ec2.internal openshift-control-plane True False TuneD profile applied. 130m Confirm that the value of the kernel.shmmni sysctl parameter remains unchanged before a restart: Run the following command to confirm that the performance-patch change to the kernel.shmmni sysctl parameter has not yet been applied on the node ip-10-0-26-151.ec2.internal : USD oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot host sysctl kernel.shmmni Example output kernel.shmmni = 4096 Restart the node ip-10-0-26-151.ec2.internal to apply the required changes by running the following command: USD oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot host reboot& In another terminal window, run the following command to verify that the node has restarted: USD watch oc get nodes Wait for the node ip-10-0-26-151.ec2.internal to transition back to the Ready state. Run the following command to verify that the profile patch has been applied after the restart: USD oc -n openshift-cluster-node-tuning-operator get profile Example output NAME TUNED APPLIED DEGRADED MESSAGE AGE ip-10-0-20-251.ec2.internal performance-patch True False TuneD profile applied. 3h3m ip-10-0-30-148.ec2.internal openshift-control-plane True False TuneD profile applied. 3h8m ip-10-0-32-74.ec2.internal openshift-node True True TuneD profile applied. 179m ip-10-0-33-49.ec2.internal openshift-control-plane True False TuneD profile applied. 3h8m ip-10-0-84-72.ec2.internal openshift-control-plane True False TuneD profile applied. 3h8m ip-10-0-93-89.ec2.internal openshift-node True False TuneD profile applied. 179m Check that the value of the kernel.shmmni sysctl parameter has changed after the restart: Run the following command to verify that the kernel.shmmni sysctl parameter change has been applied on the node ip-10-0-32-74.ec2.internal : USD oc debug node/ip-10-0-32-74.ec2.internal -q -- chroot host sysctl kernel.shmmni Example output kernel.shmmni = 8192 Note An additional restart results in the restoration of the original value of the kernel.shmmni sysctl parameter. 9.8. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader Some of these plugins provide dynamic tuning functionality that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 9.9. 
Configuring node tuning in a hosted cluster To set node-level tuning on the nodes in your hosted cluster, you can use the Node Tuning Operator. In hosted control planes, you can configure node tuning by creating config maps that contain Tuned objects and referencing those config maps in your node pools. Procedure Create a config map that contains a valid tuned manifest, and reference the manifest in a node pool. In the following example, a Tuned manifest defines a profile that sets vm.dirty_ratio to 55 on nodes that contain the tuned-1-node-label node label with any value. Save the following ConfigMap manifest in a file named tuned-1.yaml : apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio="55" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile Note If you do not add any labels to an entry in the spec.recommend section of the Tuned spec, node-pool-based matching is assumed, so the highest priority profile in the spec.recommend section is applied to nodes in the pool. Although you can achieve more fine-grained node-label-based matching by setting a label value in the Tuned .spec.recommend.match section, node labels will not persist during an upgrade unless you set the .spec.management.upgradeType value of the node pool to InPlace . Create the ConfigMap object in the management cluster: USD oc --kubeconfig="USDMGMT_KUBECONFIG" create -f tuned-1.yaml Reference the ConfigMap object in the spec.tuningConfig field of the node pool, either by editing a node pool or creating one. In this example, assume that you have only one NodePool , named nodepool-1 , which contains 2 nodes. apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: ... name: nodepool-1 namespace: clusters ... spec: ... tuningConfig: - name: tuned-1 status: ... Note You can reference the same config map in multiple node pools. In hosted control planes, the Node Tuning Operator appends a hash of the node pool name and namespace to the name of the Tuned CRs to distinguish them. Outside of this case, do not create multiple TuneD profiles of the same name in different Tuned CRs for the same hosted cluster. Verification Now that you have created the ConfigMap object that contains a Tuned manifest and referenced it in a NodePool , the Node Tuning Operator syncs the Tuned objects into the hosted cluster. You can verify which Tuned objects are defined and which TuneD profiles are applied to each node. List the Tuned objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get tuned.tuned.openshift.io \ -n openshift-cluster-node-tuning-operator Example output NAME AGE default 7m36s rendered 7m36s tuned-1 65s List the Profile objects in the hosted cluster: USD oc --kubeconfig="USDHC_KUBECONFIG" get profile.tuned.openshift.io \ -n openshift-cluster-node-tuning-operator Example output NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s Note If no custom profiles are created, the openshift-node profile is applied by default. 
To confirm that the tuning was applied correctly, start a debug shell on a node and check the sysctl values: USD oc --kubeconfig="USDHC_KUBECONFIG" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio Example output vm.dirty_ratio = 55 9.10. Advanced node tuning for hosted clusters by setting kernel boot parameters For more advanced tuning in hosted control planes, which requires setting kernel boot parameters, you can also use the Node Tuning Operator. The following example shows how you can create a node pool with huge pages reserved. Procedure Create a ConfigMap object that contains a Tuned object manifest for creating 50 huge pages that are 2 MB in size. Save this ConfigMap manifest in a file named tuned-hugepages.yaml : apiVersion: v1 kind: ConfigMap metadata: name: tuned-hugepages namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 name: openshift-node-hugepages recommend: - priority: 20 profile: openshift-node-hugepages Note The .spec.recommend.match field is intentionally left blank. In this case, this Tuned object is applied to all nodes in the node pool where this ConfigMap object is referenced. Group nodes with the same hardware configuration into the same node pool. Otherwise, TuneD operands can calculate conflicting kernel parameters for two or more nodes that share the same node pool. Create the ConfigMap object in the management cluster: USD oc --kubeconfig="<management_cluster_kubeconfig>" create -f tuned-hugepages.yaml 1 1 Replace <management_cluster_kubeconfig> with the name of your management cluster kubeconfig file. Create a NodePool manifest YAML file, customize the upgrade type of the NodePool , and reference the ConfigMap object that you created in the spec.tuningConfig section. Create the NodePool manifest and save it in a file named hugepages-nodepool.yaml by using the hcp CLI: USD hcp create nodepool aws \ --cluster-name <hosted_cluster_name> \ 1 --name <nodepool_name> \ 2 --node-count <nodepool_replicas> \ 3 --instance-type <instance_type> \ 4 --render > hugepages-nodepool.yaml 1 Replace <hosted_cluster_name> with the name of your hosted cluster. 2 Replace <nodepool_name> with the name of your node pool. 3 Replace <nodepool_replicas> with the number of your node pool replicas, for example, 2 . 4 Replace <instance_type> with the instance type, for example, m5.2xlarge . Note The --render flag in the hcp create command does not render the secrets. To render the secrets, you must use both the --render and the --render-sensitive flags in the hcp create command. In the hugepages-nodepool.yaml file, set .spec.management.upgradeType to InPlace , and set .spec.tuningConfig to reference the tuned-hugepages ConfigMap object that you created. apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: hugepages-nodepool namespace: clusters ... spec: management: ... upgradeType: InPlace ... tuningConfig: - name: tuned-hugepages Note To avoid the unnecessary re-creation of nodes when you apply the new MachineConfig objects, set .spec.management.upgradeType to InPlace . If you use the Replace upgrade type, nodes are fully deleted and new nodes can replace them when you apply the new kernel boot parameters that the TuneD operand calculated. 
Create the NodePool in the management cluster: USD oc --kubeconfig="<management_cluster_kubeconfig>" create -f hugepages-nodepool.yaml Verification After the nodes are available, the containerized TuneD daemon calculates the required kernel boot parameters based on the applied TuneD profile. After the nodes are ready and reboot once to apply the generated MachineConfig object, you can verify that the TuneD profile is applied and that the kernel boot parameters are set. List the Tuned objects in the hosted cluster: USD oc --kubeconfig="<hosted_cluster_kubeconfig>" get tuned.tuned.openshift.io \ -n openshift-cluster-node-tuning-operator Example output NAME AGE default 123m hugepages-8dfb1fed 1m23s rendered 123m List the Profile objects in the hosted cluster: USD oc --kubeconfig="<hosted_cluster_kubeconfig>" get profile.tuned.openshift.io \ -n openshift-cluster-node-tuning-operator Example output NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 openshift-node True False 132m nodepool-1-worker-2 openshift-node True False 131m hugepages-nodepool-worker-1 openshift-node-hugepages True False 4m8s hugepages-nodepool-worker-2 openshift-node-hugepages True False 3m57s Both of the worker nodes in the new NodePool have the openshift-node-hugepages profile applied. To confirm that the tuning was applied correctly, start a debug shell on a node and check /proc/cmdline . USD oc --kubeconfig="<hosted_cluster_kubeconfig>" \ debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline Example output BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-... hugepagesz=2M hugepages=50 Additional resources Hosted control planes overview
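Beyond checking /proc/cmdline , you can optionally confirm that the kernel allocated the requested huge pages; the following is a sketch that reuses the node name from the example above:
oc --kubeconfig="<hosted_cluster_kubeconfig>" debug node/hugepages-nodepool-worker-1 -- chroot /host grep HugePages /proc/meminfo
Expect HugePages_Total to match the hugepages= value from the kernel command line, 50 in this example.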
[ "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "oc get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE master-0 openshift-control-plane True False 6h33m master-1 openshift-control-plane True False 6h33m master-2 openshift-control-plane True False 6h33m worker-a openshift-node True False 6h28m worker-b openshift-node True False 6h28m", "oc get co/node-tuning -n openshift-cluster-node-tuning-operator", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE node-tuning 4.18.1 True False True 60m 1/5 Profiles with bootcmdline conflict", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: ingress namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=A custom OpenShift ingress profile include=openshift-control-plane [sysctl] net.ipv4.ip_local_port_range=\"1024 65535\" net.ipv4.tcp_tw_reuse=1 name: openshift-ingress recommend: - match: - label: tuned.openshift.io/ingress-node-label priority: 10 profile: openshift-ingress", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/ -name tuned.conf -printf '%h\\n' | sed 's|^.*/||'", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-hpc-compute namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile for HPC compute workloads include=openshift-node,hpc-compute name: openshift-node-hpc-compute recommend: - match: - label: tuned.openshift.io/openshift-node-hpc-compute priority: 20 profile: openshift-node-hpc-compute", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-no-reapply-sysctl namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.max_map_count=>524288 name: openshift-no-reapply-sysctl recommend: - match: - label: tuned.openshift.io/openshift-no-reapply-sysctl priority: 15 profile: openshift-no-reapply-sysctl operand: tunedConfig: reapply_sysctl: false", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: tuned.openshift.io/deferred: \"always\" spec: profile: - name: performance-patch data: | [main] summary=Configuration changes profile inherited from performance created tuned include=openshift-node-performance-performance 1 [sysctl] kernel.shmmni=8192 2 recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: worker-cnf 3 priority: 19 profile: performance-patch", "oc -n openshift-cluster-node-tuning-operator get tuned", "NAME AGE default 63m openshift-node-performance-performance 21m", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-79a26af9f78ced61fa8ccd309d3c859c True False False 3 3 3 0 157m worker rendered-worker-d9352e91a1b14de7ef453fa54480ce0e True False False 2 2 2 0 157m worker-cnf rendered-worker-cnf-f398fc4fcb2b20104a51e744b8247272 True False False 1 1 1 0 92m", "oc describe performanceprofile performance | grep Tuned", "Tuned: openshift-cluster-node-tuning-operator/openshift-node-performance-performance", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-26-151.ec2.internal Ready worker,worker-cnf 116m v1.30.6 ip-10-0-46-60.ec2.internal Ready worker 115m v1.30.6 ip-10-0-52-141.ec2.internal Ready control-plane,master 123m v1.30.6 ip-10-0-6-97.ec2.internal Ready control-plane,master 121m v1.30.6 ip-10-0-86-145.ec2.internal Ready worker 117m v1.30.6 ip-10-0-92-228.ec2.internal Ready control-plane,master 123m v1.30.6", "oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot host sysctl kernel.shmmni", "kernel.shmmni = 4096", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: performance-patch namespace: openshift-cluster-node-tuning-operator annotations: tuned.openshift.io/deferred: \"always\" spec: profile: - name: performance-patch data: | [main] summary=Configuration changes profile inherited from performance created tuned 
include=openshift-node-performance-performance 1 [sysctl] kernel.shmmni=8192 2 recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: worker-cnf 3 priority: 19 profile: performance-patch", "oc apply -f perf-patch.yaml", "oc -n openshift-cluster-node-tuning-operator get profile", "NAME TUNED APPLIED DEGRADED MESSAGE AGE ip-10-0-26-151.ec2.internal performance-patch False True The TuneD daemon profile is waiting for the next node restart: performance-patch 126m ip-10-0-46-60.ec2.internal openshift-node True False TuneD profile applied. 125m ip-10-0-52-141.ec2.internal openshift-control-plane True False TuneD profile applied. 130m ip-10-0-6-97.ec2.internal openshift-control-plane True False TuneD profile applied. 130m ip-10-0-86-145.ec2.internal openshift-node True False TuneD profile applied. 126m ip-10-0-92-228.ec2.internal openshift-control-plane True False TuneD profile applied. 130m", "oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot host sysctl kernel.shmmni", "kernel.shmmni = 4096", "oc debug node/ip-10-0-26-151.ec2.internal -q -- chroot host reboot&", "watch oc get nodes", "oc -n openshift-cluster-node-tuning-operator get profile", "NAME TUNED APPLIED DEGRADED MESSAGE AGE ip-10-0-20-251.ec2.internal performance-patch True False TuneD profile applied. 3h3m ip-10-0-30-148.ec2.internal openshift-control-plane True False TuneD profile applied. 3h8m ip-10-0-32-74.ec2.internal openshift-node True True TuneD profile applied. 179m ip-10-0-33-49.ec2.internal openshift-control-plane True False TuneD profile applied. 3h8m ip-10-0-84-72.ec2.internal openshift-control-plane True False TuneD profile applied. 3h8m ip-10-0-93-89.ec2.internal openshift-node True False TuneD profile applied. 179m", "oc debug node/ip-10-0-32-74.ec2.internal -q -- chroot host sysctl kernel.shmmni", "kernel.shmmni = 8192", "apiVersion: v1 kind: ConfigMap metadata: name: tuned-1 namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: tuned-1 namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift profile include=openshift-node [sysctl] vm.dirty_ratio=\"55\" name: tuned-1-profile recommend: - priority: 20 profile: tuned-1-profile", "oc --kubeconfig=\"USDMGMT_KUBECONFIG\" create -f tuned-1.yaml", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: nodepool-1 namespace: clusters spec: tuningConfig: - name: tuned-1 status:", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME AGE default 7m36s rendered 7m36s tuned-1 65s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 tuned-1-profile True False 7m43s nodepool-1-worker-2 tuned-1-profile True False 7m14s", "oc --kubeconfig=\"USDHC_KUBECONFIG\" debug node/nodepool-1-worker-1 -- chroot /host sysctl vm.dirty_ratio", "vm.dirty_ratio = 55", "apiVersion: v1 kind: ConfigMap metadata: name: tuned-hugepages namespace: clusters data: tuning: | apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 name: openshift-node-hugepages recommend: - priority: 20 profile: openshift-node-hugepages", "oc 
--kubeconfig=\"<management_cluster_kubeconfig>\" create -f tuned-hugepages.yaml 1", "hcp create nodepool aws --cluster-name <hosted_cluster_name> \\ 1 --name <nodepool_name> \\ 2 --node-count <nodepool_replicas> \\ 3 --instance-type <instance_type> \\ 4 --render > hugepages-nodepool.yaml", "apiVersion: hypershift.openshift.io/v1alpha1 kind: NodePool metadata: name: hugepages-nodepool namespace: clusters spec: management: upgradeType: InPlace tuningConfig: - name: tuned-hugepages", "oc --kubeconfig=\"<management_cluster_kubeconfig>\" create -f hugepages-nodepool.yaml", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" get tuned.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME AGE default 123m hugepages-8dfb1fed 1m23s rendered 123m", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" get profile.tuned.openshift.io -n openshift-cluster-node-tuning-operator", "NAME TUNED APPLIED DEGRADED AGE nodepool-1-worker-1 openshift-node True False 132m nodepool-1-worker-2 openshift-node True False 131m hugepages-nodepool-worker-1 openshift-node-hugepages True False 4m8s hugepages-nodepool-worker-2 openshift-node-hugepages True False 3m57s", "oc --kubeconfig=\"<hosted_cluster_kubeconfig>\" debug node/nodepool-1-worker-1 -- chroot /host cat /proc/cmdline", "BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-... hugepagesz=2M hugepages=50" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/using-node-tuning-operator
Chapter 6. Postinstallation node tasks
Chapter 6. Postinstallation node tasks After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements through certain node tasks. 6.1. Adding RHEL compute machines to an OpenShift Container Platform cluster Understand and work with RHEL compute nodes. 6.1.1. About adding RHEL compute nodes to a cluster In OpenShift Container Platform 4.16, you have the option of using Red Hat Enterprise Linux (RHEL) machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the x86_64 architecture. You must use Red Hat Enterprise Linux CoreOS (RHCOS) machines for the control plane machines in your cluster. If you choose to use RHEL compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. For installer-provisioned infrastructure clusters, you must manually add RHEL compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. Important Because removing OpenShift Container Platform from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any RHEL machines that you add to the cluster. Swap memory is disabled on all RHEL machines that you add to your OpenShift Container Platform cluster. You cannot enable swap memory on these machines. 6.1.2. System requirements for RHEL compute nodes The Red Hat Enterprise Linux (RHEL) compute machine hosts in your OpenShift Container Platform environment must meet the following minimum hardware specifications and system-level requirements: You must have an active OpenShift Container Platform subscription on your Red Hat account. If you do not, contact your sales representative for more information. Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. Each system must meet the following hardware requirements: Physical or virtual system, or an instance running on a public or private IaaS. Base operating system: Use RHEL 8.8 or a later version with the minimal installation option. Important Adding RHEL 7 compute machines to an OpenShift Container Platform cluster is not supported. If you have RHEL 7 compute machines that were previously supported in a past OpenShift Container Platform version, you cannot upgrade them to RHEL 8. You must deploy new RHEL 8 hosts, and the old RHEL 7 hosts should be removed. See the "Deleting nodes" section for more information. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. If you deployed OpenShift Container Platform in FIPS mode, you must enable FIPS on the RHEL machine before you boot it. See Installing a RHEL 8 system with FIPS mode enabled in the RHEL 8 documentation. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. 
For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. NetworkManager 1.0 or later. 1 vCPU. Minimum 8 GB RAM. Minimum 15 GB hard disk space for the file system containing /var/ . Minimum 1 GB hard disk space for the file system containing /usr/local/bin/ . Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its storage guidelines and the disk.enableUUID=TRUE attribute must be set. Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. For clusters installed on Microsoft Azure: Ensure the system includes the hardware requirement of a Standard_D8s_v3 virtual machine. Enable Accelerated Networking. Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide Microsoft Azure VMs with a more direct path to the switch. Additional resources Deleting nodes Accelerated Networking for Microsoft Azure VMs 6.1.2.1. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 6.1.3. Preparing the machine to run the playbook Before you can add compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an OpenShift Container Platform 4.16 cluster, you must prepare a RHEL 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. Prerequisites Install the OpenShift CLI ( oc ) on the machine that you run the playbook on. Log in as a user with cluster-admin permission. Procedure Ensure that the kubeconfig file for the cluster and the installation program that you used to install the cluster are on the RHEL 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster. Configure the machine to access all of the RHEL hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. Important If you use SSH key-based authentication, you must manage the key with an SSH agent. 
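If you use key-based SSH authentication, one way to satisfy this requirement is to start an agent and load the key in the shell that will run the playbook. The following is a minimal sketch; the key path is a placeholder for your own key:
# Start an SSH agent in the current shell and load the private key that
# grants access to the RHEL hosts (the path is an example).
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa
# Confirm that the key is loaded.
ssh-add -l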
If you have not already done so, register the machine with RHSM and attach a pool with an OpenShift subscription to it: Register the machine with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.16: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.16-for-rhel-8-x86_64-rpms" Install the required packages, including openshift-ansible : # yum install openshift-ansible openshift-clients jq The openshift-ansible package provides installation program utilities and pulls in other packages that you require to add a RHEL compute node to your cluster, such as Ansible, playbooks, and related configuration files. The openshift-clients provides the oc CLI, and the jq package improves the display of JSON output on your command line. 6.1.4. Preparing a RHEL compute node Before you add a Red Hat Enterprise Linux (RHEL) machine to your OpenShift Container Platform cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active OpenShift Container Platform subscription, and enable the required repositories. On each host, register with RHSM: # subscription-manager register --username=<user_name> --password=<password> Pull the latest subscription data from RHSM: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach it: # subscription-manager attach --pool=<pool_id> Disable all yum repositories: Disable all the enabled RHSM repositories: # subscription-manager repos --disable="*" List the remaining yum repositories and note their names under repo id , if any: # yum repolist Use yum-config-manager to disable the remaining yum repositories: # yum-config-manager --disable <repo_id> Alternatively, disable all repositories: # yum-config-manager --disable \* Note that this might take a few minutes if you have a large number of available repositories Enable only the repositories required by OpenShift Container Platform 4.16: # subscription-manager repos \ --enable="rhel-8-for-x86_64-baseos-rpms" \ --enable="rhel-8-for-x86_64-appstream-rpms" \ --enable="rhocp-4.16-for-rhel-8-x86_64-rpms" \ --enable="fast-datapath-for-rhel-8-x86_64-rpms" Stop and disable firewalld on the host: # systemctl disable --now firewalld.service Note You must not enable firewalld later. If you do, you cannot access OpenShift Container Platform logs on the worker. 6.1.5. Adding a RHEL compute machine to your cluster You can add compute machines that use Red Hat Enterprise Linux as the operating system to an OpenShift Container Platform 4.16 cluster. Prerequisites You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. You prepared the RHEL hosts for installation. 
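For reference, the inventory file that you create in the following procedure might resemble the minimal sketch below; the user name, kubeconfig path, and hostnames are placeholders that you replace with values for your environment:
[all:vars]
ansible_user=root
#ansible_become=True
openshift_kubeconfig_path="~/.kube/config"

[new_workers]
mycluster-rhel8-0.example.com
mycluster-rhel8-1.example.com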
Procedure Perform the following steps on the machine that you prepared to run the playbook: Create an Ansible inventory file that is named /<path>/inventory/hosts that defines your compute machine hosts and required variables: 1 Specify the user name that runs the Ansible tasks on the remote compute machines. 2 If you do not specify root for the ansible_user , you must set ansible_become to True and assign the user sudo permissions. 3 Specify the path and file name of the kubeconfig file for your cluster. 4 List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. Navigate to the Ansible playbook directory: USD cd /usr/share/ansible/openshift-ansible Run the playbook: USD ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1 1 For <path> , specify the path to the Ansible inventory file that you created. 6.1.6. Required parameters for the Ansible hosts file You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. Parameter Description Values ansible_user The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. A user name on the system. The default value is root . ansible_become If the values of ansible_user is not root, you must set ansible_become to True , and the user that you specify as the ansible_user must be configured for passwordless sudo access. True . If the value is not True , do not specify and define this parameter. openshift_kubeconfig_path Specifies a path and file name to a local directory that contains the kubeconfig file for your cluster. The path and name of the configuration file. 6.1.7. Optional: Removing RHCOS compute machines from a cluster After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to free up resources. Prerequisites You have added RHEL compute machines to your cluster. Procedure View the list of machines and record the node names of the RHCOS compute machines: USD oc get nodes -o wide For each RHCOS compute machine, delete the node: Mark the node as unschedulable by running the oc adm cordon command: USD oc adm cordon <node_name> 1 1 Specify the node name of one of the RHCOS compute machines. Drain all the pods from the node: USD oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1 1 Specify the node name of the RHCOS compute machine that you isolated. Delete the node: USD oc delete nodes <node_name> 1 1 Specify the node name of the RHCOS compute machine that you drained. Review the list of compute machines to ensure that only the RHEL nodes remain: USD oc get nodes -o wide Remove the RHCOS machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the RHCOS compute machines. 6.2. Adding RHCOS compute machines to an OpenShift Container Platform cluster You can add more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines to your OpenShift Container Platform cluster on bare metal. Before you add more compute machines to a cluster that you installed on bare metal infrastructure, you must create RHCOS machines for it to use. 
You can either use an ISO image or network PXE booting to create the machines. 6.2.1. Prerequisites You installed a cluster on bare metal. You have installation media and Red Hat Enterprise Linux CoreOS (RHCOS) images that you used to create your cluster. If you do not have these files, you must obtain them by following the instructions in the installation procedure . 6.2.2. Creating RHCOS machines using an ISO image You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using an ISO image to create the machines. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. You must have the OpenShift CLI ( oc ) installed. Procedure Extract the Ignition config file from the cluster by running the following command: USD oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign Upload the worker.ign Ignition config file you exported from your cluster to your HTTP server. Note the URLs of these files. You can validate that the ignition files are available on the URLs. The following example gets the Ignition config files for the compute node: USD curl -k http://<HTTP_server>/worker.ign You can access the ISO image for booting your new machine by running the following command: RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location') Use the ISO file to install RHCOS on more compute machines. Use the same method that you used when you created machines before you installed the cluster: Burn the ISO image to a disk and boot it directly. Use ISO redirection with a LOM interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note You can interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you must use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device.
The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Ensure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. Continue to create more compute machines for your cluster. 6.2.3. Creating RHCOS machines by PXE or iPXE booting You can create more Red Hat Enterprise Linux CoreOS (RHCOS) compute machines for your bare metal cluster by using PXE or iPXE booting. Prerequisites Obtain the URL of the Ignition config file for the compute machines for your cluster. You uploaded this file to your HTTP server during installation. Obtain the URLs of the RHCOS ISO image, compressed metal BIOS, kernel , and initramfs files that you uploaded to your HTTP server during cluster installation. You have access to the PXE booting infrastructure that you used to create the machines for your OpenShift Container Platform cluster during installation. The machines must boot from their local disks after RHCOS is installed on them. If you use UEFI, you have access to the grub.conf file that you modified during OpenShift Container Platform installation. Procedure Confirm that your PXE or iPXE installation for the RHCOS images is correct. For PXE: 1 Specify the location of the live kernel file that you uploaded to your HTTP server. 2 Specify locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the live initramfs file, the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file, and the coreos.live.rootfs_url parameter value is the location of the live rootfs file. The coreos.inst.ignition_url and coreos.live.rootfs_url parameters only support HTTP and HTTPS. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? . For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console To configure a different console, add one or more console= arguments to the kernel line. 
For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and GRUB as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the worker Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Use the PXE or iPXE infrastructure to create the required compute machines for your cluster. 6.2.4. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m v1.29.4 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
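If you prefer not to approve client and server CSRs one at a time while the new machines join, a simple loop can periodically approve whatever is pending. This is only a rough sketch (the interval and duration are arbitrary), and it approves every pending CSR without additional validation, so use it only where that is acceptable:
# Approve all pending CSRs once a minute for 15 minutes while the new
# nodes join the cluster (interval and duration are arbitrary examples).
for i in $(seq 1 15); do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 60
done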
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 6.2.5. Adding a new RHCOS worker node with a custom /var partition in AWS OpenShift Container Platform supports partitioning devices during installation by using machine configs that are processed during the bootstrap. However, if you use /var partitioning, the device name must be determined at installation and cannot be changed. You cannot add different instance types as nodes if they have a different device naming schema. For example, if you configured the /var partition with the default AWS device name for m4.large instances, dev/xvdb , you cannot directly add an AWS m5.large instance, as m5.large instances use a /dev/nvme1n1 device by default. The device might fail to partition due to the different naming schema. The procedure in this section shows how to add a new Red Hat Enterprise Linux CoreOS (RHCOS) compute node with an instance that uses a different device name from what was configured at installation. 
You create a custom user data secret and configure a new compute machine set. These steps are specific to an AWS cluster. The principles apply to other cloud deployments also. However, the device naming schema is different for other deployments and should be determined on a per-case basis. Procedure On a command line, change to the openshift-machine-api namespace: USD oc project openshift-machine-api Create a new secret from the worker-user-data secret: Export the userData section of the secret to a text file: USD oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt Edit the text file to add the storage , filesystems , and systemd stanzas for the partitions you want to use for the new node. You can specify any Ignition configuration parameters as needed. Note Do not change the values in the ignition stanza. { "ignition": { "config": { "merge": [ { "source": "https:...." } ] }, "security": { "tls": { "certificateAuthorities": [ { "source": "data:text/plain;charset=utf-8;base64,.....==" } ] } }, "version": "3.2.0" }, "storage": { "disks": [ { "device": "/dev/nvme1n1", 1 "partitions": [ { "label": "var", "sizeMiB": 50000, 2 "startMiB": 0 3 } ] } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/var", 4 "format": "xfs", 5 "path": "/var" 6 } ] }, "systemd": { "units": [ 7 { "contents": "[Unit]\nBefore=local-fs.target\n[Mount]\nWhere=/var\nWhat=/dev/disk/by-partlabel/var\nOptions=defaults,pquota\n[Install]\nWantedBy=local-fs.target\n", "enabled": true, "name": "var.mount" } ] } } 1 Specifies an absolute path to the AWS block device. 2 Specifies the size of the data partition in Mebibytes. 3 Specifies the start of the partition in Mebibytes. When adding a data partition to the boot disk, a minimum value of 25000 MB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 4 Specifies an absolute path to the /var partition. 5 Specifies the filesystem format. 6 Specifies the mount-point of the filesystem while Ignition is running relative to where the root filesystem will be mounted. This is not necessarily the same as where it should be mounted in the real root, but it is encouraged to make it the same. 7 Defines a systemd mount unit that mounts the /dev/disk/by-partlabel/var device to the /var partition. Extract the disableTemplating section from the work-user-data secret to a text file: USD oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt Create the new user data secret file from the two text files. This user data secret passes the additional node partition information in the userData.txt file to the newly created node. USD oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt Create a new compute machine set for the new node: Create a new compute machine set YAML file, similar to the following, which is configured for AWS. Add the required partitions and the newly-created user data secret: Tip Use an existing compute machine set as a template and change the parameters as needed for the new node. 
apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4 1 Specifies a name for the new node. 2 Specifies an absolute path to the AWS block device, here an encrypted EBS volume. 3 Optional. Specifies an additional EBS volume. 4 Specifies the user data secret file. Create the compute machine set: USD oc create -f <file-name>.yaml The machines might take a few moments to become available. Verify that the new partition and nodes are created: Verify that the compute machine set is created: USD oc get machineset Example output NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1 1 This is the new compute machine set. Verify that the new node is created: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.29.4 ip-10-0-146-113.ec2.internal Ready master 127m v1.29.4 ip-10-0-153-35.ec2.internal Ready worker 118m v1.29.4 ip-10-0-176-58.ec2.internal Ready master 126m v1.29.4 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.29.4 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.29.4 ip-10-0-245-59.ec2.internal Ready worker 116m v1.29.4 1 This is the new node. Verify that the custom /var partition is created on the new node: USD oc debug node/<node-name> -- chroot /host lsblk For example: USD oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk Example output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1 1 The nvme1n1 device is mounted to the /var partition. Additional resources For more information on how OpenShift Container Platform uses disk partitioning, see Disk partitioning . 6.3. Deploying machine health checks Understand and deploy machine health checks. Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational.
Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' 6.3.1. About machine health checks Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. To monitor machine health, create a resource to define the configuration for a controller. Set a condition to check, such as staying in the NotReady status for five minutes or displaying a permanent condition in the node-problem-detector, and a label for the set of machines to monitor. The controller that observes a MachineHealthCheck resource checks for the defined condition. If a machine fails the health check, the machine is automatically deleted and one is created to take its place. When a machine is deleted, you see a machine deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time. If there are more unhealthy machines than the maxUnhealthy threshold allows for in the targeted pool of machines, remediation stops and therefore enables manual intervention. Note Consider the timeouts carefully, accounting for workloads and requirements. Long timeouts can result in long periods of downtime for the workload on the unhealthy machine. Too short timeouts can result in a remediation loop. For example, the timeout for checking the NotReady status must be long enough to allow the machine to complete the startup process. To stop the check, remove the resource. 6.3.1.1. Limitations when deploying machine health checks There are limitations to consider before deploying a machine health check: Only machines owned by a machine set are remediated by a machine health check. If the node for a machine is removed from the cluster, a machine health check considers the machine to be unhealthy and remediates it immediately. If the corresponding node for a machine does not join the cluster after the nodeStartupTimeout , the machine is remediated. A machine is remediated immediately if the Machine resource phase is Failed . Additional resources About control plane machine sets 6.3.2. Sample MachineHealthCheck resource The MachineHealthCheck resource for all cloud-based installation types, and other than bare metal, resembles the following YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: "Ready" timeout: "300s" 5 status: "False" - type: "Ready" timeout: "300s" 6 status: "Unknown" maxUnhealthy: "40%" 7 nodeStartupTimeout: "10m" 8 1 Specify the name of the machine health check to deploy. 2 3 Specify a label for the machine pool that you want to check. 4 Specify the machine set to track in <cluster_name>-<label>-<zone> format. For example, prod-node-us-east-1a . 5 6 Specify the timeout duration for a node condition. 
If a condition is met for the duration of the timeout, the machine will be remediated. Long timeouts can result in long periods of downtime for a workload on an unhealthy machine. 7 Specify the amount of machines allowed to be concurrently remediated in the targeted pool. This can be set as a percentage or an integer. If the number of unhealthy machines exceeds the limit set by maxUnhealthy , remediation is not performed. 8 Specify the timeout duration that a machine health check must wait for a node to join the cluster before a machine is determined to be unhealthy. Note The matchLabels are examples only; you must map your machine groups based on your specific needs. 6.3.2.1. Short-circuiting machine health check remediation Short-circuiting ensures that machine health checks remediate machines only when the cluster is healthy. Short-circuiting is configured through the maxUnhealthy field in the MachineHealthCheck resource. If the user defines a value for the maxUnhealthy field, before remediating any machines, the MachineHealthCheck compares the value of maxUnhealthy with the number of machines within its target pool that it has determined to be unhealthy. Remediation is not performed if the number of unhealthy machines exceeds the maxUnhealthy limit. Important If maxUnhealthy is not set, the value defaults to 100% and the machines are remediated regardless of the state of the cluster. The appropriate maxUnhealthy value depends on the scale of the cluster you deploy and how many machines the MachineHealthCheck covers. For example, you can use the maxUnhealthy value to cover multiple compute machine sets across multiple availability zones so that if you lose an entire zone, your maxUnhealthy setting prevents further remediation within the cluster. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. Important If you configure a MachineHealthCheck resource for the control plane, set the value of maxUnhealthy to 1 . This configuration ensures that the machine health check takes no action when multiple control plane machines appear to be unhealthy. Multiple unhealthy control plane machines can indicate that the etcd cluster is degraded or that a scaling operation to replace a failed machine is in progress. If the etcd cluster is degraded, manual intervention might be required. If a scaling operation is in progress, the machine health check should allow it to finish. The maxUnhealthy field can be set as either an integer or percentage. There are different remediation implementations depending on the maxUnhealthy value. 6.3.2.1.1. Setting maxUnhealthy by using an absolute value If maxUnhealthy is set to 2 : Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy These values are independent of how many machines are being checked by the machine health check. 6.3.2.1.2. Setting maxUnhealthy by using percentages If maxUnhealthy is set to 40% and there are 25 machines being checked: Remediation will be performed if 10 or fewer nodes are unhealthy Remediation will not be performed if 11 or more nodes are unhealthy If maxUnhealthy is set to 40% and there are 6 machines being checked: Remediation will be performed if 2 or fewer nodes are unhealthy Remediation will not be performed if 3 or more nodes are unhealthy Note The allowed number of machines is rounded down when the percentage of maxUnhealthy machines that are checked is not a whole number. 
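As a further worked example, if maxUnhealthy is set to 40% and the machine health check targets 12 machines, 40% of 12 is 4.8, which is rounded down to 4: remediation continues while at most 4 machines are unhealthy and stops as soon as a fifth machine becomes unhealthy.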
6.3.3. Creating a machine health check resource You can create a MachineHealthCheck resource for machine sets in your cluster. Note You can only apply a machine health check to machines that are managed by compute machine sets or control plane machine sets. Prerequisites Install the oc command line interface. Procedure Create a healthcheck.yml file that contains the definition of your machine health check. Apply the healthcheck.yml file to your cluster: USD oc apply -f healthcheck.yml 6.3.4. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machines.machine.openshift.io -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api Or: USD oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines.machine.openshift.io 6.3.5. Understanding the difference between compute machine sets and the machine config pool MachineSet objects describe OpenShift Container Platform nodes with respect to the cloud or machine provider. The MachineConfigPool object allows MachineConfigController components to define and provide the status of machines in the context of upgrades. The MachineConfigPool object allows users to configure how upgrades are rolled out to the OpenShift Container Platform nodes in the machine config pool. The NodeSelector object can be replaced with a reference to the MachineSet object. 6.4. Recommended node host practices The OpenShift Container Platform node configuration file contains important options. 
For example, two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 6.4.1. Creating a KubeletConfig CRD to edit kubelet parameters The kubelet configuration is currently serialized as an Ignition configuration, so it can be directly edited. However, there is also a new kubelet-config-controller added to the Machine Config Controller (MCC). This lets you use a KubeletConfig custom resource (CR) to edit the kubelet parameters. Note As the fields in the kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the kubelet validates those values directly. Invalid values in the kubeletConfig object might cause cluster nodes to become unavailable. For valid values, see the Kubernetes documentation . Consider the following guidance: Edit an existing KubeletConfig CR to modify existing settings or add new settings, instead of creating a CR for each change. It is recommended that you create a CR only to modify a different machine config pool, or for changes that are intended to be temporary, so that you can revert the changes. Create one KubeletConfig CR for each machine config pool with all the config changes you want for that pool. As needed, create multiple KubeletConfig CRs with a limit of 10 per cluster. For the first KubeletConfig CR, the Machine Config Operator (MCO) creates a machine config appended with kubelet . With each subsequent CR, the controller creates another kubelet machine config with a numeric suffix. For example, if you have a kubelet machine config with a -2 suffix, the kubelet machine config is appended with -3 . Note If you are applying a kubelet or container runtime config to a custom machine config pool, the custom role in the machineConfigSelector must match the name of the custom machine config pool. 
For example, because the following custom machine config pool is named infra , the custom role must also be infra : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]} # ... If you want to delete the machine configs, delete them in reverse order to avoid exceeding the limit. For example, you delete the kubelet-3 machine config before deleting the kubelet-2 machine config. Note If you have a machine config with a kubelet-9 suffix, and you create another KubeletConfig CR, a new machine config is not created, even if there are fewer than 10 kubelet machine configs. Example KubeletConfig CR USD oc get kubeletconfig NAME AGE set-kubelet-config 15m Example showing a KubeletConfig machine config USD oc get mc | grep kubelet ... 99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m ... The following procedure is an example to show how to configure the maximum number of pods per node, the maximum PIDs per node, and the maximum container log size on the worker nodes. Prerequisites Obtain the label associated with the static MachineConfigPool CR for the type of node you want to configure. Perform one of the following steps: View the machine config pool: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1 1 If a label has been added, it appears under labels . If the label is not present, add a key/value pair: USD oc label machineconfigpool worker custom-kubelet=set-kubelet-config Procedure View the available machine configuration objects that you can select: USD oc get machineconfig By default, the two kubelet-related configs are 01-master-kubelet and 01-worker-kubelet . Check the current value for the maximum pods per node: USD oc describe node <node_name> For example: USD oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94 Look for value: pods: <value> in the Allocatable stanza: Example output Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250 Configure the worker nodes as needed: Create a YAML file similar to the following that contains the kubelet configuration: Important Kubelet configurations that target a specific machine config pool also affect any dependent pools. For example, creating a kubelet configuration for the pool containing worker nodes will also apply to any subset pools, including the pool containing infrastructure nodes. To avoid this, you must create a new machine config pool with a selection expression that only includes worker nodes, and have your kubelet configuration target this new pool. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500 1 Enter the label from the machine config pool. 2 Add the kubelet configuration. For example: Use podPidsLimit to set the maximum number of PIDs in any pod. Use containerLogMaxSize to set the maximum size of the container log file before it is rotated. Use maxPods to set the maximum pods per node.
Note The rate at which the kubelet talks to the API server depends on queries per second (QPS) and burst values. The default values, 50 for kubeAPIQPS and 100 for kubeAPIBurst , are sufficient if there are limited pods running on each node. It is recommended to update the kubelet QPS and burst rates if there are enough CPU and memory resources on the node. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS> Update the machine config pool for workers with the label: USD oc label machineconfigpool worker custom-kubelet=set-kubelet-config Create the KubeletConfig object: USD oc create -f change-maxPods-cr.yaml Verification Verify that the KubeletConfig object is created: USD oc get kubeletconfig Example output NAME AGE set-kubelet-config 15m Depending on the number of worker nodes in the cluster, wait for the worker nodes to be rebooted one by one. For a cluster with 3 worker nodes, this could take about 10 to 15 minutes. Verify that the changes are applied to the node: Check on a worker node that the maxPods value changed: USD oc describe node <node_name> Locate the Allocatable stanza: ... Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1 ... 1 In this example, the pods parameter should report the value you set in the KubeletConfig object. Verify the change in the KubeletConfig object: USD oc get kubeletconfigs set-kubelet-config -o yaml This should show a status of True and type:Success , as shown in the following example: spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: "2021-06-30T17:04:07Z" message: Success status: "True" type: Success 6.4.2. Modifying the number of unavailable worker nodes By default, only one machine is allowed to be unavailable when applying the kubelet-related configuration to the available worker nodes. For a large cluster, it can take a long time for the configuration change to be reflected. At any time, you can adjust the number of machines that are updating to speed up the process. Procedure Edit the worker machine config pool: USD oc edit machineconfigpool worker Add the maxUnavailable field and set the value: spec: maxUnavailable: <node_count> Important When setting the value, consider the number of worker nodes that can be unavailable without affecting the applications running on the cluster. 6.4.3. Control plane node sizing The control plane node resource requirements depend on the number and type of nodes and objects in the cluster. The following control plane node size recommendations are based on the results of a control plane density focused testing, or Cluster-density . 
This test creates the following objects across a given number of namespaces: 1 image stream 1 build 5 deployments, with 2 pod replicas in a sleep state, mounting 4 secrets, 4 config maps, and 1 downward API volume each 5 services, each one pointing to the TCP/8080 and TCP/8443 ports of one of the deployments 1 route pointing to the first of the services 10 secrets containing 2048 random string characters 10 config maps containing 2048 random string characters Number of worker nodes Cluster-density (namespaces) CPU cores Memory (GB) 24 500 4 16 120 1000 8 32 252 4000 16, but 24 if using the OVN-Kubernetes network plug-in 64, but 128 if using the OVN-Kubernetes network plug-in 501, but untested with the OVN-Kubernetes network plug-in 4000 16 96 The data from the table above is based on an OpenShift Container Platform running on top of AWS, using r5.4xlarge instances as control-plane nodes and m5.2xlarge instances as worker nodes. On a large and dense cluster with three control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted, or fails. The failures can be due to unexpected issues with power, network, underlying infrastructure, or intentional cases where the cluster is restarted after shutting it down to save costs. The remaining two control plane nodes must handle the load in order to be highly available, which leads to increase in the resource usage. This is also expected during upgrades because the control plane nodes are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures, keep the overall CPU and memory resource usage on the control plane nodes to at most 60% of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the control plane nodes accordingly to avoid potential downtime due to lack of resources. Important The node sizing varies depending on the number of nodes and object counts in the cluster. It also depends on whether the objects are actively being created on the cluster. During object creation, the control plane is more active in terms of resource usage compared to when the objects are in the running phase. Operator Lifecycle Manager (OLM ) runs on the control plane nodes and its memory footprint depends on the number of namespaces and user installed operators that OLM needs to manage on the cluster. Control plane nodes need to be sized accordingly to avoid OOM kills. Following data points are based on the results from cluster maximums testing. Number of namespaces OLM memory at idle state (GB) OLM memory with 5 user operators installed (GB) 500 0.823 1.7 1000 1.2 2.5 1500 1.7 3.2 2000 2 4.4 3000 2.7 5.6 4000 3.8 7.6 5000 4.2 9.02 6000 5.8 11.3 7000 6.6 12.9 8000 6.9 14.8 9000 8 17.7 10,000 9.9 21.6 Important You can modify the control plane node size in a running OpenShift Container Platform 4.16 cluster for the following configurations only: Clusters installed with a user-provisioned installation method. AWS clusters installed with an installer-provisioned infrastructure installation method. Clusters that use a control plane machine set to manage control plane machines. For all other configurations, you must estimate your total node count and use the suggested control plane node size during installation. Important The recommendations are based on the data points captured on OpenShift Container Platform clusters with OpenShift SDN as the network plugin. 
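To stay under the 60% utilization guideline, it can help to check control plane usage periodically. One possible approach, assuming the cluster metrics stack is available, is the following sketch:
# Show current CPU and memory usage for the control plane nodes
# (requires working cluster metrics).
oc adm top nodes -l node-role.kubernetes.io/master=
Compare the reported usage against the allocatable capacity of the control plane nodes and increase the instance size before sustained usage approaches the 60% threshold.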
Note In OpenShift Container Platform 4.16, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and versions. The sizes are determined taking that into consideration. 6.4.4. Setting up CPU Manager To configure CPU manager, create a KubeletConfig custom resource (CR) and apply it to the desired set of nodes. Procedure Label a node by running the following command: # oc label node perf-node.example.com cpumanager=true To enable CPU Manager for all compute nodes, edit the CR by running the following command: # oc edit machineconfigpool worker Add the custom-kubelet: cpumanager-enabled label to metadata.labels section. metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. This is the default policy. static . This policy allows containers in guaranteed pods with integer CPU requests. It also limits access to exclusive CPUs on the node. If static , you must use a lowercase s . 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config by running the following command: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. Check for the merged kubelet config by running the following command: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the compute node for the updated kubelet.conf file by running the following command: # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 cpuManagerPolicy is defined when you create the KubeletConfig CR. 2 cpuManagerReconcilePeriod is defined when you create the KubeletConfig CR. Create a project by running the following command: USD oc new-project <project_name> Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. 
That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verification Verify that the pod is scheduled to the node that you labeled by running the following command: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that a CPU has been exclusively assigned to the pod by running the following command: # oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2 Example output NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m Verify that the cgroups are set up correctly. Get the process ID (PID) of the pause process by running the following commands: # oc debug node/perf-node.example.com sh-4.2# systemctl status | grep -B5 pause Note If the output returns multiple pause process entries, you must identify the correct pause process. Example output # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Verify that pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice subdirectory by running the following commands: # cd /sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus cgroup.procs` ; do echo -n "USDi "; cat USDi ; done Note Pods of other QoS tiers end up in child cgroups of the parent kubepods . Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task by running the following command: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod on the system cannot run on the core allocated for the Guaranteed pod. For example, to verify the pod in the besteffort QoS tier, run the following commands: # cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. 
The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s 6.5. Huge pages Understand and configure huge pages. 6.5.1. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. 6.5.2. How huge pages are consumed by apps Nodes must pre-allocate huge pages in order for the node to report its huge page capacity. A node can only pre-allocate huge pages for a single size. Huge pages can be consumed through container-level resource requirements using the resource name hugepages-<size> , where size is the most compact binary notation using integer values supported on a particular node. For example, if a node supports 2048KiB page sizes, it exposes a schedulable resource hugepages-2Mi . Unlike CPU or memory, huge pages do not support over-commitment. apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: "1Gi" cpu: "1" volumes: - name: hugepage emptyDir: medium: HugePages 1 Specify the amount of memory for hugepages as the exact amount to be allocated. Do not specify this value as the amount of memory for hugepages multiplied by the size of the page. For example, given a huge page size of 2MB, if you want to use 100MB of huge-page-backed RAM for your application, then you would allocate 50 huge pages. OpenShift Container Platform handles the math for you. As in the above example, you can specify 100MB directly. Allocating huge pages of a specific size Some platforms support multiple huge page sizes. 
To allocate huge pages of a specific size, precede the huge pages boot command parameters with a huge page size selection parameter hugepagesz=<size> . The <size> value must be specified in bytes with an optional scale suffix [ kKmMgG ]. The default huge page size can be defined with the default_hugepagesz=<size> boot parameter. Huge page requirements Huge page requests must equal the limits. This is the default if limits are specified, but requests are not. Huge pages are isolated at a pod scope. Container isolation is planned in a future iteration. EmptyDir volumes backed by huge pages must not consume more huge page memory than the pod request. Applications that consume huge pages via shmget() with SHM_HUGETLB must run with a supplemental group that matches proc/sys/vm/hugetlb_shm_group . 6.5.3. Configuring huge pages at boot time Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. There are two ways of reserving huge pages: at boot time and at run time. Reserving at boot time increases the possibility of success because the memory has not yet been significantly fragmented. The Node Tuning Operator currently supports boot time allocation of huge pages on specific nodes. Procedure To minimize node reboots, the order of the steps below needs to be followed: Label all nodes that need the same huge pages setting by a label. USD oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp= Create a file with the following content and name it hugepages-tuned-boottime.yaml : apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: "worker-hp" priority: 30 profile: openshift-node-hugepages 1 Set the name of the Tuned resource to hugepages . 2 Set the profile section to allocate huge pages. 3 Note the order of parameters is important as some platforms support huge pages of various sizes. 4 Enable machine config pool based matching. Create the Tuned hugepages object USD oc create -f hugepages-tuned-boottime.yaml Create a file with the following content and name it hugepages-mcp.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: "" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: "" Create the machine config pool: USD oc create -f hugepages-mcp.yaml Given enough non-fragmented memory, all the nodes in the worker-hp machine config pool should now have 50 2Mi huge pages allocated. USD oc get node <node_using_hugepages> -o jsonpath="{.status.allocatable.hugepages-2Mi}" 100Mi Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. 6.6. Understanding device plugins The device plugin provides a consistent and portable solution to consume hardware devices across clusters. The device plugin provides support for these devices through an extension mechanism, which makes these devices available to Containers, provides health checks of these devices, and securely shares them. 
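Once a device plugin advertises a hardware resource, workloads consume it through an ordinary resource request. The following pod is a minimal sketch that assumes a hypothetical extended resource name, example.com/device, registered by some device plugin on the node; substitute the resource name that your vendor's plugin actually advertises. Extended resources accept only whole integer amounts and cannot be overcommitted, so the request and limit must match.
apiVersion: v1
kind: Pod
metadata:
  name: device-consumer                                # illustrative name
spec:
  containers:
  - name: workload
    image: registry.example.com/my-workload:latest     # placeholder image
    resources:
      requests:
        example.com/device: "1"   # hypothetical resource advertised by a device plugin
      limits:
        example.com/device: "1"   # must equal the request; integer amounts only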
Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. A device plugin is a gRPC service running on the nodes (external to the kubelet ) that is responsible for managing specific hardware resources. Any device plugin must support following remote procedure calls (RPCs): service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} } Example device plugins Nvidia GPU device plugin for COS-based operating system Nvidia official GPU device plugin Solarflare device plugin KubeVirt device plugins: vfio and kvm Kubernetes device plugin for IBM(R) Crypto Express (CEX) cards Note For easy device plugin reference implementation, there is a stub device plugin in the Device Manager code: vendor/k8s.io/kubernetes/pkg/kubelet/cm/deviceplugin/device_plugin_stub.go . 6.6.1. Methods for deploying a device plugin Daemon sets are the recommended approach for device plugin deployments. Upon start, the device plugin will try to create a UNIX domain socket at /var/lib/kubelet/device-plugin/ on the node to serve RPCs from Device Manager. Since device plugins must manage hardware resources, access to the host file system, as well as socket creation, they must be run in a privileged security context. More specific details regarding deployment steps can be found with each device plugin implementation. 6.6.2. Understanding the Device Manager Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. You can advertise specialized hardware without requiring any upstream code changes. Important OpenShift Container Platform supports the device plugin API, but the device plugin Containers are supported by individual vendors. Device Manager advertises devices as Extended Resources . User pods can consume devices, advertised by Device Manager, using the same Limit/Request mechanism, which is used for requesting any other Extended Resource . Upon start, the device plugin registers itself with Device Manager invoking Register on the /var/lib/kubelet/device-plugins/kubelet.sock and starts a gRPC service at /var/lib/kubelet/device-plugins/<plugin>.sock for serving Device Manager requests. Device Manager, while processing a new registration request, invokes ListAndWatch remote procedure call (RPC) at the device plugin service. In response, Device Manager gets a list of Device objects from the plugin over a gRPC stream. Device Manager will keep watching on the stream for new updates from the plugin. 
On the plugin side, the plugin will also keep the stream open and whenever there is a change in the state of any of the devices, a new device list is sent to the Device Manager over the same streaming connection. While handling a new pod admission request, Kubelet passes requested Extended Resources to the Device Manager for device allocation. Device Manager checks its database to verify whether a corresponding plugin exists. If the plugin exists and has free allocatable devices recorded in its local cache, the Allocate RPC is invoked at that particular device plugin. Additionally, device plugins can also perform several other device-specific operations, such as driver installation, device initialization, and device resets. These functionalities vary from implementation to implementation. 6.6.3. Enabling Device Manager Enable Device Manager to implement a device plugin to advertise specialized hardware without any upstream code changes. Device Manager provides a mechanism for advertising specialized node hardware resources with the help of plugins known as device plugins. Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by viewing the machine config: # oc describe machineconfig <name> For example: # oc describe machineconfig 00-worker Example output Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1 1 Label required for the Device Manager. Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a Device Manager CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3 1 Assign a name to the CR. 2 Enter the label from the Machine Config Pool. 3 Set DevicePlugins to true . Create the Device Manager: USD oc create -f devicemgr.yaml Example output kubeletconfig.machineconfiguration.openshift.io/devicemgr created Ensure that Device Manager was actually enabled by confirming that /var/lib/kubelet/device-plugins/kubelet.sock is created on the node. This is the UNIX domain socket on which the Device Manager gRPC server listens for new plugin registrations. This sock file is created when the Kubelet is started only if Device Manager is enabled. 6.7. Taints and tolerations Understand and work with taints and tolerations. 6.7.1. Understanding taints and tolerations A taint allows a node to refuse a pod to be scheduled unless that pod has a matching toleration . You apply taints to a node through the Node specification ( NodeSpec ) and apply tolerations to a pod through the Pod specification ( PodSpec ). When you apply a taint to a node, the scheduler cannot place a pod on that node unless the pod can tolerate the taint. Example taint in a node specification apiVersion: v1 kind: Node metadata: name: my-node #... spec: taints: - effect: NoExecute key: key1 value: value1 #... Example toleration in a Pod spec apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Equal" value: "value1" effect: "NoExecute" tolerationSeconds: 3600 #... Taints and tolerations consist of a key, value, and effect. Table 6.1. Taint and toleration components Parameter Description key The key is any string, up to 253 characters.
The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. value The value is any string, up to 63 characters. The value must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores. effect The effect is one of the following: NoSchedule [1] New pods that do not match the taint are not scheduled onto that node. Existing pods on the node remain. PreferNoSchedule New pods that do not match the taint might be scheduled onto that node, but the scheduler tries not to. Existing pods on the node remain. NoExecute New pods that do not match the taint cannot be scheduled onto that node. Existing pods on the node that do not have a matching toleration are removed. operator Equal The key / value / effect parameters must match. This is the default. Exists The key / effect parameters must match. You must leave a blank value parameter, which matches any. If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... A toleration matches a taint: If the operator parameter is set to Equal : the key parameters are the same; the value parameters are the same; the effect parameters are the same. If the operator parameter is set to Exists : the key parameters are the same; the effect parameters are the same. The following taints are built into OpenShift Container Platform: node.kubernetes.io/not-ready : The node is not ready. This corresponds to the node condition Ready=False . node.kubernetes.io/unreachable : The node is unreachable from the node controller. This corresponds to the node condition Ready=Unknown . node.kubernetes.io/memory-pressure : The node has memory pressure issues. This corresponds to the node condition MemoryPressure=True . node.kubernetes.io/disk-pressure : The node has disk pressure issues. This corresponds to the node condition DiskPressure=True . node.kubernetes.io/network-unavailable : The node network is unavailable. node.kubernetes.io/unschedulable : The node is unschedulable. node.cloudprovider.kubernetes.io/uninitialized : When the node controller is started with an external cloud provider, this taint is set on a node to mark it as unusable. After a controller from the cloud-controller-manager initializes this node, the kubelet removes this taint. node.kubernetes.io/pid-pressure : The node has pid pressure. This corresponds to the node condition PIDPressure=True . Important OpenShift Container Platform does not set a default pid.available evictionHard . 6.7.2. Adding taints and tolerations You add tolerations to pods and taints to nodes to allow the node to control which pods should or should not be scheduled on them. For existing pods and nodes, you should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with an Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... 
spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod can remain bound to a node before being evicted. For example: Sample pod configuration file with an Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" 1 effect: "NoExecute" tolerationSeconds: 3600 #... 1 The Exists operator does not take a value . This example places a taint on node1 that has key key1 , value value1 , and taint effect NoExecute . Add a taint to a node by using the following command with the parameters described in the Taint and toleration components table: USD oc adm taint nodes <node_name> <key>=<value>:<effect> For example: USD oc adm taint nodes node1 key1=value1:NoExecute This command places a taint on node1 that has key key1 , value value1 , and effect NoExecute . Note If you add a NoSchedule taint to a control plane node, the node must have the node-role.kubernetes.io/master=:NoSchedule taint, which is added by default. For example: apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node #... spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #... The tolerations on the pod match the taint on the node. A pod with either toleration can be scheduled onto node1 . 6.7.3. Adding taints and tolerations using a compute machine set You can add taints to nodes using a compute machine set. All nodes associated with the MachineSet object are updated with the taint. Tolerations respond to taints added by a compute machine set in the same manner as taints added directly to the nodes. Procedure Add a toleration to a pod by editing the Pod spec to include a tolerations stanza: Sample pod configuration file with Equal operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" 1 value: "value1" operator: "Equal" effect: "NoExecute" tolerationSeconds: 3600 2 #... 1 The toleration parameters, as described in the Taint and toleration components table. 2 The tolerationSeconds parameter specifies how long a pod is bound to a node before being evicted. For example: Sample pod configuration file with Exists operator apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key1" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... Add the taint to the MachineSet object: Edit the MachineSet YAML for the nodes you want to taint or you can create a new MachineSet object: USD oc edit machineset <machineset> Add the taint to the spec.template.spec section: Example taint in a compute machine set specification apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset #... spec: #... template: #... spec: taints: - effect: NoExecute key: key1 value: value1 #... This example places a taint that has the key key1 , value value1 , and taint effect NoExecute on the nodes. 
Scale down the compute machine set to 0: USD oc scale --replicas=0 machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0 Wait for the machines to be removed. Scale up the compute machine set as needed: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Wait for the machines to start. The taint is added to the nodes associated with the MachineSet object. 6.7.4. Binding a user to a node using taints and tolerations If you want to dedicate a set of nodes for exclusive use by a particular set of users, add a toleration to their pods. Then, add a corresponding taint to those nodes. The pods with the tolerations are allowed to use the tainted nodes or any other nodes in the cluster. If you want ensure the pods are scheduled to only those tainted nodes, also add a label to the same set of nodes and add a node affinity to the pods so that the pods can only be scheduled onto nodes with that label. Procedure To configure a node so that users can use only that node: Add a corresponding taint to those nodes: For example: USD oc adm taint nodes node1 dedicated=groupName:NoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my-node #... spec: taints: - key: dedicated value: groupName effect: NoSchedule #... Add a toleration to the pods by writing a custom admission controller. 6.7.5. Controlling nodes with special hardware using taints and tolerations In a cluster where a small subset of nodes have specialized hardware, you can use taints and tolerations to keep pods that do not need the specialized hardware off of those nodes, leaving the nodes for pods that do need the specialized hardware. You can also require pods that need specialized hardware to use specific nodes. You can achieve this by adding a toleration to pods that need the special hardware and tainting the nodes that have the specialized hardware. Procedure To ensure nodes with specialized hardware are reserved for specific pods: Add a toleration to pods that need the special hardware. For example: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "disktype" value: "ssd" operator: "Equal" effect: "NoSchedule" tolerationSeconds: 3600 #... Taint the nodes that have the specialized hardware using one of the following commands: USD oc adm taint nodes <node-name> disktype=ssd:NoSchedule Or: USD oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule Tip You can alternatively apply the following YAML to add the taint: kind: Node apiVersion: v1 metadata: name: my_node #... spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #... 6.7.6. Removing taints and tolerations You can remove taints from nodes and tolerations from pods as needed. You should add the toleration to the pod first, then add the taint to the node to avoid pods being removed from the node before you can add the toleration. 
Procedure To remove taints and tolerations: To remove a taint from a node: USD oc adm taint nodes <node-name> <key>- For example: USD oc adm taint nodes ip-10-0-132-248.ec2.internal key1- Example output node/ip-10-0-132-248.ec2.internal untainted To remove a toleration from a pod, edit the Pod spec to remove the toleration: apiVersion: v1 kind: Pod metadata: name: my-pod #... spec: tolerations: - key: "key2" operator: "Exists" effect: "NoExecute" tolerationSeconds: 3600 #... 6.8. Topology Manager Understand and work with Topology Manager. 6.8.1. Topology Manager policies Topology Manager aligns Pod resources of all Quality of Service (QoS) classes by collecting topology hints from Hint Providers, such as CPU Manager and Device Manager, and using the collected hints to align the Pod resources. Topology Manager supports four allocation policies, which you assign in the KubeletConfig custom resource (CR) named cpumanager-enabled : none policy This is the default policy and does not perform any topology alignment. best-effort policy For each container in a pod with the best-effort topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager stores this and admits the pod to the node. restricted policy For each container in a pod with the restricted topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager stores the preferred NUMA Node affinity for that container. If the affinity is not preferred, Topology Manager rejects this pod from the node, resulting in a pod in a Terminated state with a pod admission failure. single-numa-node policy For each container in a pod with the single-numa-node topology management policy, kubelet calls each Hint Provider to discover their resource availability. Using this information, the Topology Manager determines if a single NUMA Node affinity is possible. If it is, the pod is admitted to the node. If a single NUMA Node affinity is not possible, the Topology Manager rejects the pod from the node. This results in a pod in a Terminated state with a pod admission failure. 6.8.2. Setting up Topology Manager To use Topology Manager, you must configure an allocation policy in the KubeletConfig custom resource (CR) named cpumanager-enabled . This file might exist if you have set up CPU Manager. If the file does not exist, you can create the file. Prerequisites Configure the CPU Manager policy to be static . Procedure To activate Topology Manager: Configure the Topology Manager allocation policy in the custom resource. USD oc edit KubeletConfig cpumanager-enabled apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2 1 This parameter must be static with a lowercase s . 2 Specify your selected Topology Manager allocation policy. Here, the policy is single-numa-node . Acceptable values are: default , best-effort , restricted , single-numa-node . 6.8.3. Pod interactions with Topology Manager policies The example Pod specs below help illustrate pod interactions with Topology Manager. 
The following pod runs in the BestEffort QoS class because no resource requests or limits are specified. spec: containers: - name: nginx image: nginx The pod runs in the Burstable QoS class because requests are less than limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" requests: memory: "100Mi" If the selected policy is anything other than none , Topology Manager would not consider either of these Pod specifications. The last example pod below runs in the Guaranteed QoS class because requests are equal to limits. spec: containers: - name: nginx image: nginx resources: limits: memory: "200Mi" cpu: "2" example.com/device: "1" requests: memory: "200Mi" cpu: "2" example.com/device: "1" Topology Manager would consider this pod. The Topology Manager would consult the hint providers, which are CPU Manager and Device Manager, to get topology hints for the pod. Topology Manager will use this information to store the best topology for this container. In the case of this pod, CPU Manager and Device Manager will use this stored information at the resource allocation stage. 6.9. Resource requests and overcommitment For each compute resource, a container may specify a resource request and limit. Scheduling decisions are made based on the request to ensure that a node has enough capacity available to meet the requested value. If a container specifies limits, but omits requests, the requests are defaulted to the limits. A container is not able to exceed the specified limit on the node. The enforcement of limits is dependent upon the compute resource type. If a container makes no request or limit, the container is scheduled to a node with no resource guarantees. In practice, the container is able to consume as much of the specified resource as is available with the lowest local priority. In low resource situations, containers that specify no resource requests are given the lowest quality of service. Scheduling is based on resources requested, while quota and hard limits refer to resource limits, which can be set higher than requested resources. The difference between request and limit determines the level of overcommit; for instance, if a container is given a memory request of 1Gi and a memory limit of 2Gi, it is scheduled based on the 1Gi request being available on the node, but could use up to 2Gi; so it is 100% overcommitted. 6.10. Cluster-level overcommit using the Cluster Resource Override Operator The Cluster Resource Override Operator is an admission webhook that allows you to control the level of overcommit and manage container density across all the nodes in your cluster. The Operator controls how nodes in specific projects can exceed defined memory and CPU limits. The Operator modifies the ratio between the requests and limits that are set on developer containers. In conjunction with a per-project limit range that specifies limits and defaults, you can achieve the desired level of overcommit. You must install the Cluster Resource Override Operator by using the OpenShift Container Platform console or CLI as shown in the following sections. After you deploy the Cluster Resource Override Operator, the Operator modifies all new pods in specific namespaces. The Operator does not edit pods that existed before you deployed the Operator. 
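Before turning to the Operator's configuration, the overcommit arithmetic from the previous section can be made concrete. The following minimal pod sketch uses illustrative names and a placeholder image: it requests 1Gi of memory with a 2Gi limit, so it is scheduled against 1Gi of node capacity but may consume up to 2Gi, that is, 100% overcommit for this container.
apiVersion: v1
kind: Pod
metadata:
  name: overcommit-example                     # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest     # placeholder image
    resources:
      requests:
        memory: "1Gi"   # scheduling is based on this value
      limits:
        memory: "2Gi"   # the container may consume up to this value: 100% overcommit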
During the installation, you create a ClusterResourceOverride custom resource (CR), where you set the level of overcommit, as shown in the following example: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. If a container memory limit has been specified or defaulted, the memory request is overridden to this percentage of the limit, between 1-100. The default is 50. 3 Optional. If a container CPU limit has been specified or defaulted, the CPU request is overridden to this percentage of the limit, between 1-100. The default is 25. 4 Optional. If a container memory limit has been specified or defaulted, the CPU limit is overridden to a percentage of the memory limit, if specified. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request (if configured). The default is 200. Note The Cluster Resource Override Operator overrides have no effect if limits have not been set on containers. Create a LimitRange object with default limits per individual project or configure limits in Pod specs for the overrides to apply. When configured, you can enable overrides on a per-project basis by applying the following label to the Namespace object for each project where you want the overrides to apply. For example, you can configure override so that infrastructure components are not subject to the overrides. apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" # ... The Operator watches for the ClusterResourceOverride CR and ensures that the ClusterResourceOverride admission webhook is installed into the same namespace as the operator. For example, a pod has the following resources limits: apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: "512Mi" cpu: "2000m" # ... The Cluster Resource Override Operator intercepts the original pod request, then overrides the resources according to the configuration set in the ClusterResourceOverride object. apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace # ... spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: "1" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi # ... 1 The CPU limit has been overridden to 1 because the limitCPUToMemoryPercent parameter is set to 200 in the ClusterResourceOverride object. As such, 200% of the memory limit, 512Mi in CPU terms, is 1 CPU core. 2 The CPU request is now 250m because the cpuRequestToLimit is set to 25 in the ClusterResourceOverride object. As such, 25% of the 1 CPU core is 250m. 6.10.1. Installing the Cluster Resource Override Operator using the web console You can use the OpenShift Container Platform web console to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. 
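The LimitRange mentioned in the prerequisite could look like the following minimal sketch; the object name, namespace, and default values are illustrative only. With container defaults in place, pods that omit resource settings still receive limits, which gives the Cluster Resource Override Operator something to override.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits          # illustrative name
  namespace: my-namespace       # project where overrides are enabled
spec:
  limits:
  - type: Container
    default:                    # default limits applied when a container sets none
      cpu: "500m"
      memory: "512Mi"
    defaultRequest:             # default requests applied when a container sets none
      cpu: "250m"
      memory: "256Mi"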
Procedure To install the Cluster Resource Override Operator using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, navigate to Home Projects Click Create Project . Specify clusterresourceoverride-operator as the name of the project. Click Create . Navigate to Operators OperatorHub . Choose ClusterResourceOverride Operator from the list of available Operators and click Install . On the Install Operator page, make sure A specific Namespace on the cluster is selected for Installation Mode . Make sure clusterresourceoverride-operator is selected for Installed Namespace . Select an Update Channel and Approval Strategy . Click Install . On the Installed Operators page, click ClusterResourceOverride . On the ClusterResourceOverride Operator details page, click Create ClusterResourceOverride . On the Create ClusterResourceOverride page, click YAML view and edit the YAML template to set the overcommit values as needed: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 # ... 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Click Create . Check the current state of the admission webhook by checking the status of the cluster custom resource: On the ClusterResourceOverride Operator page, click cluster . On the ClusterResourceOverride Details page, click YAML . The mutatingWebhookConfigurationRef section appears when the webhook is called. apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 6.10.2. Installing the Cluster Resource Override Operator using the CLI You can use the OpenShift Container Platform CLI to install the Cluster Resource Override Operator to help control overcommit in your cluster. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. 
You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To install the Cluster Resource Override Operator using the CLI: Create a namespace for the Cluster Resource Override Operator: Create a Namespace object YAML file (for example, cro-namespace.yaml ) for the Cluster Resource Override Operator: apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator Create the namespace: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-namespace.yaml Create an Operator group: Create an OperatorGroup object YAML file (for example, cro-og.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator Create the Operator Group: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-og.yaml Create a subscription: Create a Subscription object YAML file (for example, cro-sub.yaml) for the Cluster Resource Override Operator: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: "stable" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-sub.yaml Create a ClusterResourceOverride custom resource (CR) object in the clusterresourceoverride-operator namespace: Change to the clusterresourceoverride-operator namespace. USD oc project clusterresourceoverride-operator Create a ClusterResourceOverride object YAML file (for example, cro-cr.yaml) for the Cluster Resource Override Operator: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4 1 The name must be cluster . 2 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 3 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 4 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Create the ClusterResourceOverride object: USD oc create -f <file-name>.yaml For example: USD oc create -f cro-cr.yaml Verify the current state of the admission webhook by checking the status of the cluster custom resource. USD oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml The mutatingWebhookConfigurationRef section appears when the webhook is called. 
Example output apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {"apiVersion":"operator.autoscaling.openshift.io/v1","kind":"ClusterResourceOverride","metadata":{"annotations":{},"name":"cluster"},"spec":{"podResourceOverride":{"spec":{"cpuRequestToLimitPercent":25,"limitCPUToMemoryPercent":200,"memoryRequestToLimitPercent":50}}}} creationTimestamp: "2019-12-18T22:35:02Z" generation: 1 name: cluster resourceVersion: "127622" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: # ... mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: "127621" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3 # ... 1 Reference to the ClusterResourceOverride admission webhook. 6.10.3. Configuring cluster-level overcommit The Cluster Resource Override Operator requires a ClusterResourceOverride custom resource (CR) and a label for each project where you want the Operator to control overcommit. Prerequisites The Cluster Resource Override Operator has no effect if limits have not been set on containers. You must specify default limits for a project using a LimitRange object or configure limits in Pod specs for the overrides to apply. Procedure To modify cluster-level overcommit: Edit the ClusterResourceOverride CR: apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3 # ... 1 Optional. Specify the percentage to override the container memory limit, if used, between 1-100. The default is 50. 2 Optional. Specify the percentage to override the container CPU limit, if used, between 1-100. The default is 25. 3 Optional. Specify the percentage to override the container memory limit, if used. Scaling 1Gi of RAM at 100 percent is equal to 1 CPU core. This is processed prior to overriding the CPU request, if configured. The default is 200. Ensure the following label has been added to the Namespace object for each project where you want the Cluster Resource Override Operator to control overcommit: apiVersion: v1 kind: Namespace metadata: # ... labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: "true" 1 # ... 1 Add this label to each project. 6.11. Node-level overcommit You can use various ways to control overcommit on specific nodes, such as quality of service (QOS) guarantees, CPU limits, or reserve resources. You can also disable overcommit for specific nodes and specific projects. 6.11.1. Understanding compute resources and containers The node-enforced behavior for compute resources is specific to the resource type. 6.11.1.1. Understanding container CPU requests A container is guaranteed the amount of CPU it requests and is additionally able to consume excess CPU available on the node, up to any limit specified by the container. If multiple containers are attempting to use excess CPU, CPU time is distributed based on the amount of CPU requested by each container. 
For example, if one container requested 500m of CPU time and another container requested 250m of CPU time, then any extra CPU time available on the node is distributed among the containers in a 2:1 ratio. If a container specifies a limit, it is throttled so that it cannot use more CPU than the specified limit. CPU requests are enforced using the CFS shares support in the Linux kernel. By default, CPU limits are enforced using the CFS quota support in the Linux kernel over a 100ms measuring interval, though this can be disabled. 6.11.1.2. Understanding container memory requests A container is guaranteed the amount of memory it requests. A container can use more memory than requested, but once it exceeds its requested amount, it could be terminated in a low memory situation on the node. If a container uses less memory than requested, it will not be terminated unless system tasks or daemons need more memory than was accounted for in the node's resource reservation. If a container specifies a limit on memory, it is immediately terminated if it exceeds the limit amount. 6.11.2. Understanding overcommitment and quality of service classes A node is overcommitted when it has a pod scheduled that makes no request, or when the sum of limits across all pods on that node exceeds available machine capacity. In an overcommitted environment, it is possible that the pods on the node will attempt to use more compute resource than is available at any given point in time. When this occurs, the node must give priority to one pod over another. The facility used to make this decision is referred to as a Quality of Service (QoS) Class. A pod is designated as one of three QoS classes with decreasing order of priority: Table 6.2. Quality of Service Classes Priority Class Name Description 1 (highest) Guaranteed If limits and optionally requests are set (not equal to 0) for all resources and they are equal, then the pod is classified as Guaranteed . 2 Burstable If requests and optionally limits are set (not equal to 0) for all resources, and they are not equal, then the pod is classified as Burstable . 3 (lowest) BestEffort If requests and limits are not set for any of the resources, then the pod is classified as BestEffort . Memory is an incompressible resource, so in low memory situations, containers that have the lowest priority are terminated first: Guaranteed containers are considered top priority, and are guaranteed to only be terminated if they exceed their limits, or if the system is under memory pressure and there are no lower priority containers that can be evicted. Burstable containers under system memory pressure are more likely to be terminated once they exceed their requests and no other BestEffort containers exist. BestEffort containers are treated with the lowest priority. Processes in these containers are first to be terminated if the system runs out of memory. 6.11.2.1. Understanding how to reserve memory across quality of service tiers You can use the qos-reserved parameter to specify a percentage of memory to be reserved by a pod in a particular QoS level. This feature attempts to reserve requested resources so that pods in lower QoS classes cannot use resources requested by pods in higher QoS classes. OpenShift Container Platform uses the qos-reserved parameter as follows: A value of qos-reserved=memory=100% will prevent the Burstable and BestEffort QoS classes from consuming memory that was requested by a higher QoS class.
This increases the risk of inducing OOM on BestEffort and Burstable workloads in favor of increasing memory resource guarantees for Guaranteed and Burstable workloads. A value of qos-reserved=memory=50% will allow the Burstable and BestEffort QoS classes to consume half of the memory requested by a higher QoS class. A value of qos-reserved=memory=0% will allow a Burstable and BestEffort QoS classes to consume up to the full node allocatable amount if available, but increases the risk that a Guaranteed workload will not have access to requested memory. This condition effectively disables this feature. 6.11.3. Understanding swap memory and QOS You can disable swap by default on your nodes to preserve quality of service (QOS) guarantees. Otherwise, physical resources on a node can oversubscribe, affecting the resource guarantees the Kubernetes scheduler makes during pod placement. For example, if two guaranteed pods have reached their memory limit, each container could start using swap memory. Eventually, if there is not enough swap space, processes in the pods can be terminated due to the system being oversubscribed. Failing to disable swap results in nodes not recognizing that they are experiencing MemoryPressure , resulting in pods not receiving the memory they made in their scheduling request. As a result, additional pods are placed on the node to further increase memory pressure, ultimately increasing your risk of experiencing a system out of memory (OOM) event. Important If swap is enabled, any out-of-resource handling eviction thresholds for available memory will not work as expected. Take advantage of out-of-resource handling to allow pods to be evicted from a node when it is under memory pressure, and rescheduled on an alternative node that has no such pressure. 6.11.4. Understanding nodes overcommitment In an overcommitted environment, it is important to properly configure your node to provide best system behavior. When the node starts, it ensures that the kernel tunable flags for memory management are set properly. The kernel should never fail memory allocations unless it runs out of physical memory. To ensure this behavior, OpenShift Container Platform configures the kernel to always overcommit memory by setting the vm.overcommit_memory parameter to 1 , overriding the default operating system setting. OpenShift Container Platform also configures the kernel not to panic when it runs out of memory by setting the vm.panic_on_oom parameter to 0 . A setting of 0 instructs the kernel to call oom_killer in an Out of Memory (OOM) condition, which kills processes based on priority You can view the current setting by running the following commands on your nodes: USD sysctl -a |grep commit Example output #... vm.overcommit_memory = 0 #... USD sysctl -a |grep panic Example output #... vm.panic_on_oom = 0 #... Note The above flags should already be set on nodes, and no further action is required. You can also perform the following configurations for each node: Disable or enforce CPU limits using CPU CFS quotas Reserve resources for system processes Reserve memory across quality of service tiers 6.11.5. Disabling or enforcing CPU limits using CPU CFS quotas Nodes by default enforce specified CPU limits using the Completely Fair Scheduler (CFS) quota support in the Linux kernel. If you disable CPU limit enforcement, it is important to understand the impact on your node: If a container has a CPU request, the request continues to be enforced by CFS shares in the Linux kernel. 
If a container does not have a CPU request, but does have a CPU limit, the CPU request defaults to the specified CPU limit, and is enforced by CFS shares in the Linux kernel. If a container has both a CPU request and limit, the CPU request is enforced by CFS shares in the Linux kernel, and the CPU limit has no impact on the node. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Sample configuration for disabling CPU limits apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: cpuCfsQuota: false 3 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Set the cpuCfsQuota parameter to false . Run the following command to create the CR: USD oc create -f <file_name>.yaml 6.11.6. Reserving resources for system processes To provide more reliable scheduling and minimize node resource overcommitment, each node can reserve a portion of its resources for use by system daemons that are required to run on your node for your cluster to function. In particular, it is recommended that you reserve resources for incompressible resources such as memory. Procedure To explicitly reserve resources for non-pod processes, allocate node resources by specifying resources available for scheduling. For more details, see Allocating Resources for Nodes. 6.11.7. Disabling overcommitment for a node When enabled, overcommitment can be disabled on each node. Procedure To disable overcommitment in a node, run the following command on that node: USD sysctl -w vm.overcommit_memory=0 6.12. Project-level limits To help control overcommit, you can set per-project resource limit ranges, specifying memory and CPU limits and defaults for a project that overcommit cannot exceed. For information on project-level resource limits, see Additional resources. Alternatively, you can disable overcommitment for specific projects. 6.12.1. Disabling overcommitment for a project When enabled, overcommitment can be disabled per-project. For example, you can allow infrastructure components to be configured independently of overcommitment. Procedure Create or edit the namespace object file. Add the following annotation: apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: "false" 1 # ... 1 Setting this annotation to false disables overcommit for this namespace. 6.13. Freeing node resources using garbage collection Understand and use garbage collection. 6.13.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds.
When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long as the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 6.3. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node condition would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the eviction-pressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. Note Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. You cannot set an eviction pressure transition period to zero seconds. 6.13.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. The default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 6.4. Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod.
A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the previous spins. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 6.13.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: USD oc label machineconfigpool worker custom-kubelet=small-pods Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. Sample configuration for a container garbage collection CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly.
7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value. Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as 'true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 6.14. Using the Node Tuning Operator Understand and use the Node Tuning Operator. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node. Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 6.14.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. 
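The command in the following procedure prints the default Tuned custom resource as YAML. Abbreviated here for orientation (the complete default CR is listed in "Default profiles set on a cluster" later in this section), the output has the following overall shape:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: default
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - name: openshift
    data: |
      [main]
      summary=Optimize systems running OpenShift (provider specific parent profile)
      # ... remaining profile data omitted here
  recommend:
  - profile: openshift-control-plane
    priority: 30
    match:
    - label: node-role.kubernetes.io/master
    - label: node-role.kubernetes.io/infra
  - profile: openshift-node
    priority: 40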
Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.14.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 
7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. Example: Node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. 
This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: Machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/ocp-tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.14.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. 
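As a consolidated illustration of the profile: and recommend: sections described above, the following sketch applies a single sysctl to nodes that carry a hypothetical label; the profile name, the node label, and the sysctl value are assumptions chosen for illustration only, not shipped defaults:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-node-netdev-backlog        # hypothetical CR and profile name
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Example profile that raises the network device backlog
      include=openshift-node
      [sysctl]
      net.core.netdev_max_backlog=16384      # illustrative value
    name: openshift-node-netdev-backlog
  recommend:
  - match:
    - label: tuning.example.com/netdev-backlog   # hypothetical node label
    priority: 20
    profile: openshift-node-netdev-backlog
Because the profile includes openshift-node, only the delta from that shipped OpenShift profile needs to be specified, and the priority of 20 is lower than the default recommendations shown above, so it takes precedence on matching nodes.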
You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.14.4. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 6.15. Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, the default value for podsPerCore is 10 and the default value for maxPods is 250 . This means that unless the node has 25 cores or more, by default, podsPerCore will be the limiting factor. Run the following command to create the CR: USD oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller: USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . USD oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 6.16. 
Machine scaling with static IP addresses After you deployed your cluster to run nodes with static IP addresses, you can scale an instance of a machine or a machine set to use one of these static IP addresses. Additional resources Static IP addresses for vSphere nodes 6.16.1. Scaling machines to use static IP addresses You can scale additional machine sets to use pre-defined static IP addresses on your cluster. For this configuration, you need to create a machine resource YAML file and then define static IP addresses in this file. Prerequisites You deployed a cluster that runs at least one node with a configured static IP address. Procedure Create a machine resource YAML file and define static IP address network information in the network parameter. Example of a machine resource YAML file with static IP address information defined in the network parameter. apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - gateway: 192.168.204.1 1 ipAddrs: - 192.168.204.8/24 2 nameservers: 3 - 192.168.204.1 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: "" template: <vm_template_name> userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_ip> status: {} 1 The IP address for the default gateway for the network interface. 2 Lists IPv4, IPv6, or both IP addresses that installation program passes to the network interface. Both IP families must use the same network interface for the default network. 3 Lists a DNS nameserver. You can define up to 3 DNS nameservers. Consider defining more than one DNS nameserver to take advantage of DNS resolution if that one DNS nameserver becomes unreachable. Create a machine custom resource (CR) by entering the following command in your terminal: USD oc create -f <file_name>.yaml 6.16.2. Machine set scaling of machines with configured static IP addresses You can use a machine set to scale machines with configured static IP addresses. After you configure a machine set to request a static IP address for a machine, the machine controller creates an IPAddressClaim resource in the openshift-machine-api namespace. The external controller then creates an IPAddress resource and binds any static IP addresses to the IPAddressClaim resource. Important Your organization might use numerous types of IP address management (IPAM) services. 
If you want to enable a particular IPAM service on OpenShift Container Platform, you might need to manually create the IPAddressClaim resource in a YAML definition and then bind a static IP address to this resource by entering the following command in your oc CLI: USD oc create -f <ipaddressclaim_filename> The following demonstrates an example of an IPAddressClaim resource: kind: IPAddressClaim metadata: finalizers: - machine.openshift.io/ip-claim-protection name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 namespace: openshift-machine-api spec: poolRef: apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool status: {} The machine controller updates the machine with a status of IPAddressClaimed to indicate that a static IP address has successfully bound to the IPAddressClaim resource. The machine controller applies the same status to a machine with multiple IPAddressClaim resources that each contain a bound static IP address.The machine controller then creates a virtual machine and applies static IP addresses to any nodes listed in the providerSpec of a machine's configuration. 6.16.3. Using a machine set to scale machines with configured static IP addresses You can use a machine set to scale machines with configured static IP addresses. The example in the procedure demonstrates the use of controllers for scaling machines in a machine set. Prerequisites You deployed a cluster that runs at least one node with a configured static IP address. Procedure Configure a machine set by specifying IP pool information in the network.devices.addressesFromPools schema of the machine set's YAML file: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/memoryMb: "8192" machine.openshift.io/vCPU: "4" labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: replicas: 0 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: ipam: "true" machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: {} network: devices: - addressesFromPools: 1 - group: ipamcontroller.example.io name: static-ci-pool resource: IPPool nameservers: - "192.168.204.1" 2 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: "" template: rvanderp4-dev-9n5wg-rhcos-generated-region-generated-zone userDataSecret: name: worker-user-data workspace: datacenter: IBMCdatacenter datastore: /IBMCdatacenter/datastore/vsanDatastore folder: /IBMCdatacenter/vm/rvanderp4-dev-9n5wg resourcePool: /IBMCdatacenter/host/IBMCcluster//Resources server: vcenter.ibmc.devcluster.openshift.com 1 Specifies an IP pool, which lists a static IP address or a range of static IP addresses. The IP Pool can either be a reference to a custom resource definition (CRD) or a resource supported by the IPAddressClaims resource handler. The machine controller accesses static IP addresses listed in the machine set's configuration and then allocates each address to each machine. 
2 Lists a nameserver. You must specify a nameserver for nodes that receive static IP address, because the Dynamic Host Configuration Protocol (DHCP) network configuration does not support static IP addresses. Scale the machine set by entering the following commands in your oc CLI: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api After each machine is scaled up, the machine controller creates an IPAddressClaim resource. Optional: Check that the IPAddressClaim resource exists in the openshift-machine-api namespace by entering the following command: USD oc get ipaddressclaims.ipam.cluster.x-k8s.io -n openshift-machine-api Example oc CLI output that lists two IP pools listed in the openshift-machine-api namespace NAME POOL NAME POOL KIND cluster-dev-9n5wg-worker-0-m7529-claim-0-0 static-ci-pool IPPool cluster-dev-9n5wg-worker-0-wdqkt-claim-0-0 static-ci-pool IPPool Create an IPAddress resource by entering the following command: USD oc create -f ipaddress.yaml The following example shows an IPAddress resource with defined network configuration information and one defined static IP address: apiVersion: ipam.cluster.x-k8s.io/v1alpha1 kind: IPAddress metadata: name: cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0 namespace: openshift-machine-api spec: address: 192.168.204.129 claimRef: 1 name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 gateway: 192.168.204.1 poolRef: 2 apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool prefix: 23 1 The name of the target IPAddressClaim resource. 2 Details information about the static IP address or addresses from your nodes. Note By default, the external controller automatically scans any resources in the machine set for recognizable address pool types. When the external controller finds kind: IPPool defined in the IPAddress resource, the controller binds any static IP addresses to the IPAddressClaim resource. Update the IPAddressClaim status with a reference to the IPAddress resource: USD oc --type=merge patch IPAddressClaim cluster-dev-9n5wg-worker-0-m7529-claim-0-0 -p='{"status":{"addressRef": {"name": "cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0"}}}' -n openshift-machine-api --subresource=status
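To confirm that the bound address is now referenced by the claim, you can print the claim back as YAML; this is a sketch that assumes the example resource names used above, with the output abbreviated: USD oc get ipaddressclaims.ipam.cluster.x-k8s.io cluster-dev-9n5wg-worker-0-m7529-claim-0-0 -n openshift-machine-api -o yaml If the patch succeeded, the tail of the output contains the reference that was applied:
status:
  addressRef:
    name: cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0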
[ "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.16-for-rhel-8-x86_64-rpms\"", "yum install openshift-ansible openshift-clients jq", "subscription-manager register --username=<user_name> --password=<password>", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --disable=\"*\"", "yum repolist", "yum-config-manager --disable <repo_id>", "yum-config-manager --disable \\*", "subscription-manager repos --enable=\"rhel-8-for-x86_64-baseos-rpms\" --enable=\"rhel-8-for-x86_64-appstream-rpms\" --enable=\"rhocp-4.16-for-rhel-8-x86_64-rpms\" --enable=\"fast-datapath-for-rhel-8-x86_64-rpms\"", "systemctl disable --now firewalld.service", "[all:vars] ansible_user=root 1 #ansible_become=True 2 openshift_kubeconfig_path=\"~/.kube/config\" 3 [new_workers] 4 mycluster-rhel8-0.example.com mycluster-rhel8-1.example.com", "cd /usr/share/ansible/openshift-ansible", "ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml 1", "oc get nodes -o wide", "oc adm cordon <node_name> 1", "oc adm drain <node_name> --force --delete-emptydir-data --ignore-daemonsets 1", "oc delete nodes <node_name> 1", "oc get nodes -o wide", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k http://<HTTP_server>/worker.ign", "RHCOS_VHD_ORIGIN_URL=USD(oc -n openshift-machine-config-operator get configmap/coreos-bootimages -o jsonpath='{.data.stream}' | jq -r '.architectures.<architecture>.artifacts.metal.formats.iso.disk.location')", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img 2", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/worker.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.29.4 master-1 Ready master 63m v1.29.4 master-2 Ready master 64m 
v1.29.4", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.29.4 master-1 Ready master 73m v1.29.4 master-2 Ready master 74m v1.29.4 worker-0 Ready worker 11m v1.29.4 worker-1 Ready worker 11m v1.29.4", "oc project openshift-machine-api", "oc get secret worker-user-data --template='{{index .data.userData | base64decode}}' | jq > userData.txt", "{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"https:....\" } ] }, \"security\": { \"tls\": { \"certificateAuthorities\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,.....==\" } ] } }, \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"/dev/nvme1n1\", 1 \"partitions\": [ { \"label\": \"var\", \"sizeMiB\": 50000, 2 \"startMiB\": 0 3 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/var\", 4 \"format\": \"xfs\", 5 \"path\": \"/var\" 6 } ] }, \"systemd\": { \"units\": [ 7 { \"contents\": \"[Unit]\\nBefore=local-fs.target\\n[Mount]\\nWhere=/var\\nWhat=/dev/disk/by-partlabel/var\\nOptions=defaults,pquota\\n[Install]\\nWantedBy=local-fs.target\\n\", \"enabled\": true, \"name\": \"var.mount\" } ] } }", "oc get secret worker-user-data --template='{{index .data.disableTemplating | base64decode}}' | jq > disableTemplating.txt", "oc create secret generic worker-user-data-x5 --from-file=userData=userData.txt --from-file=disableTemplating=disableTemplating.txt", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 name: worker-us-east-2-nvme1n1 1 namespace: openshift-machine-api spec: replicas: 1 selector: matchLabels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b template: metadata: labels: machine.openshift.io/cluster-api-cluster: auto-52-92tf4 machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: auto-52-92tf4-worker-us-east-2b spec: metadata: {} providerSpec: value: ami: id: ami-0c2dbd95931a apiVersion: awsproviderconfig.openshift.io/v1beta1 blockDevices: - DeviceName: /dev/nvme1n1 2 ebs: encrypted: true iops: 0 volumeSize: 120 volumeType: gp2 - DeviceName: /dev/nvme1n2 3 ebs: encrypted: true iops: 0 volumeSize: 50 volumeType: gp2 credentialsSecret: name: aws-cloud-credentials deviceIndex: 0 iamInstanceProfile: id: auto-52-92tf4-worker-profile instanceType: m6i.large kind: AWSMachineProviderConfig metadata: creationTimestamp: null placement: availabilityZone: us-east-2b region: us-east-2 securityGroups: - filters: - name: tag:Name values: - auto-52-92tf4-worker-sg subnet: id: subnet-07a90e5db1 
tags: - name: kubernetes.io/cluster/auto-52-92tf4 value: owned userDataSecret: name: worker-user-data-x5 4", "oc create -f <file-name>.yaml", "oc get machineset", "NAME DESIRED CURRENT READY AVAILABLE AGE ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1a 1 1 1 1 124m ci-ln-2675bt2-76ef8-bdgsc-worker-us-east-1b 2 2 2 2 124m worker-us-east-2-nvme1n1 1 1 1 1 2m35s 1", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-128-78.ec2.internal Ready worker 117m v1.29.4 ip-10-0-146-113.ec2.internal Ready master 127m v1.29.4 ip-10-0-153-35.ec2.internal Ready worker 118m v1.29.4 ip-10-0-176-58.ec2.internal Ready master 126m v1.29.4 ip-10-0-217-135.ec2.internal Ready worker 2m57s v1.29.4 1 ip-10-0-225-248.ec2.internal Ready master 127m v1.29.4 ip-10-0-245-59.ec2.internal Ready worker 116m v1.29.4", "oc debug node/<node-name> -- chroot /host lsblk", "oc debug node/ip-10-0-217-135.ec2.internal -- chroot /host lsblk", "NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT nvme0n1 202:0 0 120G 0 disk |-nvme0n1p1 202:1 0 1M 0 part |-nvme0n1p2 202:2 0 127M 0 part |-nvme0n1p3 202:3 0 384M 0 part /boot `-nvme0n1p4 202:4 0 119.5G 0 part /sysroot nvme1n1 202:16 0 50G 0 disk `-nvme1n1p1 202:17 0 48.8G 0 part /var 1", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "apiVersion: machine.openshift.io/v1beta1 kind: MachineHealthCheck metadata: name: example 1 namespace: openshift-machine-api spec: selector: matchLabels: machine.openshift.io/cluster-api-machine-role: <role> 2 machine.openshift.io/cluster-api-machine-type: <role> 3 machine.openshift.io/cluster-api-machineset: <cluster_name>-<label>-<zone> 4 unhealthyConditions: - type: \"Ready\" timeout: \"300s\" 5 status: \"False\" - type: \"Ready\" timeout: \"300s\" 6 status: \"Unknown\" maxUnhealthy: \"40%\" 7 nodeStartupTimeout: \"10m\" 8", "oc apply -f healthcheck.yml", "oc get machinesets.machine.openshift.io -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api", "oc annotate machines.machine.openshift.io/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "oc edit machinesets.machine.openshift.io <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines.machine.openshift.io", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: infra spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,infra]}", "oc get kubeletconfig", "NAME AGE set-kubelet-config 15m", "oc get mc | grep kubelet", "99-worker-generated-kubelet-1 b5c5119de007945b6fe6fb215db3b8e2ceb12511 3.2.0 26m", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: set-kubelet-config 1", "oc label machineconfigpool worker custom-kubelet=set-kubelet-config", "oc get machineconfig", "oc describe node <node_name>", "oc describe node ci-ln-5grqprb-f76d1-ncnqq-worker-a-mdv94", "Allocatable: attachable-volumes-aws-ebs: 25 cpu: 3500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 15341844Ki pods: 250", "apiVersion: machineconfiguration.openshift.io/v1 kind: 
KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config 1 kubeletConfig: 2 podPidsLimit: 8192 containerLogMaxSize: 50Mi maxPods: 500", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-config spec: machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config kubeletConfig: maxPods: <pod_count> kubeAPIBurst: <burst_rate> kubeAPIQPS: <QPS>", "oc label machineconfigpool worker custom-kubelet=set-kubelet-config", "oc create -f change-maxPods-cr.yaml", "oc get kubeletconfig", "NAME AGE set-kubelet-config 15m", "oc describe node <node_name>", "Allocatable: attachable-volumes-gce-pd: 127 cpu: 3500m ephemeral-storage: 123201474766 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 14225400Ki pods: 500 1", "oc get kubeletconfigs set-kubelet-config -o yaml", "spec: kubeletConfig: containerLogMaxSize: 50Mi maxPods: 500 podPidsLimit: 8192 machineConfigPoolSelector: matchLabels: custom-kubelet: set-kubelet-config status: conditions: - lastTransitionTime: \"2021-06-30T17:04:07Z\" message: Success status: \"True\" type: Success", "oc edit machineconfigpool worker", "spec: maxUnavailable: <node_count>", "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc new-project <project_name>", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: cpumanager image: gcr.io/google_containers/pause:3.2 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "oc describe node --selector='cpumanager=true' | grep -i cpumanager- -B2", "NAMESPACE NAME CPU Requests CPU Limits Memory Requests Memory Limits Age cpuman cpumanager-mlrrz 1 (28%) 1 (28%) 1G (13%) 1G (13%) 27m", "oc debug node/perf-node.example.com", "sh-4.2# systemctl status | grep -B5 pause", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd 
/sys/fs/cgroup/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope", "for i in `ls cpuset.cpus cgroup.procs` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus", "oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s", "apiVersion: v1 kind: Pod metadata: generateName: hugepages-volume- spec: containers: - securityContext: privileged: true image: rhel7:latest command: - sleep - inf name: example volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: hugepages-2Mi: 100Mi 1 memory: \"1Gi\" cpu: \"1\" volumes: - name: hugepage emptyDir: medium: HugePages", "oc label node <node_using_hugepages> node-role.kubernetes.io/worker-hp=", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: hugepages 1 namespace: openshift-cluster-node-tuning-operator spec: profile: 2 - data: | [main] summary=Boot time configuration for hugepages include=openshift-node [bootloader] cmdline_openshift_node_hugepages=hugepagesz=2M hugepages=50 3 name: openshift-node-hugepages recommend: - machineConfigLabels: 4 machineconfiguration.openshift.io/role: \"worker-hp\" priority: 30 profile: openshift-node-hugepages", "oc create -f hugepages-tuned-boottime.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-hp labels: worker-hp: \"\" spec: machineConfigSelector: matchExpressions: - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,worker-hp]} nodeSelector: matchLabels: node-role.kubernetes.io/worker-hp: \"\"", "oc create -f hugepages-mcp.yaml", "oc get node <node_using_hugepages> -o jsonpath=\"{.status.allocatable.hugepages-2Mi}\" 100Mi", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. 
Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 key1=value1:NoExecute", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit machineset <machineset>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc adm taint nodes node1 dedicated=groupName:NoSchedule", "kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #", "oc adm taint nodes <node-name> disktype=ssd:NoSchedule", "oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule", "kind: Node apiVersion: v1 metadata: name: my_node # spec: 
taints: - key: disktype value: ssd effect: PreferNoSchedule #", "oc adm taint nodes <node-name> <key>-", "oc adm taint nodes ip-10-0-132-248.ec2.internal key1-", "node/ip-10-0-132-248.ec2.internal untainted", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit KubeletConfig cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s topologyManagerPolicy: single-numa-node 2", "spec: containers: - name: nginx image: nginx", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" requests: memory: \"100Mi\"", "spec: containers: - name: nginx image: nginx resources: limits: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\" requests: memory: \"200Mi\" cpu: \"2\" example.com/device: \"1\"", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f 
<file-name>.yaml", "oc create -f cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", "apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: 
machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "apiVersion: machine.openshift.io/v1beta1 kind: Machine metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: <role> machine.openshift.io/cluster-api-machine-type: <role> machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: creationTimestamp: null network: devices: - gateway: 192.168.204.1 1 ipAddrs: - 192.168.204.8/24 2 nameservers: 3 - 192.168.204.1 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: \"\" template: <vm_template_name> userDataSecret: name: worker-user-data workspace: datacenter: <vcenter_data_center_name> datastore: <vcenter_datastore_name> folder: <vcenter_vm_folder_path> resourcepool: <vsphere_resource_pool> server: <vcenter_server_ip> status: {}", "oc create -f <file_name>.yaml", "oc create -f <ipaddressclaim_filename>", "kind: IPAddressClaim metadata: finalizers: - machine.openshift.io/ip-claim-protection name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 namespace: openshift-machine-api spec: poolRef: apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool status: {}", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: annotations: machine.openshift.io/memoryMb: \"8192\" machine.openshift.io/vCPU: \"4\" labels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> name: <infrastructure_id>-<role> namespace: openshift-machine-api spec: replicas: 0 selector: matchLabels: machine.openshift.io/cluster-api-cluster: <infrastructure_id> 
machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> template: metadata: labels: ipam: \"true\" machine.openshift.io/cluster-api-cluster: <infrastructure_id> machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role> spec: lifecycleHooks: {} metadata: {} providerSpec: value: apiVersion: machine.openshift.io/v1beta1 credentialsSecret: name: vsphere-cloud-credentials diskGiB: 120 kind: VSphereMachineProviderSpec memoryMiB: 8192 metadata: {} network: devices: - addressesFromPools: 1 - group: ipamcontroller.example.io name: static-ci-pool resource: IPPool nameservers: - \"192.168.204.1\" 2 networkName: qe-segment-204 numCPUs: 4 numCoresPerSocket: 2 snapshot: \"\" template: rvanderp4-dev-9n5wg-rhcos-generated-region-generated-zone userDataSecret: name: worker-user-data workspace: datacenter: IBMCdatacenter datastore: /IBMCdatacenter/datastore/vsanDatastore folder: /IBMCdatacenter/vm/rvanderp4-dev-9n5wg resourcePool: /IBMCdatacenter/host/IBMCcluster//Resources server: vcenter.ibmc.devcluster.openshift.com", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc get ipaddressclaims.ipam.cluster.x-k8s.io -n openshift-machine-api", "NAME POOL NAME POOL KIND cluster-dev-9n5wg-worker-0-m7529-claim-0-0 static-ci-pool IPPool cluster-dev-9n5wg-worker-0-wdqkt-claim-0-0 static-ci-pool IPPool", "oc create -f ipaddress.yaml", "apiVersion: ipam.cluster.x-k8s.io/v1alpha1 kind: IPAddress metadata: name: cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0 namespace: openshift-machine-api spec: address: 192.168.204.129 claimRef: 1 name: cluster-dev-9n5wg-worker-0-m7529-claim-0-0 gateway: 192.168.204.1 poolRef: 2 apiGroup: ipamcontroller.example.io kind: IPPool name: static-ci-pool prefix: 23", "oc --type=merge patch IPAddressClaim cluster-dev-9n5wg-worker-0-m7529-claim-0-0 -p='{\"status\":{\"addressRef\": {\"name\": \"cluster-dev-9n5wg-worker-0-m7529-ipaddress-0-0\"}}}' -n openshift-machine-api --subresource=status" ]
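To sanity-check taint changes like the ones shown in the commands above, a minimal shell sketch; the node name node1 and the key key1 are placeholder values taken from the examples, not part of any specific procedure:
oc adm taint nodes node1 key1=value1:NoSchedule     # apply a taint, as in the examples above
oc describe node node1 | grep -i taints             # confirm the taint appears on the node
oc adm taint nodes node1 key1-                      # remove it again with the trailing-hyphen syntax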
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/postinstallation_configuration/post-install-node-tasks
Chapter 50. Virtualization
Chapter 50. Virtualization USB 3.0 support for KVM guests USB 3.0 host adapter (xHCI) emulation for KVM guests remains a Technology Preview in Red Hat Enterprise Linux 7.4. (BZ#1103193) Select Intel network adapters now support SR-IOV as a guest on Hyper-V In this update for Red Hat Enterprise Linux guest virtual machines running on Hyper-V, a new PCI passthrough driver adds the ability to use the single-root I/O virtualization (SR-IOV) feature for Intel network adapters supported by the ixgbevf driver. This ability is enabled when the following conditions are met: SR-IOV support is enabled for the network interface controller (NIC) SR-IOV support is enabled for the virtual NIC SR-IOV support is enabled for the virtual switch The virtual function (VF) from the NIC is attached to the virtual machine. The feature is currently supported with Microsoft Windows Server 2016. (BZ#1348508) No-IOMMU mode for VFIO drivers As a Technology Preview, this update adds No-IOMMU mode for virtual function I/O (VFIO) drivers. The No-IOMMU mode provides the user with full user-space I/O (UIO) access to a direct memory access (DMA)-capable device without a I/O memory management unit (IOMMU). Note that in addition to not being supported, using this mode is not secure due to the lack of I/O management provided by IOMMU. (BZ# 1299662 ) The ibmvnic Device Driver has been added The ibmvnic Device Driver was introduced as a Technology Preview in Red Hat Enterprise Linux 7.3 for IBM POWER architectures. vNIC (Virtual Network Interface Controller) is a new PowerVM virtual networking technology that delivers enterprise capabilities and simplifies network management. It is a high-performance, efficient technology that when combined with SR-IOV NIC provides bandwidth control Quality of Service (QoS) capabilities at the virtual NIC level. vNIC significantly reduces virtualization overhead, resulting in lower latencies and fewer server resources, including CPU and memory, required for network virtualization. (BZ#947163) virt-v2v can now use vmx configuration files to convert VMware guests As a Technology Preview, the virt-v2v utility now includes the vmx input mode, which enables the user to convert a guest virtual machine from a VMware vmx configuration file. Note that to do this, you also need access to the corresponding VMware storage, for example by mounting the storage using NFS. (BZ# 1441197 ) virt-v2v can convert Debian and Ubuntu guests As a technology preview, the virt-v2v utility can now convert Debian and Ubuntu guest virtual machines. Note that the following problems currently occur when performing this conversion: virt-v2v cannot change the default kernel in the GRUB2 configuration, and the kernel configured in the guest is not changed during the conversion, even if a more optimal version of the kernel is available on the guest. After converting a Debian or Ubuntu VMware guest to KVM, the name of the guest's network interface may change, and thus requires manual configuration. (BZ# 1387213 ) Virtio devices can now use vIOMMU As a Technology Preview, this update enables virtio devices to use virtual Input/Output Memory Management Unit (vIOMMU). This guarantees the security of Direct Memory Access (DMA) by allowing the device to DMA only to permitted addresses. However, note that only guest virtual machines using Red Hat Enterprise Linux 7.4 or later are able to use this feature. 
(BZ# 1283251 , BZ#1464891) Open Virtual Machine Firmware The Open Virtual Machine Firmware (OVMF) is available as a Technology Preview in Red Hat Enterprise Linux 7. OVMF is a UEFI secure boot environment for AMD64 and Intel 64 guests. However, OVMF is not bootable with virtualization components available in RHEL 7. Note that OVMF is fully supported in RHEL 8. (BZ#653382)
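To illustrate the vmx input mode described above, a hedged sketch of a conversion run; the NFS export, mount point, guest path, and output directory are example values, and the VMware storage must already be accessible to the conversion host:
# Mount the VMware datastore over NFS (export path is an example).
mount esxi01.example.com:/vmfs/volumes/datastore1 /mnt/vmware
# Convert the guest from its .vmx file into local libvirt-managed storage.
virt-v2v -i vmx /mnt/vmware/guest1/guest1.vmx -o local -os /var/tmp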
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/technology_previews_virtualization
17.8. Configuring the bind-dyndb-ldap Plug-in
17.8. Configuring the bind-dyndb-ldap Plug-in The bind-dyndb-ldap system plug-in contains a DNS record cache for zones and a history of successful DNS resolutions. Maintaining the cache improves lookup performance in the Directory Server because it is not necessary to query the directory services every time there is a new DNS request. When this plug-in is installed and IdM is configured to manage DNS, then a new configuration section is added to the plug-in configuration. Example 17.11. Default dynamic-db Configuration This configuration uses implied default values for other plug-in behaviors, like how long it maintains the cache. The assumed default configuration can be changed by adding arguments to the dynamic-db "ipa" entry. The additional parameters are listed in Table 17.4, "Additional bind-dyndb-ldap Configuration Parameters" . Note Both cache updates and new zone detection can be forced by reloading the name server: Table 17.4. Additional bind-dyndb-ldap Configuration Parameters Parameter Description Default Value cache_ttl Sets how long, in seconds, DNS records are kept in the cache and considered valid. 120 (seconds); this is defined in the bind-dyndb-ldap plug-in. zone_refresh The frequency, in seconds, at which the server checks the DNS configuration in the Directory Server for new zones. 0 (disabled) psearch Enables persistent searches for the Directory Server so the BIND service immediately receives an update notification when a new DNS zone is added. yes 17.8.1. Changing the DNS Cache Setting To improve DNS performance, it may be necessary to change the cache setting. By default, DNS records are kept in cache and considered valid for 120 seconds. This means that if a DNS record changes, it will not (necessarily) be propagated to the name server for up to 120 seconds. If the Directory Server has a high traffic volume or if records do not change frequently, then the cache time can be increased to improve performance by adding the cache_ttl parameter. 17.8.2. Disabling Persistent Searches The DNS service receives its information through the bind-dyndb-ldap plug-in. The plug-in resolves only zones which were configured and enabled in the Directory Server when the name server started. When the name service restarts, the plug-in reloads its configuration and identifies any new zones or any new resource records. However, the bind-dyndb-ldap plug-in pulls zone and resource record information from the IdM LDAP directory, and it is possible to pull information from that directory apart from simply restarting the plug-in. The bind-dyndb-ldap plug-in searches for zone changes actively by keeping a persistent connection open to the Directory Server and immediately catching any changes. Persistent searches provide immediate notification of changes and maintain a local cache of the configuration data. Note A persistent search catches updates both to zones and to zone resource records. Because persistent searches leave an ongoing, open connection with the Directory Server, there can be some performance issues. Performance implications are covered in the Red Hat Directory Server Administrator's Guide . Persistent searches are enabled by default but can be disabled in the psearch argument:
[ "dynamic-db \"ipa\" { library \"ldap.so\"; arg \"uri ldapi://%2fvar%2frun%2fslapd-EXAMPLE.socket\"; arg \"base cn=dns,dc=example,dc=com\"; arg \"fake_mname server.example.com.\"; arg \"auth_method sasl\"; arg \"sasl_mech GSSAPI\"; arg \"sasl_user DNS/server.example.com\"; arg \"zone_refresh 0\"; arg \"psearch yes\"; arg \"serial_autoincrement 1\"; };", "arg \" argument value \";", "rndc reload", "dynamic-db \"ipa\" { arg \"cache_ttl 1800\"; };", "dynamic-db \"ipa\" { arg \"psearch no\"; };" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/bind-dyndb-ldap-config
Chapter 39. Adjusting ID ranges manually
Chapter 39. Adjusting ID ranges manually An IdM server generates unique user ID (UID) and group ID (GID) numbers. By creating and assigning different ID ranges to replicas, it also ensures that they never generate the same ID numbers. By default, this process is automatic. However, you can manually adjust the IdM ID range during the IdM server installation, or manually define a replica's DNA ID range. 39.1. ID ranges ID numbers are divided into ID ranges . Keeping separate numeric ranges for individual servers and replicas eliminates the chance that an ID number issued for an entry is already used by another entry on another server or replica. Note that there are two distinct types of ID ranges: The IdM ID range , which is assigned during the installation of the first server. This range cannot be modified after it is created. However, you can create a new IdM ID range in addition to the original one. For more information, see Automatic ID ranges assignment and Adding a new IdM ID range . The Distributed Numeric Assignment (DNA) ID ranges, which can be modified by the user. These have to fit within an existing IdM ID range. For more information, see Assigning DNA ID ranges manually . Replicas can also have a next DNA ID range assigned. A replica uses its next range when it runs out of IDs in its current range. Next ranges are not assigned automatically when a replica is deleted and you must assign them manually . The ranges are updated and shared between the server and replicas by the DNA plug-in, as part of the back end 389 Directory Server instance for the domain. The DNA range definition is set by two attributes: The server's next available number: the low end of the DNA range The range size: the number of IDs in the DNA range The initial bottom range is set during the plug-in instance configuration. After that, the plug-in updates the bottom value. Breaking the available numbers into ranges allows the servers to continually assign numbers without overlapping with each other. 39.2. Automatic ID ranges assignment IdM ID ranges By default, an IdM ID range is automatically assigned during the IdM server installation. The ipa-server-install command randomly selects and assigns a range of 200,000 IDs from a total of 10,000 possible ranges. Selecting a random range in this way significantly reduces the probability of conflicting IDs in case you decide to merge two separate IdM domains in the future. Note This IdM ID range cannot be modified after it is created. You can only manually adjust the Distributed Numeric Assignment (DNA) ID ranges, using the commands described in Assigning DNA ID ranges manually . A DNA range matching the IdM ID range is automatically created during installation. DNA ID ranges If you have a single IdM server installed, it controls the whole DNA ID range. When you install a new replica and the replica requests its own DNA ID range, the initial ID range for the server splits and is distributed between the server and replica: the replica receives half of the remaining DNA ID range that is available on the initial server. The server and replica then use their respective portions of the original ID range for new user or group entries. Also, if the replica is close to depleting its allocated ID range and fewer than 100 IDs remain, the replica contacts the other available servers to request a new DNA ID range. Important When you install a replica, it does not immediately receive an ID range. 
A replica receives an ID range the first time the DNA plug-in is used, for example when you first add a user. If the initial server stops functioning before the replica requests a DNA ID range from it, the replica is unable to contact the server to request the ID range. Attempting to add a new user on the replica then fails. In such situations, you can find out what ID range is assigned to the disabled server , and assign an ID range to the replica manually . 39.3. Assigning the IdM ID range manually during server installation You can override the default behavior and set an IdM ID range manually instead of having it assigned randomly. Important Do not set ID ranges that include UID values of 1000 and lower; these values are reserved for system use. Also, do not set an ID range that would include the 0 value; the SSSD service does not handle the 0 ID value. Procedure You can define the IdM ID range manually during server installation by using the following two options with ipa-server-install : --idstart gives the starting value for UID and GID numbers. --idmax gives the maximum UID and GID number; by default, the value is the --idstart starting value plus 199,999. Verification To check if the ID range was assigned correctly, you can display the assigned IdM ID range by using the ipa idrange-find command: 39.4. Adding a new IdM ID range In some cases, you may want to create a new IdM ID range in addition to the original one; for example, when a replica has run out of IDs and the original IdM ID range is depleted. Important Adding a new IdM ID range does not create new DNA ID ranges automatically. You must assign new DNA ID ranges to replicas manually as needed. For more information about how to do this, see assigning DNA ID ranges manually . Procedure To create a new IdM ID range, use the ipa idrange-add command. You must specify the new range name, the first ID number of the range, the range size, and the first RID number of the primary and secondary RID range: Restart the Directory Server service on all IdM servers in the deployment: This ensures that when you create users with UIDs from the new range, they have security identifiers (SIDs) assigned. Optional: Update the ID range immediately: Clear the System Security Services Daemon (SSSD) cache: Restart the SSSD daemon: Note If you do not clear the SSSD cache and restart the service, SSSD only detects the new ID range when it updates the domain list and other configuration data stored on the IdM server. Verification You can check if the new range is set correctly by using the ipa idrange-find command: 39.5. The role of security and relative identifiers in IdM ID ranges An Identity Management (IdM) ID range is defined by several parameters: The range name The first POSIX ID of the range The range size: the number of IDs in the range The first relative identifier (RID) of the corresponding RID range The first RID of the secondary RID range You can view these values by using the ipa idrange-show command: Security identifiers The data from the ID ranges of the local domain are used by the IdM server internally to assign unique security identifiers (SIDs) to IdM users and groups. The SIDs are stored in the user and group objects. A user's SID consists of the following: The domain SID The user's relative identifier (RID), which is a four-digit 32-bit value appended to the domain SID For example, if the domain SID is S-1-5-21-123-456-789 and the RID of a user from this domain is 1008, then the user has the SID of S-1-5-21-123-456-789-1008. 
Relative identifiers The RID itself is computed in the following way: Subtract the first POSIX ID of the range from the user's POSIX UID, and add the first RID of the corresponding RID range to the result. For example, if the UID of idmuser is 196600008, the first POSIX ID is 196600000, and the first RID is 1000, then idmuser 's RID is 1008. Note The algorithm computing the user's RID checks if a given POSIX ID falls into the ID range allocated before it computes a corresponding RID. For example, if the first ID is 196600000 and the range size is 200000, then the POSIX ID of 1600000 is outside of the ID range and the algorithm does not compute a RID for it. Secondary relative identifiers In IdM, a POSIX UID can be identical to a POSIX GID. This means that if idmuser already exists with the UID of 196600008, you can still create a new idmgroup group with the GID of 196600008. However, a SID can define only one object, a user or a group. The SID of S-1-5-21-123-456-789-1008 that has already been created for idmuser cannot be shared with idmgroup . An alternative SID must be generated for idmgroup . IdM uses a secondary relative identifier , or secondary RID, to avoid conflicting SIDs. This secondary RID consists of the following: The secondary RID base A range size; by default identical with the base range size In the example above, the secondary RID base is set to 1000000. To compute the RID for the newly created idmgroup : subtract the first POSIX ID of the range from the user's POSIX UID, and add the first RID of the secondary RID range to the result. idmgroup is therefore assigned the RID of 1000008. Consequently, the SID of idmgroup is S-1-5-21-123-456-789-1000008. IdM uses the secondary RID to compute a SID only if a user or a group object was previously created with a manually set POSIX ID. Otherwise, automatic assignment prevents assigning the same ID twice. Additional resources Using Ansible to add a new local IdM ID range 39.6. Using Ansible to add a new local IdM ID range In some cases, you may want to create a new Identity Management (IdM) ID range in addition to the original one; for example, when a replica has run out of IDs and the original IdM ID range is depleted. The following example describes how to create a new IdM ID range by using an Ansible playbook. Note Adding a new IdM ID range does not create new DNA ID ranges automatically. You need to assign new DNA ID ranges manually as needed. For more information about how to do this, see Assigning DNA ID ranges manually . Prerequisites On the control node: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to your ~/ MyPlaybooks / directory: Create the idrange-present.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: SSH to ipaserver and restart the Directory Server: This ensures that when you create users with UIDs from the new range, they have security identifiers (SIDs) assigned. 
Optional: Update the ID range immediately: On ipaserver , clear the System Security Services Daemon (SSSD) cache: On ipaserver , restart the SSSD daemon: Note If you do not clear the SSSD cache and restart the service, SSSD only detects the new ID range when it updates the domain list and other configuration data stored on the IdM server. Verification You can check if the new range is set correctly by using the ipa idrange-find command: Additional resources The role of security and relative identifiers in IdM ID ranges 39.7. Removing an ID range after removing a trust to AD If you have removed a trust between your IdM and Active Directory (AD) environments, you might want to remove the ID range associated with it. Warning IDs allocated to ID ranges associated with trusted domains might still be used for ownership of files and directories on systems enrolled into IdM. If you remove the ID range that corresponds to an AD trust that you have removed, you will not be able to resolve the ownership of any files and directories owned by AD users. Prerequisites You have removed a trust to an AD environment. Procedure Display all the ID ranges that are currently in use: Identify the name of the ID range associated with the trust you have removed. The first part of the name of the ID range is the name of the trust, for example AD.EXAMPLE.COM_id_range . Remove the range: Restart the SSSD service to remove references to the ID range you have removed. 39.8. Displaying currently assigned DNA ID ranges You can display both the currently active Distributed Numeric Assignment (DNA) ID range on a server, as well as its next DNA ID range if it has one assigned. Procedure To display which DNA ID ranges are configured for the servers in the topology, use the following commands: ipa-replica-manage dnarange-show displays the current DNA ID range that is set on all servers or, if you specify a server, only on the specified server, for example: ipa-replica-manage dnanextrange-show displays the next DNA ID range currently set on all servers or, if you specify a server, only on the specified server, for example: 39.9. Manual ID range assignment In certain situations, it is necessary to manually assign a Distributed Numeric Assignment (DNA) ID range, for example when: A replica has run out of IDs and the IdM ID range is depleted A replica has exhausted the DNA ID range that was assigned to it, and requesting additional IDs failed because no more free IDs are available in the IdM range. To solve this situation, extend the DNA ID range assigned to the replica. You can do this in two ways: Shorten the DNA ID range assigned to a different replica, then assign the newly available values to the depleted replica. Create a new IdM ID range, then set a new DNA ID range for the replica within this created IdM range. For information about how to create a new IdM ID range, see Adding a new IdM ID range . A replica stopped functioning A replica's DNA ID range is not automatically retrieved when the replica stops functioning and must be deleted, which means the DNA ID range previously assigned to the replica becomes unavailable. You want to recover the DNA ID range and make it available for other replicas. To do this, find out what the ID range values are, before manually assigning that range to a different server. Also, to avoid duplicate UIDs or GIDs, make sure that no ID value from the recovered range was previously assigned to a user or group; you can do this by examining the UIDs and GIDs of existing users and groups. 
You can manually assign a DNA ID range to a replica using the commands in Assigning DNA ID ranges manually . Note If you assign a new DNA ID range, the UIDs of the already existing entries on the server or replica stay the same. This does not pose a problem because even if you change the current DNA ID range, IdM keeps a record of what ranges were assigned in the past. 39.10. Assigning DNA ID ranges manually In some cases, you may need to manually assign Distributed Numeric Assignment (DNA) ID ranges to existing replicas, for example to reassign a DNA ID range assigned to a non-functioning replica. For more information, see Manual ID range assignment . When adjusting a DNA ID range manually, make sure that the newly adjusted range is included in the IdM ID range; you can check this using the ipa idrange-find command. Otherwise, the command fails. Important Be careful not to create overlapping ID ranges. If any of the ID ranges you assign to servers or replicas overlap, it could result in two different servers assigning the same ID value to different entries. Prerequisites Optional. If you are recovering a DNA ID range from a non-functioning replica, first find the ID range using the commands described in Displaying currently assigned DNA ID ranges . Procedure To define the current DNA ID range for a specified server, use ipa-replica-manage dnarange-set : To define the next DNA ID range for a specified server, use ipa-replica-manage dnanextrange-set : Verification You can check that the new DNA ranges are set correctly by using the commands described in Displaying the currently assigned DNA ID ranges.
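To make the RID arithmetic from the section on security and relative identifiers concrete, a small shell sketch that reuses the example values given above (UID 196600008, first POSIX ID 196600000, RID base 1000, secondary RID base 1000000):
uid=196600008          # POSIX UID of idmuser (example value from above)
base_id=196600000      # first POSIX ID of the range
rid_base=1000          # first RID of the corresponding RID range
sec_rid_base=1000000   # first RID of the secondary RID range
echo "RID:           $(( uid - base_id + rid_base ))"       # prints 1008
echo "secondary RID: $(( uid - base_id + sec_rid_base ))"   # prints 1000008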
[ "ipa idrange-find --------------- 1 range matched --------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range ---------------------------- Number of entries returned 1 ----------------------------", "ipa idrange-add IDM.EXAMPLE.COM_new_range --base-id 5000 --range-size 1000 --rid-base 300000 --secondary-rid-base 1300000 --type ipa-local ipa: WARNING: Service [email protected] requires restart on IPA server <all IPA servers> to apply configuration changes. ------------------------------------------ Added ID range \"IDM.EXAMPLE.COM_new_range\" ------------------------------------------ Range name: IDM.EXAMPLE.COM_new_range First Posix ID of the range: 5000 Number of IDs in the range: 1000 First RID of the corresponding RID range: 300000 First RID of the secondary RID range: 1300000 Range type: local domain range", "systemctl restart [email protected]", "sss_cache -E", "systemctl restart sssd", "ipa idrange-find ---------------- 2 ranges matched ---------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range Range name: IDM.EXAMPLE.COM_new_range First Posix ID of the range: 5000 Number of IDs in the range: 1000 First RID of the corresponding RID range: 300000 First RID of the secondary RID range: 1300000 Range type: local domain range ---------------------------- Number of entries returned 2 ----------------------------", "ipa idrange-show IDM.EXAMPLE.COM_id_range Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 196600000 Number of IDs in the range: 200000 First RID of the corresponding RID range: 1000 First RID of the secondary RID range: 1000000 Range type: local domain range", "cd ~/ MyPlaybooks /", "--- - name: Playbook to manage idrange hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure local idrange is present ipaidrange: ipaadmin_password: \"{{ ipaadmin_password }}\" name: new_id_range base_id: 12000000 range_size: 200000 rid_base: 1000000 secondary_rid_base: 200000000", "ansible-playbook --vault-password-file=password_file -v -i inventory idrange-present.yml", "systemctl restart [email protected]", "sss_cache -E", "systemctl restart sssd", "ipa idrange-find ---------------- 2 ranges matched ---------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range Range name: IDM.EXAMPLE.COM_new_id_range First Posix ID of the range: 12000000 Number of IDs in the range: 200000 Range type: local domain range ---------------------------- Number of entries returned 2 ----------------------------", "ipa idrange-find", "ipa idrange-del AD.EXAMPLE.COM_id_range", "systemctl restart sssd", "ipa-replica-manage dnarange-show serverA.example.com: 1001-1500 serverB.example.com: 1501-2000 serverC.example.com: No range set ipa-replica-manage dnarange-show serverA.example.com serverA.example.com: 1001-1500", "ipa-replica-manage dnanextrange-show serverA.example.com: 2001-2500 serverB.example.com: No on-deck range set serverC.example.com: No on-deck range set ipa-replica-manage dnanextrange-show serverA.example.com serverA.example.com: 2001-2500", "ipa-replica-manage dnarange-set serverA.example.com 1250-1499", "ipa-replica-manage dnanextrange-set serverB.example.com 1500-5000" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/adjusting-id-ranges-manually_managing-users-groups-hosts
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/release_notes_for_red_hat_fuse_7.13/making-open-source-more-inclusive
Chapter 11. Troubleshooting
Chapter 11. Troubleshooting This section describes resources for troubleshooting the Migration Toolkit for Containers (MTC). For known issues, see the MTC release notes . 11.1. MTC workflow You can migrate Kubernetes resources, persistent volume data, and internal container images to OpenShift Container Platform 4.16 by using the Migration Toolkit for Containers (MTC) web console or the Kubernetes API. MTC migrates the following resources: A namespace specified in a migration plan. Namespace-scoped resources: When the MTC migrates a namespace, it migrates all the objects and resources associated with that namespace, such as services or pods. Additionally, if a resource that exists in the namespace but not at the cluster level depends on a resource that exists at the cluster level, the MTC migrates both resources. For example, a security context constraint (SCC) is a resource that exists at the cluster level and a service account (SA) is a resource that exists at the namespace level. If an SA exists in a namespace that the MTC migrates, the MTC automatically locates any SCCs that are linked to the SA and also migrates those SCCs. Similarly, the MTC migrates persistent volumes that are linked to the persistent volume claims of the namespace. Note Cluster-scoped resources might have to be migrated manually, depending on the resource. Custom resources (CRs) and custom resource definitions (CRDs): MTC automatically migrates CRs and CRDs at the namespace level. Migrating an application with the MTC web console involves the following steps: Install the Migration Toolkit for Containers Operator on all clusters. You can install the Migration Toolkit for Containers Operator in a restricted environment with limited or no internet access. The source and target clusters must have network access to each other and to a mirror registry. Configure the replication repository, an intermediate object storage that MTC uses to migrate data. The source and target clusters must have network access to the replication repository during migration. If you are using a proxy server, you must configure it to allow network traffic between the replication repository and the clusters. Add the source cluster to the MTC web console. Add the replication repository to the MTC web console. Create a migration plan, with one of the following data migration options: Copy : MTC copies the data from the source cluster to the replication repository, and from the replication repository to the target cluster. Note If you are using direct image migration or direct volume migration, the images or volumes are copied directly from the source cluster to the target cluster. Move : MTC unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. The remote volume must be accessible to the source and target clusters. Note Although the replication repository does not appear in this diagram, it is required for migration. Run the migration plan, with one of the following options: Stage copies data to the target cluster without stopping the application. A stage migration can be run multiple times so that most of the data is copied to the target before migration. Running one or more stage migrations reduces the duration of the cutover migration. 
Cutover stops the application on the source cluster and moves the resources to the target cluster. Optional: You can clear the Halt transactions on the source cluster during migration checkbox. About MTC custom resources The Migration Toolkit for Containers (MTC) creates the following custom resources (CRs): MigCluster (configuration, MTC cluster): Cluster definition MigStorage (configuration, MTC cluster): Storage definition MigPlan (configuration, MTC cluster): Migration plan The MigPlan CR describes the source and target clusters, replication repository, and namespaces being migrated. It is associated with 0, 1, or many MigMigration CRs. Note Deleting a MigPlan CR deletes the associated MigMigration CRs. BackupStorageLocation (configuration, MTC cluster): Location of Velero backup objects VolumeSnapshotLocation (configuration, MTC cluster): Location of Velero volume snapshots MigMigration (action, MTC cluster): Migration, created every time you stage or migrate data. Each MigMigration CR is associated with a MigPlan CR. Backup (action, source cluster): When you run a migration plan, the MigMigration CR creates two Velero backup CRs on each source cluster: Backup CR #1 for Kubernetes objects Backup CR #2 for PV data Restore (action, target cluster): When you run a migration plan, the MigMigration CR creates two Velero restore CRs on the target cluster: Restore CR #1 (using Backup CR #2) for PV data Restore CR #2 (using Backup CR #1) for Kubernetes objects 11.2. Migration Toolkit for Containers custom resource manifests Migration Toolkit for Containers (MTC) uses the following custom resource (CR) manifests for migrating applications. 11.2.1. DirectImageMigration The DirectImageMigration CR copies images directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2 1 One or more namespaces containing images to be migrated. By default, the destination namespace has the same name as the source namespace. 2 Source namespace mapped to a destination namespace with a different name. 11.2.2. DirectImageStreamMigration The DirectImageStreamMigration CR copies image stream references directly from the source cluster to the destination cluster. apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace> 11.2.3. DirectVolumeMigration The DirectVolumeMigration CR copies persistent volumes (PVs) directly from the source cluster to the destination cluster. 
apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration 1 Set to true to create namespaces for the PVs on the destination cluster. 2 Set to true to delete DirectVolumeMigrationProgress CRs after migration. The default is false so that DirectVolumeMigrationProgress CRs are retained for troubleshooting. 3 Update the cluster name if the destination cluster is not the host cluster. 4 Specify one or more PVCs to be migrated. 11.2.4. DirectVolumeMigrationProgress The DirectVolumeMigrationProgress CR shows the progress of the DirectVolumeMigration CR. apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: "1.0" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration 11.2.5. MigAnalytic The MigAnalytic CR collects the number of images, Kubernetes resources, and the persistent volume (PV) capacity from an associated MigPlan CR. You can configure the data that it collects. apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration 1 Optional: Returns the number of images. 2 Optional: Returns the number, kind, and API version of the Kubernetes resources. 3 Optional: Returns the PV capacity. 4 Returns a list of image names. The default is false so that the output is not excessively long. 5 Optional: Specify the maximum number of image names to return if listImages is true . 11.2.6. MigCluster The MigCluster CR defines a host, local, or remote cluster. apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: "1.0" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 # The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 # The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 # The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config 1 Update the cluster name if the migration-controller pod is not running on this cluster. 2 The migration-controller pod runs on this cluster if true . 3 Microsoft Azure only: Specify the resource group. 4 Optional: If you created a certificate bundle for self-signed CA certificates and if the insecure parameter value is false , specify the base64-encoded certificate bundle. 5 Set to true to disable SSL verification. 6 Set to true to validate the cluster. 7 Set to true to restart the Restic pods on the source cluster after the Stage pods are created. 
8 Remote cluster and direct image migration only: Specify the exposed secure registry path. 9 Remote cluster only: Specify the URL. 10 Remote cluster only: Specify the name of the Secret object. 11.2.7. MigHook The MigHook CR defines a migration hook that runs custom code at a specified stage of the migration. You can create up to four migration hooks. Each hook runs during a different phase of the migration. You can configure the hook name, runtime duration, a custom image, and the cluster where the hook will run. The migration phases and namespaces of the hooks are configured in the MigPlan CR. apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7 1 Optional: A unique hash is appended to the value for this parameter so that each migration hook has a unique name. You do not need to specify the value of the name parameter. 2 Specify the migration hook name, unless you specify the value of the generateName parameter. 3 Optional: Specify the maximum number of seconds that a hook can run. The default is 1800 . 4 The hook is a custom image if true . The custom image can include Ansible or it can be written in a different programming language. 5 Specify the custom image, for example, quay.io/konveyor/hook-runner:latest . Required if custom is true . 6 Base64-encoded Ansible playbook. Required if custom is false . 7 Specify the cluster on which the hook will run. Valid values are source or destination . 11.2.8. MigMigration The MigMigration CR runs a MigPlan CR. You can configure a MigMigration CR to run a stage or incremental migration, to cancel a migration in progress, or to roll back a completed migration. apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration 1 Set to true to cancel a migration in progress. 2 Set to true to roll back a completed migration. 3 Set to true to run a stage migration. Data is copied incrementally and the pods on the source cluster are not stopped. 4 Set to true to stop the application during migration. The pods on the source cluster are scaled to 0 after the Backup stage. 5 Set to true to retain the labels and annotations applied during the migration. 6 Set to true to check the status of the migrated pods on the destination cluster and to return the names of pods that are not in a Running state. 11.2.9. MigPlan The MigPlan CR defines the parameters of a migration plan. You can configure destination namespaces, hook phases, and direct or indirect migration. Note By default, a destination namespace has the same name as the source namespace. If you configure a different destination namespace, you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges are copied during migration. 
apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: "1.0" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12 1 The migration has completed if true . You cannot create another MigMigration CR for this MigPlan CR. 2 Optional: You can specify up to four migration hooks. Each hook must run during a different migration phase. 3 Optional: Specify the namespace in which the hook will run. 4 Optional: Specify the migration phase during which a hook runs. One hook can be assigned to one phase. Valid values are PreBackup , PostBackup , PreRestore , and PostRestore . 5 Optional: Specify the name of the MigHook CR. 6 Optional: Specify the namespace of MigHook CR. 7 Optional: Specify a service account with cluster-admin privileges. 8 Direct image migration is disabled if true . Images are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 9 Direct volume migration is disabled if true . PVs are copied from the source cluster to the replication repository and from the replication repository to the destination cluster. 10 Specify one or more source namespaces. If you specify only the source namespace, the destination namespace is the same. 11 Specify the destination namespace if it is different from the source namespace. 12 The MigPlan CR is validated if true . 11.2.10. MigStorage The MigStorage CR describes the object storage for the replication repository. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Storage, Multi-Cloud Object Gateway, and generic S3-compatible cloud storage are supported. AWS and the snapshot copy method have additional parameters. apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: "1.0" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11 1 Specify the storage provider. 2 Snapshot copy method only: Specify the storage provider. 3 AWS only: Specify the bucket name. 4 AWS only: Specify the bucket region, for example, us-east-1 . 5 Specify the name of the Secret object that you created for the storage. 6 AWS only: If you are using the AWS Key Management Service, specify the unique identifier of the key. 7 AWS only: If you granted public access to the AWS bucket, specify the bucket URL. 8 AWS only: Specify the AWS signature version for authenticating requests to the bucket, for example, 4 . 
9 Snapshot copy method only: Specify the geographical region of the clusters. 10 Snapshot copy method only: Specify the name of the Secret object that you created for the storage. 11 Set to true to validate the cluster. 11.3. Logs and debugging tools This section describes logs and debugging tools that you can use for troubleshooting. 11.3.1. Viewing migration plan resources You can view migration plan resources to monitor a running migration or to troubleshoot a failed migration by using the MTC web console and the command line interface (CLI). Procedure In the MTC web console, click Migration Plans . Click the Migrations number next to a migration plan to view the Migrations page. Click a migration to view the Migration details . Expand Migration resources to view the migration resources and their status in a tree view. Note To troubleshoot a failed migration, start with a high-level resource that has failed and then work down the resource tree towards the lower-level resources. Click the Options menu next to a resource and select one of the following options: Copy oc describe command copies the command to your clipboard. Log in to the relevant cluster and then run the command. The conditions and events of the resource are displayed in YAML format. Copy oc logs command copies the command to your clipboard. Log in to the relevant cluster and then run the command. If the resource supports log filtering, a filtered log is displayed. View JSON displays the resource data in JSON format in a web browser. The data is the same as the output for the oc get <resource> command. 11.3.2. Viewing a migration plan log You can view an aggregated log for a migration plan. You use the MTC web console to copy a command to your clipboard and then run the command from the command line interface (CLI). The command displays the filtered logs of the following pods: Migration Controller Velero Restic Rsync Stunnel Registry Procedure In the MTC web console, click Migration Plans . Click the Migrations number next to a migration plan. Click View logs . Click the Copy icon to copy the oc logs command to your clipboard. Log in to the relevant cluster and enter the command on the CLI. The aggregated log for the migration plan is displayed. 11.3.3. Using the migration log reader You can use the migration log reader to display a single filtered view of all the migration logs. Procedure Get the mig-log-reader pod: USD oc -n openshift-migration get pods | grep log Enter the following command to display a single migration log: USD oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1 1 The -c plain option displays the log without colors. 11.3.4. Accessing performance metrics The MigrationController custom resource (CR) records metrics and pulls them into on-cluster monitoring storage. You can query the metrics by using Prometheus Query Language (PromQL) to diagnose migration performance issues. All metrics are reset when the Migration Controller pod restarts. You can access the performance metrics and run queries by using the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Observe Metrics . Enter a PromQL query, select a time window to display, and click Run Queries . If your web browser does not display all the results, use the Prometheus console. 11.3.4.1. Provided metrics The MigrationController custom resource (CR) provides metrics for the MigMigration CR count and for its API requests. 11.3.4.1.1.
cam_app_workload_migrations This metric is a count of MigMigration CRs over time. It is useful for viewing alongside the mtc_client_request_count and mtc_client_request_elapsed metrics to collate API request information with migration status changes. This metric is included in Telemetry. Table 11.1. cam_app_workload_migrations metric Queryable label name Sample label values Label description status running , idle , failed , completed Status of the MigMigration CR type stage, final Type of the MigMigration CR 11.3.4.1.2. mtc_client_request_count This metric is a cumulative count of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 11.2. mtc_client_request_count metric Queryable label name Sample label values Label description cluster https://migcluster-url:443 Cluster that the request was issued against component MigPlan , MigCluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes kind the request was issued for 11.3.4.1.3. mtc_client_request_elapsed This metric is a cumulative latency, in milliseconds, of Kubernetes API requests that MigrationController issued. It is not included in Telemetry. Table 11.3. mtc_client_request_elapsed metric Queryable label name Sample label values Label description cluster https://cluster-url.com:443 Cluster that the request was issued against component migplan , migcluster Sub-controller API that issued request function (*ReconcileMigPlan).Reconcile Function that the request was issued from kind SecretList , Deployment Kubernetes resource that the request was issued for 11.3.4.1.4. Useful queries The table lists some helpful queries that can be used for monitoring performance. Table 11.4. Useful queries Query Description mtc_client_request_count Number of API requests issued, sorted by request type sum(mtc_client_request_count) Total number of API requests issued mtc_client_request_elapsed API request latency, sorted by request type sum(mtc_client_request_elapsed) Total latency of API requests sum(mtc_client_request_elapsed) / sum(mtc_client_request_count) Average latency of API requests mtc_client_request_elapsed / mtc_client_request_count Average latency of API requests, sorted by request type cam_app_workload_migrations{status="running"} * 100 Count of running migrations, multiplied by 100 for easier viewing alongside request counts 11.3.5. Using the must-gather tool You can collect logs, metrics, and information about MTC custom resources by using the must-gather tool. The must-gather data must be attached to all customer cases. You can collect data for a one-hour or a 24-hour period and view the data with the Prometheus console. Prerequisites You must be logged in to the OpenShift Container Platform cluster as a user with the cluster-admin role. You must have the OpenShift CLI ( oc ) installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command for one of the following data collection options: To collect data for the past hour, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 This command saves the data as the must-gather/must-gather.tar.gz file. You can upload this file to a support case on the Red Hat Customer Portal .
To collect data for the past 24 hours, run the following command: USD oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump This operation can take a long time. This command saves the data as the must-gather/metrics/prom_data.tar.gz file. 11.3.6. Debugging Velero resources with the Velero CLI tool You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool. Syntax Use the oc exec command to run a Velero CLI command: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> <command> <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql Help option Use the velero --help option to list all Velero CLI commands: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ --help Describe command Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> describe <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql The following types of restore errors and warnings are shown in the output of a velero describe request: Velero : A list of messages related to the operation of Velero itself, for example, messages related to connecting to the cloud, reading a backup file, and so on Cluster : A list of messages related to backing up or restoring cluster-scoped resources Namespaces : A list of list of messages related to backing up or restoring resources stored in namespaces One or more errors in one of these categories results in a Restore operation receiving the status of PartiallyFailed and not Completed . Warnings do not lead to a change in the completion status. Important For resource-specific errors, that is, Cluster and Namespaces errors, the restore describe --details output includes a resource list that lists all resources that Velero succeeded in restoring. For any resource that has such an error, check to see if the resource is actually in the cluster. If there are Velero errors, but no resource-specific errors, in the output of a describe command, it is possible that the restore completed without any actual problems in restoring workloads, but carefully validate post-restore applications. For example, if the output contains PodVolumeRestore or node agent-related errors, check the status of PodVolumeRestores and DataDownloads . If none of these are failed or still running, then volume data might have been fully restored. Logs command Use the velero logs command to retrieve the logs of a Backup or Restore CR: USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ <backup_restore_cr> logs <cr_name> Example USD oc -n openshift-migration exec deployment/velero -c velero -- ./velero \ restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf 11.3.7. Debugging a partial migration failure You can debug a partial migration failure warning message by using the Velero CLI to examine the Restore custom resource (CR) logs. A partial failure occurs when Velero encounters an issue that does not cause a migration to fail. 
For example, if a custom resource definition (CRD) is missing or if there is a discrepancy between CRD versions on the source and target clusters, the migration completes but the CR is not created on the target cluster. Velero logs the issue as a partial failure and then processes the rest of the objects in the Backup CR. Procedure Check the status of a MigMigration CR: USD oc get migmigration <migmigration> -o yaml Example output status: conditions: - category: Warn durable: true lastTransitionTime: "2021-01-26T20:48:40Z" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: "True" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: "2021-01-26T20:48:42Z" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: "True" type: SucceededWithWarnings Check the status of the Restore CR by using the Velero describe command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore describe <restore> Example output Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource Check the Restore CR logs by using the Velero logs command: USD oc -n {namespace} exec deployment/velero -c velero -- ./velero \ restore logs <restore> Example output time="2021-01-26T20:48:37Z" level=info msg="Attempting to restore migration-example: migration-example" logSource="pkg/restore/restore.go:1107" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time="2021-01-26T20:48:37Z" level=info msg="error restoring migration-example: the server could not find the requested resource" logSource="pkg/restore/restore.go:1170" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf The Restore CR log error message, the server could not find the requested resource , indicates the cause of the partially failed migration. 11.3.8. Using MTC custom resources for troubleshooting You can check the following Migration Toolkit for Containers (MTC) custom resources (CRs) to troubleshoot a failed migration: MigCluster MigStorage MigPlan BackupStorageLocation The BackupStorageLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 VolumeSnapshotLocation The VolumeSnapshotLocation CR contains a migrationcontroller label to identify the MTC instance that created the CR: labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93 MigMigration Backup MTC changes the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. The Backup CR contains an openshift.io/orig-reclaim-policy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Restore Procedure List the MigMigration CRs in the openshift-migration namespace: USD oc get migmigration -n openshift-migration Example output NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s Inspect the MigMigration CR: USD oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration The output is similar to the following examples. 
MigMigration example output name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none> Velero backup CR #2 example output that describes the PV data apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: "2019-08-29T01:03:15Z" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: "87313" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: "2019-08-29T01:02:36Z" errors: 0 expiration: "2019-09-28T01:02:35Z" phase: Completed startTimestamp: "2019-08-29T01:02:35Z" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0 Velero restore CR #2 example output that describes the Kubernetes resources apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: "true" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: "2019-08-28T00:09:49Z" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: "82329" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes 
- events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: "" phase: Completed validationErrors: null warnings: 15 11.4. Common issues and concerns This section describes common issues and concerns that can cause issues during migration. 11.4.1. Direct volume migration does not complete If direct volume migration does not complete, the target cluster might not have the same node-selector annotations as the source cluster. Migration Toolkit for Containers (MTC) migrates namespaces with all annotations to preserve security context constraints and scheduling requirements. During direct volume migration, MTC creates Rsync transfer pods on the target cluster in the namespaces that were migrated from the source cluster. If a target cluster namespace does not have the same annotations as the source cluster namespace, the Rsync transfer pods cannot be scheduled. The Rsync pods remain in a Pending state. You can identify and fix this issue by performing the following procedure. Procedure Check the status of the MigMigration CR: USD oc describe migmigration <pod> -n openshift-migration The output includes the following status message: Example output Some or all transfer pods are not running for more than 10 mins on destination cluster On the source cluster, obtain the details of a migrated namespace: USD oc get namespace <namespace> -o yaml 1 1 Specify the migrated namespace. On the target cluster, edit the migrated namespace: USD oc edit namespace <namespace> Add the missing openshift.io/node-selector annotations to the migrated namespace as in the following example: apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: "region=east" ... Run the migration plan again. 11.4.2. Error messages and resolutions This section describes common error messages you might encounter with the Migration Toolkit for Containers (MTC) and how to resolve their underlying causes. 11.4.2.1. CA certificate error displayed when accessing the MTC console for the first time If a CA certificate error message is displayed the first time you try to access the MTC console, the likely cause is the use of self-signed CA certificates in one of the clusters. To resolve this issue, navigate to the oauth-authorization-server URL displayed in the error message and accept the certificate. To resolve this issue permanently, add the certificate to the trust store of your web browser. If an Unauthorized message is displayed after you have accepted the certificate, navigate to the MTC console and refresh the web page. 11.4.2.2. OAuth timeout error in the MTC console If a connection has timed out message is displayed in the MTC console after you have accepted a self-signed certificate, the causes are likely to be the following: Interrupted network access to the OAuth server Interrupted network access to the OpenShift Container Platform console Proxy configuration that blocks access to the oauth-authorization-server URL. See MTC console inaccessible because of OAuth timeout error for details. To determine the cause of the timeout: Inspect the MTC console web page with a browser web inspector. Check the Migration UI pod log for errors. 11.4.2.3. 
Certificate signed by unknown authority error If you use a self-signed certificate to secure a cluster or a replication repository for the MTC, certificate verification might fail with the following error message: Certificate signed by unknown authority . You can create a custom CA certificate bundle file and upload it in the MTC web console when you add a cluster or a replication repository. Procedure Download a CA certificate from a remote endpoint and save it as a CA bundle file: USD echo -n | openssl s_client -connect <host_FQDN>:<port> \ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2 1 Specify the host FQDN and port of the endpoint, for example, api.my-cluster.example.com:6443 . 2 Specify the name of the CA bundle file. 11.4.2.4. Backup storage location errors in the Velero pod log If a Velero Backup custom resource contains a reference to a backup storage location (BSL) that does not exist, the Velero pod log might display the following error messages: USD oc logs <Velero_Pod> -n openshift-migration Example output level=error msg="Error checking repository for stale locks" error="error getting backup storage location: BackupStorageLocation.velero.io \"ts-dpa-1\" not found" error.file="/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259" You can ignore these error messages. A missing BSL cannot cause a migration to fail. 11.4.2.5. Pod volume backup timeout error in the Velero pod log If a migration fails because Restic times out, the following error is displayed in the Velero pod log. level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete" error.file="/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165" error.function="github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes" group=v1 The default value of restic_timeout is one hour. You can increase this parameter for large migrations, keeping in mind that a higher value may delay the return of error messages. Procedure In the OpenShift Container Platform web console, navigate to Operators Installed Operators . Click Migration Toolkit for Containers Operator . In the MigrationController tab, click migration-controller . In the YAML tab, update the following parameter value: spec: restic_timeout: 1h 1 1 Valid units are h (hours), m (minutes), and s (seconds), for example, 3h30m15s . Click Save . 11.4.2.6. Restic verification errors in the MigMigration custom resource If data verification fails when migrating a persistent volume with the file system data copy method, the following error is displayed in the MigMigration CR. Example output status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: "True" type: ResticVerifyErrors 2 1 The error message identifies the Restore CR name. 2 ResticVerifyErrors is a general error warning type that includes verification errors. Note A data verification error does not cause the migration process to fail. You can check the Restore CR to identify the source of the data verification error. Procedure Log in to the target cluster. View the Restore CR: USD oc describe <registry-example-migration-rvwcm> -n openshift-migration The output identifies the persistent volume with PodVolumeRestore errors. 
Example output status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration View the PodVolumeRestore CR: USD oc describe <migration-example-rvwcm-98t49> The output identifies the Restic pod that logged the errors. Example output completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 ... resticPod: <restic-nr2v5> View the Restic pod log to locate the errors: USD oc logs -f <restic-nr2v5> 11.4.2.7. Restic permission error when migrating from NFS storage with root_squash enabled If you are migrating data from NFS storage and root_squash is enabled, Restic maps to nfsnobody and does not have permission to perform the migration. The following error is displayed in the Restic pod log. Example output backup=openshift-migration/<backup_id> controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied" error.file="/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280" error.function="github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup" logSource="pkg/controller/pod_volume_backup_controller.go:280" name=<backup_id> namespace=openshift-migration You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the MigrationController CR manifest. Procedure Create a supplemental group for Restic on the NFS storage. Set the setgid bit on the NFS directories so that group ownership is inherited. Add the restic_supplemental_groups parameter to the MigrationController CR manifest on the source and target clusters: spec: restic_supplemental_groups: <group_id> 1 1 Specify the supplemental group ID. Wait for the Restic pods to restart so that the changes are applied. 11.4.3. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue. 11.4.3.1. Diagnosing the need for the Skip SELinux relabel workaround Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs. Example kubelet log kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29 11.4.3.2. Resolving using the Skip SELinux relabel workaround To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR). Example MigrationController CR apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: "" cluster_name: host mig_namespace_limit: "10" mig_pod_limit: "100" mig_pv_limit: "100" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3 1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers ( spc_t selinux context ). Valid settings are true or false . 11.5. Rolling back a migration You can roll back a migration by using the MTC web console or the CLI. You can also roll back a migration manually . 11.5.1. Rolling back a migration by using the MTC web console You can roll back a migration by using the Migration Toolkit for Containers (MTC) web console. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure In the MTC web console, click Migration plans . Click the Options menu beside a migration plan and select Rollback under Migration . Click Rollback and wait for rollback to complete. In the migration plan details, Rollback succeeded is displayed. Verify that rollback was successful in the OpenShift Container Platform web console of the source cluster: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. 
Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volume is correctly provisioned. 11.5.2. Rolling back a migration from the command line interface You can roll back a migration by creating a MigMigration custom resource (CR) from the command line interface. Note The following resources remain in the migrated namespaces for debugging after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. If you later run the same migration plan successfully, the resources from the failed migration are deleted automatically. If your application was stopped during a failed migration, you must roll back the migration to prevent data corruption in the persistent volume. Rollback is not required if the application was not stopped during migration because the original application is still running on the source cluster. Procedure Create a MigMigration CR based on the following example: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: "1.0" name: <migmigration> namespace: openshift-migration spec: ... rollback: true ... migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF 1 Specify the name of the associated MigPlan CR. In the MTC web console, verify that the migrated project resources have been removed from the target cluster. Verify that the migrated project resources are present in the source cluster and that the application is running. 11.5.3. Rolling back a migration manually You can roll back a failed migration manually by deleting the stage pods and unquiescing the application. If you run the same migration plan successfully, the resources from the failed migration are deleted automatically. Note The following resources remain in the migrated namespaces after a failed direct volume migration (DVM): Config maps (source and destination clusters) Secret objects (source and destination clusters) Rsync CRs (source cluster) These resources do not affect rollback. You can delete them manually. Procedure Delete the stage pods on all clusters: USD oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1 1 Namespaces specified in the MigPlan CR. Unquiesce the application on the source cluster by scaling the replicas to their premigration number: USD oc scale deployment <deployment> --replicas=<premigration_replicas> The migration.openshift.io/preQuiesceReplicas annotation in the Deployment CR displays the premigration number of replicas: apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: "1" migration.openshift.io/preQuiesceReplicas: "1" Verify that the application pods are running on the source cluster: USD oc get pod -n <namespace> Additional resources Deleting Operators from a cluster using the web console
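Putting the manual rollback steps together, the following is a minimal sketch that deletes the stage pods, reads the premigration replica count from the migration.openshift.io/preQuiesceReplicas annotation, and scales the application back up. The namespace my-namespace and the deployment my-app are placeholders, and the stage pods are deleted by label selector rather than by command substitution:

USD oc delete pod -l migration.openshift.io/is-stage-pod -n my-namespace
USD oc get deployment my-app -n my-namespace -o jsonpath='{.metadata.annotations.migration\.openshift\.io/preQuiesceReplicas}'
USD oc scale deployment my-app -n my-namespace --replicas=1
USD oc get pod -n my-namespace

The second command prints the premigration replica count, 1 in the example annotation above; pass that value to the --replicas option of oc scale.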
[ "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true 1 analyzeK8SResources: true 2 analyzePVCapacity: true 3 listImages: false 4 listImagesLimit: 50 5 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. 
exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.8 -- /usr/bin/gather_metrics_dump", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore 
openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "oc logs <Velero_Pod> -n openshift-migration", "level=error msg=\"Error checking repository for stale locks\" 
error=\"error getting backup storage location: BackupStorageLocation.velero.io \\\"ts-dpa-1\\\" not found\" error.file=\"/remote-source/src/github.com/vmware-tanzu/velero/pkg/restic/repository_manager.go:259\"", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function=\"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. 
If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/migration_toolkit_for_containers/troubleshooting-mtc
Chapter 5. Migration
Chapter 5. Migration This chapter provides information on migrating to versions of components included in Red Hat Software Collections 3.8. 5.1. Migrating to MariaDB 10.5 The rh-mariadb105 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mariadb105 Software Collection does not conflict with the mysql or mariadb packages from the core systems. Unless the *-syspaths packages are installed (see below), it is possible to install the rh-mariadb105 Software Collection together with the mysql or mariadb packages. It is also possible to run both versions at the same time, however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Additionally, it is possible to install the rh-mariadb105 Software Collection while the rh-mariadb103 Collection is still installed and even running. The rh-mariadb105 Software Collection includes the rh-mariadb105-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mariadb105*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mariadb105* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb103 and rh-mysql80 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . The recommended migration path from MariaDB 5.5 to MariaDB 10.5 is to upgrade to MariaDB 10.0 first, and then upgrade by one version successively. For details, see instructions in earlier Red Hat Software Collections Release Notes: Migrating to MariaDB 10.0 , Migrating to MariaDB 10.1 , Migrating to MariaDB 10.2 , and Migrating to MariaDB 10.3 . Note that MariaDB 10.4 is not available as a Software Collection, so you must migrate directly from rh-mariadb103 to rh-mariadb105 . Note The rh-mariadb105 Software Collection supports neither mounting over NFS nor dynamical registering using the scl register command. 5.1.1. Notable Differences Between the rh-mariadb103 and rh-mariadb105 Software Collections Significant changes between MariaDB 10.3 and MariaDB 10.5 include: MariaDB now uses the unix_socket authentication plug-in by default. The plug-in enables users to use operating system credentials when connecting to MariaDB through the local Unix socket file. MariaDB adds mariadb-* named binaries and mysql* symbolic links pointing to the mariadb-* binaires. For example, the mysqladmin , mysqlaccess , and mysqlshow symlinks point to the mariadb-admin , mariadb-access , and mariadb-show binaries, respectively. The SUPER privilege has been split into several privileges to better align with each user role. As a result, certain statements have changed required privileges. In parallel replication, the slave_parallel_mode now defaults to optimistic . In the InnoDB storage engine, defaults of the following variables have been changed: innodb_adaptive_hash_index to OFF and innodb_checksum_algorithm to full_crc32 . MariaDB now uses the libedit implementation of the underlying software managing the MariaDB command history (the .mysql_history file) instead of the previously used readline library. This change impacts users working directly with the .mysql_history file. 
Note that .mysql_history is a file managed by the MariaDB or MySQL applications, and users should not work with the file directly. The human-readable appearance is coincidental. Note To increase security, you can consider not maintaining a history file. To disable the command history recording: Remove the .mysql_history file if it exists. Use either of the following approaches: Set the MYSQL_HISTFILE variable to /dev/null and include this setting in any of your shell's startup files. Change the .mysql_history file to a symbolic link to /dev/null : ln -s /dev/null USDHOME/.mysql_history MariaDB Galera Cluster has been upgraded to version 4 with the following notable changes: Galera adds a new streaming replication feature, which supports replicating transactions of unlimited size. During an execution of streaming replication, a cluster replicates a transaction in small fragments. Galera now fully supports Global Transaction ID (GTID). The default value for the wsrep_on option in the /etc/my.cnf.d/galera.cnf file has changed from 1 to 0 to prevent end users from starting wsrep replication without configuring required additional options. Changes to the PAM plug-in in MariaDB 10.5 include: MariaDB 10.5 adds a new version of the Pluggable Authentication Modules (PAM) plug-in. The PAM plug-in version 2.0 performs PAM authentication using a separate setuid root helper binary, which enables MariaDB to utilize additional PAM modules. The helper binary can be executed only by users in the mysql group. By default, the group contains only the mysql user. Red Hat recommends that administrators do not add more users to the mysql group to prevent password-guessing attacks without throttling or logging through this helper utility. In MariaDB 10.5 , the Pluggable Authentication Modules (PAM) plug-in and its related files have been moved to a new subpackage, mariadb-pam . As a result, no new setuid root binary is introduced on systems that do not use PAM authentication for MariaDB . The rh-mariadb105-mariadb-pam package contains both PAM plug-in versions: version 2.0 is the default, and version 1.0 is available as the auth_pam_v1 shared object library. The rh-mariadb105-mariadb-pam package is not installed by default with the MariaDB server. To make the PAM authentication plug-in available in MariaDB 10.5 , install the rh-mariadb105-mariadb-pam package manually. For more information, see the upstream documentation about changes in MariaDB 10.4 and changes in MariaDB 10.5 . See also upstream information about upgrading to MariaDB 10.4 and upgrading to MariaDB 10.5 . 5.1.2. Upgrading from the rh-mariadb103 to the rh-mariadb105 Software Collection Important Prior to upgrading, back up all your data, including any MariaDB databases. Stop the rh-mariadb103 database server if it is still running. Before stopping the server, set the innodb_fast_shutdown option to 0 , so that InnoDB performs a slow shutdown, including a full purge and insert buffer merge. Read more about this option in the upstream documentation . This operation can take a longer time than in case of a normal shutdown. 
mysql -uroot -p -e "SET GLOBAL innodb_fast_shutdown = 0" Stop the rh-mariadb103 server: systemctl stop rh-mariadb103-mariadb.service Install the rh-mariadb105 Software Collection, including the subpackage providing the mysql_upgrade utility: yum install rh-mariadb105-mariadb-server rh-mariadb105-mariadb-server-utils Note that it is possible to install the rh-mariadb105 Software Collection while the rh-mariadb103 Software Collection is still installed because these Collections do not conflict. Inspect configuration of rh-mariadb105 , which is stored in the /etc/opt/rh/rh-mariadb105/my.cnf file and the /etc/opt/rh/rh-mariadb105/my.cnf.d/ directory. Compare it with configuration of rh-mariadb103 stored in /etc/opt/rh/rh-mariadb103/my.cnf and /etc/opt/rh/rh-mariadb103/my.cnf.d/ and adjust it if necessary. All data of the rh-mariadb103 Software Collection is stored in the /var/opt/rh/rh-mariadb103/lib/mysql/ directory unless configured differently. Copy the whole content of this directory to /var/opt/rh/rh-mariadb105/lib/mysql/ . You can move the content but remember to back up your data before you continue to upgrade. Make sure the data is owned by the mysql user and SELinux context is correct. Start the rh-mariadb105 database server: systemctl start rh-mariadb105-mariadb.service Perform the data migration. Note that running the mysql_upgrade command is required due to upstream changes introduced in MDEV-14637 . scl enable rh-mariadb105 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password: scl enable rh-mariadb105 -- mysql_upgrade -p Note that when the rh-mariadb105*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb103 and rh-mysql80 Software Collections. 5.2. Migrating to MySQL 8.0 The rh-mysql80 Software Collection is available for Red Hat Enterprise Linux 7, which includes MariaDB 5.5 as the default MySQL implementation. The rh-mysql80 Software Collection conflicts neither with the mysql or mariadb packages from the core systems nor with the rh-mysql* or rh-mariadb* Software Collections, unless the *-syspaths packages are installed (see below). It is also possible to run multiple versions at the same time; however, the port number and the socket in the my.cnf files need to be changed to prevent these specific resources from conflicting. Note that it is possible to upgrade to MySQL 8.0 only from MySQL 5.7 . If you need to upgrade from an earlier version, upgrade to MySQL 5.7 first. For instructions, see Migration to MySQL 5.7 . 5.2.1. Notable Differences Between MySQL 5.7 and MySQL 8.0 Differences Specific to the rh-mysql80 Software Collection The MySQL 8.0 server provided by the rh-mysql80 Software Collection is configured to use mysql_native_password as the default authentication plug-in because client tools and libraries in Red Hat Enterprise Linux 7 are incompatible with the caching_sha2_password method, which is used by default in the upstream MySQL 8.0 version. To change the default authentication plug-in to caching_sha2_password , edit the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-default-authentication-plugin.cnf file as follows: For more information about the caching_sha2_password authentication plug-in, see the upstream documentation . 
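The file referenced above is expected to set the default_authentication_plugin server option. A sketch of the relevant lines, not the packaged file contents, looks like this:

[mysqld]
default_authentication_plugin=caching_sha2_password

Restart the rh-mysql80-mysqld service after changing the file so that the new default takes effect.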
The rh-mysql80 Software Collection includes the rh-mysql80-syspaths package, which installs the rh-mysql80-mysql-config-syspaths , rh-mysql80-mysql-server-syspaths , and rh-mysql80-mysql-syspaths packages. These subpackages provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-mysql80*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-mysql80* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb103 and rh-mariadb105 Software Collections. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . General Changes in MySQL 8.0 Binary logging is enabled by default during the server startup. The log_bin system variable is now set to ON by default even if the --log-bin option has not been specified. To disable binary logging, specify the --skip-log-bin or --disable-log-bin option at startup. For a CREATE FUNCTION statement to be accepted, at least one of the DETERMINISTIC , NO SQL , or READS SQL DATA keywords must be specified explicitly, otherwise an error occurs. Certain features related to account management have been removed. Namely, using the GRANT statement to modify account properties other than privilege assignments, such as authentication, SSL, and resource-limit, is no longer possible. To establish the mentioned properties at account-creation time, use the CREATE USER statement. To modify these properties, use the ALTER USER statement. Certain SSL-related options have been removed on the client-side. Use the --ssl-mode=REQUIRED option instead of --ssl=1 or --enable-ssl . Use the --ssl-mode=DISABLED option instead of --ssl=0 , --skip-ssl , or --disable-ssl . Use the --ssl-mode=VERIFY_IDENTITY option instead of --ssl-verify-server-cert options. Note that these options remain unchanged on the server side. The default character set has been changed from latin1 to utf8mb4 . The utf8 character set is currently an alias for utf8mb3 but in the future, it will become a reference to utf8mb4 . To prevent ambiguity, specify utf8mb4 explicitly for character set references instead of utf8 . Setting user variables in statements other than SET has been deprecated. The log_syslog variable, which previously configured error logging to the system logs, has been removed. Certain incompatible changes to spatial data support have been introduced. The deprecated ASC or DESC qualifiers for GROUP BY clauses have been removed. To produce a given sort order, provide an ORDER BY clause. For detailed changes in MySQL 8.0 compared to earlier versions, see the upstream documentation: What Is New in MySQL 8.0 and Changes Affecting Upgrades to MySQL 8.0 . 5.2.2. Upgrading to the rh-mysql80 Software Collection Important Prior to upgrading, back-up all your data, including any MySQL databases. Install the rh-mysql80 Software Collection. yum install rh-mysql80-mysql-server Inspect the configuration of rh-mysql80 , which is stored in the /etc/opt/rh/rh-mysql80/my.cnf file and the /etc/opt/rh/rh-mysql80/my.cnf.d/ directory. Compare it with the configuration of rh-mysql57 stored in /etc/opt/rh/rh-mysql57/my.cnf and /etc/opt/rh/rh-mysql57/my.cnf.d/ and adjust it if necessary. Stop the rh-mysql57 database server, if it is still running. 
systemctl stop rh-mysql57-mysqld.service All data of the rh-mysql57 Software Collection is stored in the /var/opt/rh/rh-mysql57/lib/mysql/ directory. Copy the whole content of this directory to /var/opt/rh/rh-mysql80/lib/mysql/ . You can also move the content but remember to back up your data before you continue to upgrade. Start the rh-mysql80 database server. systemctl start rh-mysql80-mysqld.service Perform the data migration. scl enable rh-mysql80 mysql_upgrade If the root user has a non-empty password defined (it should have a password defined), it is necessary to call the mysql_upgrade utility with the -p option and specify the password. scl enable rh-mysql80 -- mysql_upgrade -p Note that when the rh-mysql80*-syspaths packages are installed, the scl enable command is not required. However, the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system and from the rh-mariadb103 and rh-mariadb105 Software Collections. 5.3. Migrating to PostgreSQL 13 Red Hat Software Collections 3.8 is distributed with PostgreSQL 13 , available only for Red Hat Enterprise Linux 7. The rh-postgresql13 Software Collection can be safely installed on the same machine in parallel with the base Red Hat Enterprise Linux system version of PostgreSQL or any PostgreSQL Software Collection. It is also possible to run more than one version of PostgreSQL on a machine at the same time, but you need to use different ports or IP addresses and adjust SELinux policy. The rh-postgresql13 Software Collection includes the rh-postgresql13-syspaths package, which installs packages that provide system-wide wrappers for binaries, scripts, manual pages, and other. After installing the rh-postgresql13*-syspaths packages, users are not required to use the scl enable command for correct functioning of the binaries and scripts provided by the rh-postgresql13* packages. Note that the *-syspaths packages conflict with the corresponding packages from the base Red Hat Enterprise Linux system. To find out more about syspaths , see the Red Hat Software Collections Packaging Guide . Important Before migrating to PostgreSQL 13 , see the upstream compatibility notes for PostgreSQL 13 . In case of upgrading the PostgreSQL database in a container, see the container-specific instructions . The following table provides an overview of different paths in a Red Hat Enterprise Linux 7 system version of PostgreSQL provided by the postgresql package, and in the rh-postgresql12 and rh-postgresql13 Software Collections. Table 5.1. 
Diferences in the PostgreSQL paths Content postgresql rh-postgresql12 rh-postgresql13 Executables /usr/bin/ /opt/rh/rh-postgresql12/root/usr/bin/ /opt/rh/rh-postgresql13/root/usr/bin/ Libraries /usr/lib64/ /opt/rh/rh-postgresql12/root/usr/lib64/ /opt/rh/rh-postgresql13/root/usr/lib64/ Documentation /usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql/html/ /opt/rh/rh-postgresql13/root/usr/share/doc/postgresql/html/ PDF documentation /usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-docs/ /opt/rh/rh-postgresql13/root/usr/share/doc/postgresql-docs/ Contrib documentation /usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql12/root/usr/share/doc/postgresql-contrib/ /opt/rh/rh-postgresql13/root/usr/share/doc/postgresql-contrib/ Source not installed not installed not installed Data /var/lib/pgsql/data/ /var/opt/rh/rh-postgresql12/lib/pgsql/data/ /var/opt/rh/rh-postgresql13/lib/pgsql/data/ Backup area /var/lib/pgsql/backups/ /var/opt/rh/rh-postgresql12/lib/pgsql/backups/ /var/opt/rh/rh-postgresql13/lib/pgsql/backups/ Templates /usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ /opt/rh/rh-postgresql13/root/usr/share/pgsql/ Procedural Languages /usr/lib64/pgsql/ /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/ /opt/rh/rh-postgresql13/root/usr/lib64/pgsql/ Development Headers /usr/include/pgsql/ /opt/rh/rh-postgresql12/root/usr/include/pgsql/ /opt/rh/rh-postgresql13/root/usr/include/pgsql/ Other shared data /usr/share/pgsql/ /opt/rh/rh-postgresql12/root/usr/share/pgsql/ /opt/rh/rh-postgresql13/root/usr/share/pgsql/ Regression tests /usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql12/root/usr/lib64/pgsql/test/regress/ (in the -test package) /opt/rh/rh-postgresql13/root/usr/lib64/pgsql/test/regress/ (in the -test package) 5.3.1. Migrating from a Red Hat Enterprise Linux System Version of PostgreSQL to the PostgreSQL 13 Software Collection Red Hat Enterprise Linux 7 is distributed with PostgreSQL 9.2 . To migrate your data from a Red Hat Enterprise Linux system version of PostgreSQL to the rh-postgresql13 Software Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. Important Before migrating your data from a Red Hat Enterprise Linux system version of PostgreSQL to PostgreSQL 13, make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/lib/pgsql/data/ directory. Procedure 5.1. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop postgresql.service To verify that the server is not running, type: systemctl status postgresql.service Verify that the old directory /var/lib/pgsql/data/ exists: file /var/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql13/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql13/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 13 , this directory should not be present in your system. 
If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql13/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql13 -- postgresql-setup --upgrade Alternatively, you can use the /opt/rh/rh-postgresql13/root/usr/bin/postgresql-setup --upgrade command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql13-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql13-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql13 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 13 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 13 server, type as root : chkconfig rh-postgresql13-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql13/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.2. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start postgresql.service Dump all data in the PostgreSQL database into a script file. As root , type: su - postgres -c 'pg_dumpall > ~/pgdump_file.sql' Stop the old server by running the following command as root : systemctl stop postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql13 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql13-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql13 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 13 server to start automatically at boot time. To disable the old system PostgreSQL server, type the following command as root : chkconfig postgresql off To enable the PostgreSQL 13 server, type as root : chkconfig rh-postgresql13-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql13/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.3.2. Migrating from the PostgreSQL 12 Software Collection to the PostgreSQL 13 Software Collection To migrate your data from the rh-postgresql12 Software Collection to the rh-postgresql13 Collection, you can either perform a fast upgrade using the pg_upgrade tool (recommended), or dump the database data into a text file with SQL commands and import it in the new database. Note that the second method is usually significantly slower and may require manual fixes; see the PostgreSQL documentation for more information about this upgrade method. 
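Before starting either upgrade method, you can confirm which major version the existing data directory belongs to; a minimal sketch, using the rh-postgresql12 data directory path from the table above (the PG_VERSION file in a PostgreSQL data directory records its major version):
cat /var/opt/rh/rh-postgresql12/lib/pgsql/data/PG_VERSION    # prints 12 for a PostgreSQL 12 data directory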
Important Before migrating your data from PostgreSQL 12 to PostgreSQL 13 , make sure that you back up all your data, including the PostgreSQL database files, which are by default located in the /var/opt/rh/rh-postgresql12/lib/pgsql/data/ directory. Procedure 5.3. Fast Upgrade Using the pg_upgrade Tool To perform a fast upgrade of your PostgreSQL server, complete the following steps: Stop the old PostgreSQL server to ensure that the data is not in an inconsistent state. To do so, type the following at a shell prompt as root : systemctl stop rh-postgresql12-postgresql.service To verify that the server is not running, type: systemctl status rh-postgresql12-postgresql.service Verify that the old directory /var/opt/rh/rh-postgresql12/lib/pgsql/data/ exists: file /var/opt/rh/rh-postgresql12/lib/pgsql/data/ and back up your data. Verify that the new data directory /var/opt/rh/rh-postgresql13/lib/pgsql/data/ does not exist: file /var/opt/rh/rh-postgresql13/lib/pgsql/data/ If you are running a fresh installation of PostgreSQL 13 , this directory should not be present in your system. If it is, back it up by running the following command as root : mv /var/opt/rh/rh-postgresql13/lib/pgsql/data{,-scl-backup} Upgrade the database data for the new server by running the following command as root : scl enable rh-postgresql13 -- postgresql-setup --upgrade --upgrade-from=rh-postgresql12-postgresql Alternatively, you can use the /opt/rh/rh-postgresql13/root/usr/bin/postgresql-setup --upgrade --upgrade-from=rh-postgresql12-postgresql command. Note that you can use the --upgrade-from option for upgrading from different versions of PostgreSQL . The list of possible upgrade scenarios is available using the --upgrade-ids option. It is recommended that you read the resulting /var/lib/pgsql/upgrade_rh-postgresql13-postgresql.log log file to find out if any problems occurred during the upgrade. Start the new server as root : systemctl start rh-postgresql13-postgresql.service It is also advised that you run the analyze_new_cluster.sh script as follows: su - postgres -c 'scl enable rh-postgresql13 ~/analyze_new_cluster.sh' Optionally, you can configure the PostgreSQL 13 server to start automatically at boot time. To disable the old PostgreSQL 12 server, type the following command as root : chkconfig rh-postgresql12-postgreqsql off To enable the PostgreSQL 13 server, type as root : chkconfig rh-postgresql13-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql13/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. Procedure 5.4. Performing a Dump and Restore Upgrade To perform a dump and restore upgrade of your PostgreSQL server, complete the following steps: Ensure that the old PostgreSQL server is running by typing the following at a shell prompt as root : systemctl start rh-postgresql12-postgresql.service Dump all data in the PostgreSQL database into a script file. 
As root , type: su - postgres -c 'scl enable rh-postgresql12 "pg_dumpall > ~/pgdump_file.sql"' Stop the old server by running the following command as root : systemctl stop rh-postgresql12-postgresql.service Initialize the data directory for the new server as root : scl enable rh-postgresql13 -- postgresql-setup initdb Start the new server as root : systemctl start rh-postgresql13-postgresql.service Import data from the previously created SQL file: su - postgres -c 'scl enable rh-postgresql13 "psql -f ~/pgdump_file.sql postgres"' Optionally, you can configure the PostgreSQL 13 server to start automatically at boot time. To disable the old PostgreSQL 12 server, type the following command as root : chkconfig rh-postgresql12-postgresql off To enable the PostgreSQL 13 server, type as root : chkconfig rh-postgresql13-postgresql on If your configuration differs from the default one, make sure to update configuration files, especially the /var/opt/rh/rh-postgresql13/lib/pgsql/data/pg_hba.conf configuration file. Otherwise only the postgres user will be allowed to access the database. 5.4. Migrating to nginx 1.20 The root directory for the rh-nginx120 Software Collection is located in /opt/rh/rh-nginx120/root/ . The error log is stored in /var/opt/rh/rh-nginx120/log/nginx by default. Configuration files are stored in the /etc/opt/rh/rh-nginx120/nginx/ directory. Configuration files in nginx 1.20 have the same syntax and largely the same format as nginx Software Collections. Configuration files (with a .conf extension) in the /etc/opt/rh/rh-nginx120/nginx/default.d/ directory are included in the default server block configuration for port 80 . Important Before upgrading from nginx 1.18 to nginx 1.20 , back up all your data, including web pages located in the /opt/rh/nginx118/root/ tree and configuration files located in the /etc/opt/rh/nginx118/nginx/ tree. If you have made any specific changes, such as changing configuration files or setting up web applications, in the /opt/rh/nginx118/root/ tree, replicate those changes in the new /opt/rh/rh-nginx120/root/ and /etc/opt/rh/rh-nginx120/nginx/ directories, too. You can use this procedure to upgrade directly from nginx 1.16 to nginx 1.20 . Use the appropriate paths in this case. For the official nginx documentation, refer to http://nginx.org/en/docs/ . 5.5. Migrating to Redis 6 Redis 5.0 , provided by the rh-redis5 Software Collection, is mostly a strict subset of Redis 6.0 . Therefore, no major issues should occur when upgrading from version 5.0 to version 6.0. To upgrade a Redis Cluster to version 6.0, a mass restart of all the instances is needed. Compatibility Notes When a set key does not exist, the SPOP <count> command no longer returns null. In Redis 6 , the command returns an empty set in this scenario, similar to a situation when it is called with a 0 argument. For minor non-backward compatible changes, see the upstream release notes .
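For the nginx migration described above, one quick way to validate the configuration replicated into the /etc/opt/rh/rh-nginx120/nginx/ tree before switching traffic over is the built-in configuration test; a minimal sketch, assuming the rh-nginx120 collection is installed:
scl enable rh-nginx120 -- nginx -t    # parses the configuration and reports syntax errors without starting the server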
[ "[mysqld] default_authentication_plugin=caching_sha2_password" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.8_release_notes/chap-Migration
Chapter 4. Recovering applications with RWO storage
Chapter 4. Recovering applications with RWO storage Applications that use ReadWriteOnce (RWO) storage have a known behavior described in this Kubernetes issue . Because of this issue, if there is a data zone failure, any application pods in that zone mounting RWO volumes (for example, cephrbd based volumes) are stuck in Terminating status after 6-8 minutes and are not re-created on the active zone without manual intervention. Check the OpenShift Container Platform nodes with a status of NotReady . There may be an issue that prevents the nodes from communicating with the OpenShift control plane. However, the nodes may still be performing I/O operations against Persistent Volumes (PVs). If two pods are concurrently writing to the same RWO volume, there is a risk of data corruption. Ensure that processes on the NotReady node are either terminated or blocked until they are terminated. Example solutions: Use an out-of-band management system to power off a node, with confirmation, to ensure process termination. Withdraw a network route that is used by nodes at a failed site to communicate with storage. Note Before restoring service to the failed zone or nodes, confirm that all the pods with PVs have terminated successfully. To get the Terminating pods to recreate on the active zone, you can either force delete the pod or delete the finalizer on the associated PV. Once one of these two actions is completed, the application pod should recreate on the active zone and successfully mount its RWO storage. Force deleting the pod Force deletions do not wait for confirmation from the kubelet that the pod has been terminated. <PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Deleting the finalizer on the associated PV Find the associated PV for the Persistent Volume Claim (PVC) that is mounted by the Terminating pod and delete the finalizer using the oc patch command. <PV_NAME> Is the name of the PV An easy way to find the associated PV is to describe the Terminating pod. If you see a multi-attach warning, it should have the PV names in the warning (for example, pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c ). <PODNAME> Is the name of the pod <NAMESPACE> Is the project namespace Example output:
[ "oc delete pod <PODNAME> --grace-period=0 --force --namespace <NAMESPACE>", "oc patch -n openshift-storage pv/ <PV_NAME> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge", "oc describe pod <PODNAME> --namespace <NAMESPACE>", "[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 4m5s default-scheduler Successfully assigned openshift-storage/noobaa-db-pg-0 to perf1-mz8bt-worker-d2hdm Warning FailedAttachVolume 4m5s attachdetach-controller Multi-Attach error for volume \"pvc-0595a8d2-683f-443b-aee0-6e547f5f5a7c\" Volume is already exclusively attached to one node and can't be attached to another" ]
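In addition to the commands listed above, the following sketch can help locate the affected objects; <PVCNAME> and <NAMESPACE> are placeholders for your own values:
oc get pods -A | grep Terminating                                      # list pods stuck in Terminating across all namespaces
oc get pvc <PVCNAME> -n <NAMESPACE> -o jsonpath='{.spec.volumeName}'   # print the name of the PV bound to the PVC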
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/recovering_a_metro-dr_stretch_cluster/recovering-applications-with-rwo-storage
Chapter 6. Unmanaged KIE Server
Chapter 6. Unmanaged KIE Server An unmanaged KIE Server is a standalone instance, and therefore must be configured individually using the REST or JMS API of KIE Server itself. The server automatically persists the configuration to a file, which is used as the internal server state in case of restarts. The configuration is updated during the following operations: Deploy KIE container Undeploy KIE container Start KIE container Stop KIE container Note If KIE Server is restarted, it attempts to re-establish the same state that was persisted before shutdown. Therefore, KIE containers (deployment units) that were running will be started, but the ones that were stopped will not.
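As an illustration of configuring an unmanaged KIE Server over REST, the following is a minimal sketch of deploying a KIE container; the host, port, credentials, container ID, and Maven coordinates (group-id, artifact-id, version) are placeholders that must match your environment:
curl -X PUT -u kieserverUser:kieserverPassword -H "Content-Type: application/json" -d '{"container-id": "mycontainer", "release-id": {"group-id": "com.example", "artifact-id": "my-kjar", "version": "1.0.0"}}' http://localhost:8080/kie-server/services/rest/server/containers/mycontainer
A successful call deploys and starts the container, and the server records it in its persisted state so that the container is restored after a restart.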
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/managing_red_hat_decision_manager_and_kie_server_settings/kie-server-unmanaged-server-config-proc
Access management and authentication
Access management and authentication Red Hat Ansible Automation Platform 2.5 Configure role based access control, authenticators and authenticator maps in Ansible Automation Platform Red Hat Customer Content Services
[ "sudo subscription-manager register --username <USDINSERT_USERNAME_HERE> --password <USDINSERT_PASSWORD_HERE>", "sudo subscription-manager list --available --all | grep \"Ansible Automation Platform\" -B 3 -A 6", "Subscription Name: Red Hat Ansible Automation, Premium (5000 Managed Nodes) Provides: Red Hat Ansible Engine Red Hat Ansible Automation Platform SKU: MCT3695 Contract: ```` Pool ID: <pool_id> Provides Management: No Available: 4999 Suggested: 1", "sudo subscription-manager attach --pool=<pool_id>", "sudo subscription-manager remove --pool=<pool_id>", "sudo subscription-manager list --consumed", "sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms", "sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms", "CN=josie,CN=users,DC=website,DC=com", "{\"name_attr\": \"cn\", \"member_attr\": \"member\"}", "openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes", "GET_ALL_EXTRA_DATA: true", "{ \"en-US\": { \"url\": \"http://www.example.com\", \"displayname\": \"Example\", \"name\": \"example\" } }", "{ \"givenName\": \"Some User\", \"emailAddress\": \"[email protected]\" }", "{ \"givenName\": \"Some User\", \"emailAddress\": \"[email protected]\" }", "{ \"sign_request\": True, }", "// Indicates whether the <samlp:AuthnRequest> messages sent by this SP // will be signed. [Metadata of the SP will offer this info] \"authnRequestsSigned\": false, // Indicates a requirement for the <samlp:Response>, <samlp:LogoutRequest> // and <samlp:LogoutResponse> elements received by this SP to be signed. \"wantMessagesSigned\": false, // Indicates a requirement for the <saml:Assertion> elements received by // this SP to be signed. [Metadata of the SP will offer this info] \"wantAssertionsSigned\": false, // Authentication context. 
// Set to false and no AuthContext will be sent in the AuthNRequest, // Set true or don't present this parameter and you will get an AuthContext 'exact' 'urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport' // Set an array with the possible auth context values: array ('urn:oasis:names:tc:SAML:2.0:ac:classes:Password', 'urn:oasis:names:tc:SAML:2.0:ac:classes:X509'), \"requestedAuthnContext\": true,", "- Department - UserType - Organization", "CORS_ORIGIN_ALLOW_ALL = True CORS_ALLOWED_ORIGIN_REGEXES = [ r\"http://django-oauth-toolkit.herokuapp.com*\", r\"http://www.example.com*\" ]", "{ \"id\": 35, \"type\": \"access_token\", \"user\": 1, \"token\": \"omMFLk7UKpB36WN2Qma9H3gbwEBSOc\", \"refresh_token\": \"AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z\", \"application\": 6, \"expires\": \"2017-12-06T03:46:17.087022Z\", \"scope\": \"read write\" }", "curl -X POST -d \"grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z\" -u \"gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo\" http://<gateway>/o/token/ -i", "HTTP/1.1 200 OK Server: nginx/1.12.2 Date: Tue, 05 Dec 2017 17:54:06 GMT Content-Type: application/json Content-Length: 169 Connection: keep-alive Content-Language: en Vary: Accept-Language, Cookie Pragma: no-cache Cache-Control: no-store Strict-Transport-Security: max-age=15768000 {\"access_token\": \"NDInWxGJI4iZgqpsreujjbvzCfJqgR\", \"token_type\": \"Bearer\", \"expires_in\": 315360000000, \"refresh_token\": \"DqOrmz8bx3srlHkZNKmDpqA86bnQkT\", \"scope\": \"read write\"}", "curl -X POST -d \"token=rQONsve372fQwuc2pn76k3IHDCYpi7\" -u \"gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo\" http://<gateway>/o/revoke_token/ -i", "aap-gateway-manage create_oauth2_token --user example_user New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2", "aap-gateway-manage revoke_oauth2_tokens", "aap-gateway-manage revoke_oauth2_tokens --revoke_refresh", "aap-gateway-manage revoke_oauth2_tokens --user example_user", "aap-gateway-manage revoke_oauth2_tokens --user example_user --revoke_refresh", "You have already reached the maximum number of 1 hosts allowed for your organization. Contact your System Administrator for assistance." ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/access_management_and_authentication/index
Chapter 6. Maintenance procedures
Chapter 6. Maintenance procedures 6.1. Update RHEL and the RHEL HA Add-On For more information, see Recommendations: Applying package updates in a RHEL High Availability cluster . Note For two-node cluster setups, it is not necessary to manually move the resources to the other HA cluster node before placing a HA cluster node in standby mode: standby mode moves or stops the resources running on that node according to the HA cluster configuration. Also, to minimize downtime of the SAP system, it is recommended to first update the HA cluster node that runs the "less critical" resources, such as the ERS instance. After that node has been updated and you have verified that the resources it was running before the update are running again, update the other HA cluster node, which runs the "critical" resources, such as the (A)SCS instance.
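The node-by-node flow described above can be sketched as follows; the host name node2 is an example only, and the exact update and verification steps must follow the referenced recommendations:
pcs node standby node2      # node2 currently runs the "less critical" resources, for example the ERS instance
yum update                  # update RHEL and RHEL HA Add-On packages on node2
pcs node unstandby node2
pcs status                  # verify that the resources that were running before the update are running again
Repeat the same steps on the HA cluster node that runs the "critical" resources, such as the (A)SCS instance.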
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/asmb_maintain_proc_v8-configuring-clusters-to-manage
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting comments on specific passages View the documentation in the HTML format and ensure that you see the Feedback button in the upper right corner after the page fully loads. Use your cursor to highlight the part of the text that you want to comment on. Click the Add Feedback button that appears near the highlighted text. Add your feedback and click Submit . Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_llvm_16.0.6_toolset/proc_providing-feedback-on-red-hat-documentation_using-llvm-toolset
Chapter 113. Setting up Samba on an IdM domain member
Chapter 113. Setting up Samba on an IdM domain member You can set up Samba on a host that is joined to a Red Hat Identity Management (IdM) domain. Users from IdM and also, if available, from trusted Active Directory (AD) domains, can access shares and printer services provided by Samba. Important Using Samba on an IdM domain member is an unsupported Technology Preview feature and contains certain limitations. For example, IdM trust controllers do not support the Active Directory Global Catalog service, and they do not support resolving IdM groups using the Distributed Computing Environment / Remote Procedure Calls (DCE/RPC) protocols. As a consequence, AD users can only access Samba shares and printers hosted on IdM clients when logged in to other IdM clients; AD users logged into a Windows machine can not access Samba shares hosted on an IdM domain member. Customers deploying Samba on IdM domain members are encouraged to provide feedback to Red Hat. If users from AD domains need to access shares and printer services provided by Samba, ensure the AES encryption type is enabled is AD. For more information, see Enabling the AES encryption type in Active Directory using a GPO . Prerequisites The host is joined as a client to the IdM domain. Both the IdM servers and the client must run on RHEL 8.1 or later. 113.1. Preparing the IdM domain for installing Samba on domain members Before you can set up Samba on an IdM client, you must prepare the IdM domain using the ipa-adtrust-install utility on an IdM server. Note Any system where you run the ipa-adtrust-install command automatically becomes an AD trust controller. However, you must run ipa-adtrust-install only once on an IdM server. Prerequisites IdM server is installed. You have root privileges to install packages and restart IdM services. Procedure Install the required packages: Authenticate as the IdM administrative user: Run the ipa-adtrust-install utility: The DNS service records are created automatically if IdM was installed with an integrated DNS server. If you installed IdM without an integrated DNS server, ipa-adtrust-install prints a list of service records that you must manually add to DNS before you can continue. The script prompts you that the /etc/samba/smb.conf already exists and will be rewritten: The script prompts you to configure the slapi-nis plug-in, a compatibility plug-in that allows older Linux clients to work with trusted users: When prompted, enter the NetBIOS name for the IdM domain or press Enter to accept the name suggested: You are prompted to run the SID generation task to create a SID for any existing users: This is a resource-intensive task, so if you have a high number of users, you can run this at another time. Optional: By default, the Dynamic RPC port range is defined as 49152-65535 for Windows Server 2008 and later. If you need to define a different Dynamic RPC port range for your environment, configure Samba to use different ports and open those ports in your firewall settings. The following example sets the port range to 55000-65000 . Restart the ipa service: Use the smbclient utility to verify that Samba responds to Kerberos authentication from the IdM side: 113.2. Installing and configuring a Samba server on an IdM client You can install and configure Samba on a client enrolled in an IdM domain. Prerequisites Both the IdM servers and the client must run on RHEL 8.1 or later. The IdM domain is prepared as described in Preparing the IdM domain for installing Samba on domain members . 
If IdM has a trust configured with AD, enable the AES encryption type for Kerberos. For example, use a group policy object (GPO) to enable the AES encryption type. For details, see Enabling AES encryption in Active Directory using a GPO . Procedure Install the ipa-client-samba package: Use the ipa-client-samba utility to prepare the client and create an initial Samba configuration: By default, ipa-client-samba automatically adds the [homes] section to the /etc/samba/smb.conf file that dynamically shares a user's home directory when the user connects. If users do not have home directories on this server, or if you do not want to share them, remove the following lines from /etc/samba/smb.conf : Share directories and printers. For details, see the following sections: Setting up a Samba file share that uses POSIX ACLs Setting up a share that uses Windows ACLs Setting up Samba as a print server Open the ports required for a Samba client in the local firewall: Enable and start the smb and winbind services: Verification Run the following verification step on a different IdM domain member that has the samba-client package installed: List the shares on the Samba server using Kerberos authentication: Additional resources ipa-client-samba(1) man page on your system 113.3. Manually adding an ID mapping configuration if IdM trusts a new domain Samba requires an ID mapping configuration for each domain from which users access resources. On an existing Samba server running on an IdM client, you must manually add an ID mapping configuration after the administrator added a new trust to an Active Directory (AD) domain. Prerequisites You configured Samba on an IdM client. Afterward, a new trust was added to IdM. The DES and RC4 encryption types for Kerberos must be disabled in the trusted AD domain. For security reasons, RHEL 8 does not support these weak encryption types. Procedure Authenticate using the host's keytab: Use the ipa idrange-find command to display both the base ID and the ID range size of the new domain. For example, the following command displays the values for the ad.example.com domain: You need the values from the ipabaseid and ipaidrangesize attributes in the steps. To calculate the highest usable ID, use the following formula: With the values from the step, the highest usable ID for the ad.example.com domain is 1918599999 (1918400000 + 200000 - 1). Edit the /etc/samba/smb.conf file, and add the ID mapping configuration for the domain to the [global] section: Specify the value from ipabaseid attribute as the lowest and the computed value from the step as the highest value of the range. Restart the smb and winbind services: Verification List the shares on the Samba server using Kerberos authentication: 113.4. Additional resources Installing an Identity Management client
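Referring back to the ID range calculation in Section 113.3, the highest usable ID can be computed directly in the shell; the values below are the ones from the ad.example.com example and must be replaced with the ipabaseid and ipaidrangesize values of your own trusted domain:
echo $(( 1918400000 + 200000 - 1 ))    # prints 1918599999, the upper bound used in "idmap config AD : range"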
[ "yum install ipa-server-trust-ad samba-client", "kinit admin", "ipa-adtrust-install", "WARNING: The smb.conf already exists. Running ipa-adtrust-install will break your existing Samba configuration. Do you wish to continue? [no]: yes", "Do you want to enable support for trusted domains in Schema Compatibility plugin? This will allow clients older than SSSD 1.9 and non-Linux clients to work with trusted users. Enable trusted domains support in slapi-nis? [no]: yes", "Trust is configured but no NetBIOS domain name found, setting it now. Enter the NetBIOS name for the IPA domain. Only up to 15 uppercase ASCII letters, digits and dashes are allowed. Example: EXAMPLE. NetBIOS domain name [IDM]:", "Do you want to run the ipa-sidgen task? [no]: yes", "net conf setparm global 'rpc server dynamic port range' 55000-65000 firewall-cmd --add-port=55000-65000/tcp firewall-cmd --runtime-to-permanent", "ipactl restart", "smbclient -L ipaserver.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- IPCUSD IPC IPC Service (Samba 4.15.2)", "yum install ipa-client-samba", "ipa-client-samba Searching for IPA server IPA server: DNS discovery Chosen IPA master: idm_server.idm.example.com SMB principal to be created: cifs/ idm_client.idm.example.com @ IDM.EXAMPLE.COM NetBIOS name to be used: IDM_CLIENT Discovered domains to use: Domain name: idm.example.com NetBIOS name: IDM SID: S-1-5-21-525930803-952335037-206501584 ID range: 212000000 - 212199999 Domain name: ad.example.com NetBIOS name: AD SID: None ID range: 1918400000 - 1918599999 Continue to configure the system with these values? [no]: yes Samba domain member is configured. Please check configuration at /etc/samba/smb.conf and start smb and winbind services", "[homes] read only = no", "firewall-cmd --permanent --add-service=samba-client firewall-cmd --reload", "systemctl enable --now smb winbind", "smbclient -L idm_client.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- example Disk IPCUSD IPC IPC Service (Samba 4.15.2)", "kinit -k", "ipa idrange-find --name=\" AD.EXAMPLE.COM _id_range\" --raw --------------- 1 range matched --------------- cn: AD.EXAMPLE.COM _id_range ipabaseid: 1918400000 ipaidrangesize: 200000 ipabaserid: 0 ipanttrusteddomainsid: S-1-5-21-968346183-862388825-1738313271 iparangetype: ipa-ad-trust ---------------------------- Number of entries returned 1 ----------------------------", "maximum_range = ipabaseid + ipaidrangesize - 1", "idmap config AD : range = 1918400000 - 1918599999 idmap config AD : backend = sss", "systemctl restart smb winbind", "smbclient -L idm_client.idm.example.com -U user_name --use-kerberos=required lp_load_ex: changing to config backend registry Sharename Type Comment --------- ---- ------- example Disk IPCUSD IPC IPC Service (Samba 4.15.2)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/setting-up-samba-on-an-idm-domain-member_configuring-and-managing-idm
Chapter 1. RBAC APIs
Chapter 1. RBAC APIs 1.1. ClusterRoleBinding [rbac.authorization.k8s.io/v1] Description ClusterRoleBinding references a ClusterRole, but does not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject. Type object 1.2. ClusterRole [rbac.authorization.k8s.io/v1] Description ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Type object 1.3. RoleBinding [rbac.authorization.k8s.io/v1] Description RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information through the namespace in which it exists. RoleBindings in a given namespace only have effect in that namespace. Type object 1.4. Role [rbac.authorization.k8s.io/v1] Description Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding. Type object
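For illustration, these objects can be created from YAML manifests or with the oc CLI; the following is a minimal sketch in which the role names, namespace, user, and group are hypothetical:
oc create role pod-reader --verb=get,list,watch --resource=pods -n my-project            # namespaced Role
oc create rolebinding read-pods --role=pod-reader --user=alice -n my-project             # RoleBinding granting the Role to a user in that namespace
oc create clusterrole node-reader --verb=get,list,watch --resource=nodes                 # cluster-level ClusterRole
oc create clusterrolebinding read-nodes --clusterrole=node-reader --group=ops-team       # ClusterRoleBinding granting it cluster-wide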
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/rbac_apis/rbac-apis
Chapter 7. Advisories related to this release
Chapter 7. Advisories related to this release The following advisories have been issued to document enhancements, bug fixes, and CVE fixes included in this release. RHEA-2024:9283 RHEA-2024:11235 RHEA-2025:0734
null
https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/release_notes_for_node.js_22/advisories-related-to-current-release-nodejs
Appendix A. Example Configuration: Load Balancing Ceph Object Gateway Servers with HAProxy and Keepalived
Appendix A. Example Configuration: Load Balancing Ceph Object Gateway Servers with HAProxy and Keepalived This appendix provides an example showing the configuration of HAProxy and Keepalived with a Ceph cluster. The Ceph Object Gateway allows you to assign many instances of the object gateway to a single zone so that you can scale out as load increases. Since each object gateway instance has its own IP address, you can use HAProxy and keepalived to balance the load across Ceph Object Gateway servers. In this configuration, HAproxy performs the load balancing across Ceph Object Gateway servers while Keepalived is used to manage the Virtual IP addresses of the Ceph Object Gateway servers and to monitor HAProxy. Another use case for HAProxy and keepalived is to terminate HTTPS at the HAProxy server. Red Hat Ceph Storage (RHCS) 1.3.x uses Civetweb, and the implementation in RHCS 1.3.x does not support HTTPS. You can use an HAProxy server to terminate HTTPS at the HAProxy server and use HTTP between the HAProxy server and the Civetweb gateway instances. This example includes this configuration as part of the procedure. A.1. Prerequisites To set up HAProxy with the Ceph Object Gateway, you must have: A running Ceph cluster; At least two Ceph Object Gateway servers within the same zone configured to run on port 80; At least two servers for HAProxy and keepalived. Note This procedure assumes that you have at least two Ceph Object Gateway servers running, and that you get a valid response when running test scripts over port 80.
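As a preview of the kind of configuration this appendix builds up to, the following is a minimal sketch of the load-balancing portion of an haproxy.cfg; the server names and IP addresses are placeholders for your two Ceph Object Gateway servers listening on port 80:
frontend rgw_frontend
    bind *:80
    default_backend rgw_backend
backend rgw_backend
    balance roundrobin
    option httpchk GET /
    server rgw1 192.168.122.101:80 check
    server rgw2 192.168.122.102:80 check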
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/ceph_example
19.3. Services Configuration Tool
19.3. Services Configuration Tool The Services Configuration Tool is a graphical application developed by Red Hat, Inc to configure which SysV services in the /etc/rc.d/init.d directory are started at boot time (for runlevels 3, 4, and 5) and which xinetd services are enabled. It also allows you to start, stop, and restart SysV services as well as restart xinetd . To start the Services Configuration Tool from the desktop, go to the Main Menu Button (on the Panel) => System Settings => Server Settings => Services or type the command system-config-services at a shell prompt (for example, in an XTerm or a GNOME terminal ). Figure 19.1. Services Configuration Tool The Services Configuration Tool displays the current runlevel as well as the runlevel you are currently editing. To edit a different runlevel, select Edit Runlevel from the pulldown menu and select runlevel 3, 4, or 5. Refer to Section 19.1, "Runlevels" for a description of runlevels. The Services Configuration Tool lists the services from the /etc/rc.d/init.d directory as well as the services controlled by xinetd . Click on the name of the service from the list on the left-hand side of the application to display a brief description of that service as well as the status of the service. If the service is not an xinetd service, the status window shows whether the service is currently running. If the service is controlled by xinetd , the status window displays the phrase xinetd service . To start, stop, or restart a service immediately, select the service from the list and click the appropriate button on the toolbar (or choose the action from the Actions pulldown menu). If the service is an xinetd service, the action buttons are disabled because they can not be started or stopped individually. If you enable/disable an xinetd service by checking or unchecking the checkbox to the service name, you must select File => Save Changes from the pulldown menu to restart xinetd and immediately enable/disable the xinetd service that you changed. xinetd is also configured to remember the setting. You can enable/disable multiple xinetd services at a time and save the changes when you are finished. For example, assume you check rsync to enable it in runlevel 3 and then save the changes. The rsync service is immediately enabled. The time xinetd is started, rsync is still enabled. Warning When you save changes to xinetd services, xinetd is restarted, and the changes take place immediately. When you save changes to other services, the runlevel is reconfigured, but the changes do not take effect immediately. To enable a non- xinetd service to start at boot time for the currently selected runlevel, check the checkbox beside the name of the service in the list. After configuring the runlevel, apply the changes by selecting File => Save Changes from the pulldown menu. The runlevel configuration is changed, but the runlevel is not restarted; thus, the changes do not take place immediately. For example, assume you are configuring runlevel 3. If you change the value for the httpd service from checked to unchecked and then select Save Changes , the runlevel 3 configuration changes so that httpd is not started at boot time. However, runlevel 3 is not reinitialized, so httpd is still running. Select one of following options at this point: Stop the httpd service - Stop the service by selecting it from the list and clicking the Stop button. A message appears stating that the service was stopped successfully. 
Reinitialize the runlevel - Reinitialize the runlevel by going to a shell prompt and typing the command telinit 3 (where 3 is the runlevel number). This option is recommended if you change the Start at Boot value of multiple services and want to activate the changes immediately. Do nothing else - You do not have to stop the httpd service. You can wait until the system is rebooted for the service to stop. The next time the system is booted, the runlevel is initialized without the httpd service running. To add a service to a runlevel, select the runlevel from the Edit Runlevel pulldown menu, and then select Actions => Add Service . To delete a service from a runlevel, select the runlevel from the Edit Runlevel pulldown menu, select the service to be deleted from the list on the left, and select Actions => Delete Service .
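If you prefer to confirm the result from a shell prompt rather than from the graphical tool, the standard service utilities can be used; for example, with the httpd service discussed above:
/sbin/service httpd status          # reports whether httpd is currently running
/sbin/chkconfig --list httpd        # shows the runlevels in which httpd starts at boot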
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/controlling_access_to_services-rhservicestool
Chapter 9. Clair configuration overview
Chapter 9. Clair configuration overview Clair is configured by a structured YAML file. Each Clair node needs to specify what mode it will run in and a path to a configuration file through CLI flags or environment variables. For example: USD clair -conf ./path/to/config.yaml -mode indexer or USD clair -conf ./path/to/config.yaml -mode matcher The aforementioned commands each start two Clair nodes using the same configuration file. One runs the indexing facilities, while other runs the matching facilities. If you are running Clair in combo mode, you must supply the indexer, matcher, and notifier configuration blocks in the configuration. 9.1. Information about using Clair in a proxy environment Environment variables respected by the Go standard library can be specified if needed, for example: HTTP_PROXY USD export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port> HTTPS_PROXY . USD export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port> SSL_CERT_DIR USD export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates> NO_PROXY USD export NO_PROXY=<comma_separated_list_of_hosts_and_domains> If you are using a proxy server in your environment with Clair's updater URLs, you must identify which URL needs to be added to the proxy allowlist to ensure that Clair can access them unimpeded. For example, the osv updater requires access to https://osv-vulnerabilities.storage.googleapis.com to fetch ecosystem data dumps. In this scenario, the URL must be added to the proxy allowlist. For a full list of updater URLs, see "Clair updater URLs". You must also ensure that the standard Clair URLs are added to the proxy allowlist: https://search.maven.org/solrsearch/select https://catalog.redhat.com/api/containers/ https://access.redhat.com/security/data/metrics/repository-to-cpe.json https://access.redhat.com/security/data/metrics/container-name-repos-map.json When configuring the proxy server, take into account any authentication requirements or specific proxy settings needed to enable seamless communication between Clair and these URLs. By thoroughly documenting and addressing these considerations, you can ensure that Clair functions effectively while routing its updater traffic through the proxy. 9.2. Clair configuration reference The following YAML shows an example Clair configuration: http_listen_addr: "" introspection_addr: "" log_level: "" tls: {} indexer: connstring: "" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: "" indexer_addr: "" migrations: false period: "" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: "" migrations: false indexer_addr: "" matcher_addr: "" poll_interval: "" delivery_interval: "" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: "" probability: null jaeger: agent: endpoint: "" collector: endpoint: "" username: null password: null service_name: "" tags: nil buffer_max: 0 metrics: name: "" prometheus: endpoint: null dogstatsd: url: "" Note The above YAML file lists every key for completeness. Using this configuration file as-is will result in some options not having their defaults set normally. 9.3. Clair general fields The following table describes the general configuration fields available for a Clair deployment. Field Typhttp_listen_ae Description http_listen_addr String Configures where the HTTP API is exposed. 
Default: :6060 introspection_addr String Configures where Clair's metrics and health endpoints are exposed. log_level String Sets the logging level. Requires one of the following strings: debug-color , debug , info , warn , error , fatal , panic tls String A map containing the configuration for serving the HTTP API of TLS/SSL and HTTP/2. .cert String The TLS certificate to be used. Must be a full-chain certificate. Example configuration for general Clair fields The following example shows a Clair configuration. Example configuration for general Clair fields # ... http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info # ... 9.4. Clair indexer configuration fields The following table describes the configuration fields for Clair's indexer component. Field Type Description indexer Object Provides Clair indexer node configuration. .airgap Boolean Disables HTTP access to the internet for indexers and fetchers. Private IPv4 and IPv6 addresses are allowed. Database connections are unaffected. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .index_report_request_concurrency Integer Rate limits the number of index report creation requests. Setting this to 0 attemps to auto-size this value. Setting a negative value means unlimited. The auto-sizing is a multiple of the number of available cores. The API returns a 429 status code if concurrency is exceeded. .scanlock_retry Integer A positive integer representing seconds. Concurrent indexers lock on manifest scans to avoid clobbering. This value tunes how often a waiting indexer polls for the lock. .layer_scan_concurrency Integer Positive integer limiting the number of concurrent layer scans. Indexers will match a manifest's layer concurrently. This value tunes the number of layers an indexer scans in parallel. .migrations Boolean Whether indexer nodes handle migrations to their database. .scanner String Indexer configuration. Scanner allows for passing configuration options to layer scanners. The scanner will have this configuration pass to it on construction if designed to do so. .scanner.dist String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.package String A map with the name of a particular scanner and arbitrary YAML as a value. .scanner.repo String A map with the name of a particular scanner and arbitrary YAML as a value. Example indexer configuration The following example shows a hypothetical indexer configuration for Clair. Example indexer configuration # ... indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true # ... 9.5. Clair matcher configuration fields The following table describes the configuration fields for Clair's matcher component. Note Differs from matchers configuration fields. Field Type Description matcher Object Provides Clair matcher node configuration. .cache_age String Controls how long users should be hinted to cache responses for. .connstring String A Postgres connection string. Accepts format as a URL or libpq connection string. .max_conn_pool Integer Limits the database connection pool size. Clair allows for a custom connection pool size. This number directly sets how many active database connections are allowed concurrently. This parameter will be ignored in a future version. Users should configure this through the connection string. 
.indexer_addr String A matcher contacts an indexer to create a vulnerability report. The location of this indexer is required. Defaults to 30m . .migrations Boolean Whether matcher nodes handle migrations to their databases. .period String Determines how often updates for new security advisories take place. Defaults to 30m . .disable_updaters Boolean Whether to run background updates or not. Default: False .update_retention Integer Sets the number of update operations to retain between garbage collection cycles. This should be set to a safe MAX value based on database size constraints. Defaults to 10m . If a value of less than 0 is provided, garbage collection is disabled. 2 is the minimum value to ensure updates can be compared to notifications. Example matcher configuration Example matcher configuration # ... matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2 # ... 9.6. Clair matchers configuration fields The following table describes the configuration fields for Clair's matchers component. Note Differs from matcher configuration fields. Table 9.1. Matchers configuration fields Field Type Description matchers Array of strings Provides configuration for the in-tree matchers . .names String A list of string values informing the matcher factory about enabled matchers. If value is set to null , the default list of matchers run. The following strings are accepted: alpine-matcher , aws-matcher , debian-matcher , gobin , java-maven , oracle , photon , python , rhel , rhel-container-matcher , ruby , suse , ubuntu-matcher .config String Provides configuration to a specific matcher. A map keyed by the name of the matcher containing a sub-object which will be provided to the matchers factory constructor. For example: Example matchers configuration The following example shows a hypothetical Clair deployment that only requires only the alpine , aws , debian , oracle matchers. Example matchers configuration # ... matchers: names: - "alpine-matcher" - "aws" - "debian" - "oracle" # ... 9.7. Clair updaters configuration fields The following table describes the configuration fields for Clair's updaters component. Table 9.2. Updaters configuration fields Field Type Description updaters Object Provides configuration for the matcher's update manager. .sets String A list of values informing the update manager which updaters to run. If value is set to null , the default set of updaters runs the following: alpine , aws , clair.cvss , debian , oracle , photon , osv , rhel , rhcc suse , ubuntu If left blank, zero updaters run. .config String Provides configuration to specific updater sets. A map keyed by the name of the updater set containing a sub-object which will be provided to the updater set's constructor. For a list of the sub-objects for each updater, see "Advanced updater configuration". Example updaters configuration In the following configuration, only the rhel set is configured. The ignore_unpatched variable, which is specific to the rhel updater, is also defined. Example updaters configuration # ... updaters: sets: - rhel config: rhel: ignore_unpatched: false # ... 9.8. Clair notifier configuration fields The general notifier configuration fields for Clair are listed below. 
Field Type Description notifier Object Provides Clair notifier node configuration. .connstring String Postgres connection string. Accepts format as URL, or libpq connection string. .migrations Boolean Whether notifier nodes handle migrations to their database. .indexer_addr String A notifier contacts an indexer to create or obtain manifests affected by vulnerabilities. The location of this indexer is required. .matcher_addr String A notifier contacts a matcher to list update operations and acquire diffs. The location of this matcher is required. .poll_interval String The frequency at which the notifier will query a matcher for update operations. .delivery_interval String The frequency at which the notifier attempts delivery of created, or previously failed, notifications. .disable_summary Boolean Controls whether notifications should be summarized to one per manifest. Example notifier configuration The following notifier snippet is for a minimal configuration. Example notifier configuration # ... notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" headers: "" amqp: null stomp: null # ... 9.8.1. Clair webhook configuration fields The following webhook fields are available for the Clair notifier environment. Table 9.3. Clair webhook fields .webhook Object Configures the notifier for webhook delivery. .webhook.target String URL where the webhook will be delivered. .webhook.callback String The callback URL where notifications can be retrieved. The notification ID will be appended to this URL. This will typically be where the Clair notifier is hosted. .webhook.headers String A map associating a header name to a list of values. Example webhook configuration Example webhook configuration # ... notifier: # ... webhook: target: "http://webhook/" callback: "http://clair-notifier/notifier/api/v1/notifications" # ... 9.8.2. Clair amqp configuration fields The following Advanced Message Queuing Protocol (AMQP) fields are available for the Clair notifier environment. .amqp Object Configures the notifier for AMQP delivery. [NOTE] ==== Clair does not declare any AMQP components on its own. All attempts to use an exchange or queue are passive only and will fail. Broker administrators should setup exchanges and queues ahead of time. ==== .amqp.direct Boolean If true , the notifier will deliver individual notifications (not a callback) to the configured AMQP broker. .amqp.rollup Integer When amqp.direct is set to true , this value informs the notifier of how many notifications to send in a direct delivery. For example, if direct is set to true , and amqp.rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .amqp.exchange Object The AMQP exchange to connect to. .amqp.exchange.name String The name of the exchange to connect to. .amqp.exchange.type String The type of the exchange. Typically one of the following: direct , fanout , topic , headers . .amqp.exchange.durability Boolean Whether the configured queue is durable. .amqp.exchange.auto_delete Boolean Whether the configured queue uses an auto_delete_policy . 
.amqp.routing_key String The name of the routing key each notification is sent with. .amqp.callback String If amqp.direct is set to false , this URL is provided in the notification callback sent to the broker. This URL should point to Clair's notification API endpoint. .amqp.uris String A list of one or more AMQP brokers to connect to, in priority order. .amqp.tls Object Configures TLS/SSL connection to an AMQP broker. .amqp.tls.root_ca String The filesystem path where a root CA can be read. .amqp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. [NOTE] ==== Clair also allows SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .amqp.tls.key String The filesystem path where a TLS/SSL private key can be read. Example AMQP configuration The following example shows a hypothetical AMQP configuration for Clair. Example AMQP configuration # ... notifier: # ... amqp: exchange: name: "" type: "direct" durable: true auto_delete: false uris: ["amqp://user:pass@host:10000/vhost"] direct: false routing_key: "notifications" callback: "http://clair-notifier/notifier/api/v1/notifications" tls: root_ca: "optional/path/to/rootca" cert: "madatory/path/to/cert" key: "madatory/path/to/key" # ... 9.8.3. Clair STOMP configuration fields The following Simple Text Oriented Message Protocol (STOMP) fields are available for the Clair notifier environment. .stomp Object Configures the notifier for STOMP delivery. .stomp.direct Boolean If true , the notifier delivers individual notifications (not a callback) to the configured STOMP broker. .stomp.rollup Integer If stomp.direct is set to true , this value limits the number of notifications sent in a single direct delivery. For example, if direct is set to true , and rollup is set to 5 , the notifier delivers no more than 5 notifications in a single JSON payload to the broker. Setting the value to 0 effectively sets it to 1 . .stomp.callback String If stomp.callback is set to false , the provided URL in the notification callback is sent to the broker. This URL should point to Clair's notification API endpoint. .stomp.destination String The STOMP destination to deliver notifications to. .stomp.uris String A list of one or more STOMP brokers to connect to in priority order. .stomp.tls Object Configured TLS/SSL connection to STOMP broker. .stomp.tls.root_ca String The filesystem path where a root CA can be read. [NOTE] ==== Clair also respects SSL_CERT_DIR , as documented for the Go crypto/x509 package. ==== .stomp.tls.cert String The filesystem path where a TLS/SSL certificate can be read. .stomp.tls.key String The filesystem path where a TLS/SSL private key can be read. .stomp.user String Configures login details for the STOMP broker. .stomp.user.login String The STOMP login to connect with. .stomp.user.passcode String The STOMP passcode to connect with. Example STOMP configuration The following example shows a hypothetical STOMP configuration for Clair. Example STOMP configuration # ... notifier: # ... stomp: desitnation: "notifications" direct: false callback: "http://clair-notifier/notifier/api/v1/notifications" login: login: "username" passcode: "passcode" tls: root_ca: "optional/path/to/rootca" cert: "madatory/path/to/cert" key: "madatory/path/to/key" # ... 9.9. Clair authorization configuration fields The following authorization configuration fields are available for Clair. Field Type Description auth Object Defines Clair's external and intra-service JWT based authentication. 
If multiple auth mechanisms are defined, Clair picks one. Currently, multiple mechanisms are unsupported. .psk String Defines pre-shared key authentication. .psk.key String A shared base64 encoded key distributed between all parties signing and verifying JWTs. .psk.iss String A list of JWT issuers to verify. An empty list accepts any issuer in a JWT claim. Example authorization configuration The following authorization snippet is for a minimal configuration. Example authorization configuration # ... auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: ["quay"] # ... 9.10. Clair trace configuration fields The following trace configuration fields are available for Clair. Field Type Description trace Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the application traces will belong to. .probability Integer The probability a trace will occur. .jaeger Object Defines values for Jaeger tracing. .jaeger.agent Object Defines values for configuring delivery to a Jaeger agent. .jaeger.agent.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector Object Defines values for configuring delivery to a Jaeger collector. .jaeger.collector.endpoint String An address in the <host>:<post> syntax where traces can be submitted. .jaeger.collector.username String A Jaeger username. .jaeger.collector.password String A Jaeger password. .jaeger.service_name String The service name registered in Jaeger. .jaeger.tags String Key-value pairs to provide additional metadata. .jaeger.buffer_max Integer The maximum number of spans that can be buffered in memory before they are sent to the Jaeger backend for storage and analysis. Example trace configuration The following example shows a hypothetical trace configuration for Clair. Example trace configuration # ... trace: name: "jaeger" probability: 1 jaeger: agent: endpoint: "localhost:6831" service_name: "clair" # ... 9.11. Clair metrics configuration fields The following metrics configuration fields are available for Clair. Field Type Description metrics Object Defines distributed tracing configuration based on OpenTelemetry. .name String The name of the metrics in use. .prometheus String Configuration for a Prometheus metrics exporter. .prometheus.endpoint String Defines the path where metrics are served. Example metrics configuration The following example shows a hypothetical metrics configuration for Clair. Example metrics configuration # ... metrics: name: "prometheus" prometheus: endpoint: "/metricsz" # ...
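The field tables above are easiest to verify against a real config.yaml before starting Clair. The following is a minimal, hypothetical Python sketch (not part of Clair or this documentation) that loads a configuration file with PyYAML and checks a few of the constraints described above: a connection string for each node type, an update_retention value that respects the documented minimum of 2, and a webhook block that carries both a target and a callback. The file path and the default retention value are assumptions.

# Hypothetical sanity check for a Clair config.yaml; field names follow the tables above.
import sys
import yaml  # PyYAML, assumed to be installed

def check(path="config.yaml"):
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    problems = []
    # Each node type needs a Postgres connection string (URL or libpq format).
    for section in ("indexer", "matcher", "notifier"):
        if not (cfg.get(section) or {}).get("connstring"):
            problems.append(f"{section}.connstring is empty")
    # 2 is the documented minimum for update_retention; a negative value disables garbage collection.
    retention = (cfg.get("matcher") or {}).get("update_retention", 10)
    if 0 <= retention < 2:
        problems.append("matcher.update_retention is below the documented minimum of 2")
    # A webhook delivery needs both a target URL and a callback URL.
    webhook = (cfg.get("notifier") or {}).get("webhook") or {}
    if webhook and not (webhook.get("target") and webhook.get("callback")):
        problems.append("notifier.webhook needs both target and callback")
    return problems

if __name__ == "__main__":
    issues = check(sys.argv[1] if len(sys.argv) > 1 else "config.yaml")
    print("\n".join(issues) if issues else "no obvious problems found")

Run it against the same file you pass to clair -conf; it only reads the file and does not change it.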
[ "clair -conf ./path/to/config.yaml -mode indexer", "clair -conf ./path/to/config.yaml -mode matcher", "export HTTP_PROXY=http://<user_name>:<password>@<proxy_host>:<proxy_port>", "export HTTPS_PROXY=https://<user_name>:<password>@<proxy_host>:<proxy_port>", "export SSL_CERT_DIR=/<path>/<to>/<ssl>/<certificates>", "export NO_PROXY=<comma_separated_list_of_hosts_and_domains>", "http_listen_addr: \"\" introspection_addr: \"\" log_level: \"\" tls: {} indexer: connstring: \"\" scanlock_retry: 0 layer_scan_concurrency: 5 migrations: false scanner: {} airgap: false matcher: connstring: \"\" indexer_addr: \"\" migrations: false period: \"\" disable_updaters: false update_retention: 2 matchers: names: nil config: nil updaters: sets: nil config: nil notifier: connstring: \"\" migrations: false indexer_addr: \"\" matcher_addr: \"\" poll_interval: \"\" delivery_interval: \"\" disable_summary: false webhook: null amqp: null stomp: null auth: psk: nil trace: name: \"\" probability: null jaeger: agent: endpoint: \"\" collector: endpoint: \"\" username: null password: null service_name: \"\" tags: nil buffer_max: 0 metrics: name: \"\" prometheus: endpoint: null dogstatsd: url: \"\"", "http_listen_addr: 0.0.0.0:6060 introspection_addr: 0.0.0.0:8089 log_level: info", "indexer: connstring: host=quay-server.example.com port=5433 dbname=clair user=clairuser password=clairpass sslmode=disable scanlock_retry: 10 layer_scan_concurrency: 5 migrations: true", "matcher: connstring: >- host=<DB_HOST> port=5432 dbname=<matcher> user=<DB_USER> password=D<B_PASS> sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ disable_updaters: false migrations: true period: 6h update_retention: 2", "matchers: names: - \"alpine-matcher\" - \"aws\" - \"debian\" - \"oracle\"", "updaters: sets: - rhel config: rhel: ignore_unpatched: false", "notifier: connstring: >- host=DB_HOST port=5432 dbname=notifier user=DB_USER password=DB_PASS sslmode=verify-ca sslcert=/etc/clair/ssl/cert.pem sslkey=/etc/clair/ssl/key.pem sslrootcert=/etc/clair/ssl/ca.pem indexer_addr: http://clair-v4/ matcher_addr: http://clair-v4/ delivery_interval: 5s migrations: true poll_interval: 15s webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" headers: \"\" amqp: null stomp: null", "notifier: webhook: target: \"http://webhook/\" callback: \"http://clair-notifier/notifier/api/v1/notifications\"", "notifier: amqp: exchange: name: \"\" type: \"direct\" durable: true auto_delete: false uris: [\"amqp://user:pass@host:10000/vhost\"] direct: false routing_key: \"notifications\" callback: \"http://clair-notifier/notifier/api/v1/notifications\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"", "notifier: stomp: desitnation: \"notifications\" direct: false callback: \"http://clair-notifier/notifier/api/v1/notifications\" login: login: \"username\" passcode: \"passcode\" tls: root_ca: \"optional/path/to/rootca\" cert: \"madatory/path/to/cert\" key: \"madatory/path/to/key\"", "auth: psk: key: MTU5YzA4Y2ZkNzJoMQ== 1 iss: [\"quay\"]", "trace: name: \"jaeger\" probability: 1 jaeger: agent: endpoint: \"localhost:6831\" service_name: \"clair\"", "metrics: name: \"prometheus\" prometheus: endpoint: \"/metricsz\"" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/vulnerability_reporting_with_clair_on_red_hat_quay/config-fields-overview
15.4. Managing Bookmarks
15.4. Managing Bookmarks You can save a reference to a location by bookmarking it. Procedure 15.3. To bookmark a Location: Select the folder or file you want to bookmark. Press Ctrl + D . The first time a bookmark is activated, the GVFS subsystem looks for existing mounts and spawns a new one if not already present. This way you are able to authenticate even within the open or save dialog. Bookmarks are well integrated in GTK+ and the GNOME Desktop: every application that presents a standard GTK+ open or save dialog (technically called GtkFileChooser ) lists bookmarks in the left panel of the dialog. Also Nautilus and its clones present bookmarks in a sidebar or, more universally, in the Files menu. Note If you have no pages bookmarked yet, the Bookmarks label does not display. Besides Bookmarks , all other available GVFS volumes and mounts are listed in the GtkFileChooser sidebar. Sometimes a bookmark and a GVFS volume combine into a single item to prevent duplication and confusion. Bookmarks then can have eject icon just like GVFS mounts. Bookmarks are located in the ~/.config/gtk-3.0/bookmarks file. In the example below, the bookmarked locations are ~/Music , ~/Pictures , ~/Videos , ~/Downloads , and ~/bin , so the content of the ~/.config/gtk-3.0/bookmarks file looks as follows: Example 15.3. The ~/.config/gtk-3.0/bookmarks File Replace username with the user name you want to use. Procedure 15.4. To edit Bookmarks: Open the Files menu on the top bar. Click Bookmark to open the bookmark editor.
[ "file:///home/ username /Music file:///home/ username /Pictures file:///home/ username /Videos file:///home/ username /Downloads file:///home/ username /bin" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/desktop_migration_and_administration_guide/nautilus-gtkfilechooser-bookmarks
Chapter 25. OperatorPKI [network.operator.openshift.io/v1]
Chapter 25. OperatorPKI [network.operator.openshift.io/v1] Description OperatorPKI is a simple certificate authority. It is not intended for external use - rather, it is internal to the network operator. The CNO creates a CA and a certificate signed by that CA. The certificate has both ClientAuth and ServerAuth extended usages enabled. A Secret called <name>-ca with two data keys: tls.key - the private key tls.crt - the CA certificate A ConfigMap called <name>-ca with a single data key: cabundle.crt - the CA certificate(s) A Secret called <name>-cert with two data keys: tls.key - the private key tls.crt - the certificate, signed by the CA The CA certificate will have a validity of 10 years, rotated after 9. The target certificate will have a validity of 6 months, rotated after 3 The CA certificate will have a CommonName of "<namespace>_<name>-ca@<timestamp>", where <timestamp> is the last rotation time. Type object Required spec 25.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorPKISpec is the PKI configuration. status object OperatorPKIStatus is not implemented. 25.1.1. .spec Description OperatorPKISpec is the PKI configuration. Type object Required targetCert Property Type Description targetCert object targetCert configures the certificate signed by the CA. It will have both ClientAuth and ServerAuth enabled 25.1.2. .spec.targetCert Description targetCert configures the certificate signed by the CA. It will have both ClientAuth and ServerAuth enabled Type object Required commonName Property Type Description commonName string commonName is the value in the certificate's CN 25.1.3. .status Description OperatorPKIStatus is not implemented. Type object 25.2. API endpoints The following API endpoints are available: /apis/network.operator.openshift.io/v1/operatorpkis GET : list objects of kind OperatorPKI /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis DELETE : delete collection of OperatorPKI GET : list objects of kind OperatorPKI POST : create an OperatorPKI /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis/{name} DELETE : delete an OperatorPKI GET : read the specified OperatorPKI PATCH : partially update the specified OperatorPKI PUT : replace the specified OperatorPKI 25.2.1. /apis/network.operator.openshift.io/v1/operatorpkis HTTP method GET Description list objects of kind OperatorPKI Table 25.1. HTTP responses HTTP code Reponse body 200 - OK OperatorPKIList schema 401 - Unauthorized Empty 25.2.2. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis HTTP method DELETE Description delete collection of OperatorPKI Table 25.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorPKI Table 25.3. HTTP responses HTTP code Reponse body 200 - OK OperatorPKIList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorPKI Table 25.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 25.5. Body parameters Parameter Type Description body OperatorPKI schema Table 25.6. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 201 - Created OperatorPKI schema 202 - Accepted OperatorPKI schema 401 - Unauthorized Empty 25.2.3. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis/{name} Table 25.7. Global path parameters Parameter Type Description name string name of the OperatorPKI HTTP method DELETE Description delete an OperatorPKI Table 25.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 25.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorPKI Table 25.10. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorPKI Table 25.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 25.12. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorPKI Table 25.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 25.14. Body parameters Parameter Type Description body OperatorPKI schema Table 25.15. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 201 - Created OperatorPKI schema 401 - Unauthorized Empty
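The endpoints above can be exercised with any Kubernetes client. The following read-only sketch is illustrative and not part of the product documentation: it uses the kubernetes Python client to list OperatorPKI objects cluster-wide and print the required commonName field; the kubeconfig loading method and the commented namespace and object name are placeholders.

# Hypothetical read-only sketch against the GET endpoints listed above.
from kubernetes import client, config

GROUP, VERSION, PLURAL = "network.operator.openshift.io", "v1", "operatorpkis"

def main():
    config.load_kube_config()            # or config.load_incluster_config() inside a pod
    api = client.CustomObjectsApi()

    # GET /apis/network.operator.openshift.io/v1/operatorpkis
    pkis = api.list_cluster_custom_object(GROUP, VERSION, PLURAL)
    for item in pkis.get("items", []):
        meta = item["metadata"]
        cn = item.get("spec", {}).get("targetCert", {}).get("commonName")
        print(f'{meta["namespace"]}/{meta["name"]}: commonName={cn}')

    # GET .../namespaces/{namespace}/operatorpkis/{name}, with placeholder values:
    # api.get_namespaced_custom_object(GROUP, VERSION, "<namespace>", PLURAL, "<name>")

if __name__ == "__main__":
    main()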
[ "More specifically, given an OperatorPKI with <name>, the CNO will manage:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/operatorpki-network-operator-openshift-io-v1
Chapter 17. Uninstalling a cluster on AWS
Chapter 17. Uninstalling a cluster on AWS You can remove a cluster that you deployed to Amazon Web Services (AWS). 17.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 17.2. Deleting Amazon Web Services resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Amazon Web Services (AWS) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on AWS that uses short-term credentials. Procedure Delete the AWS resources that ccoctl created by running the following command: USD ccoctl aws delete \ --name=<name> \ 1 --region=<aws_region> 2 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <aws_region> is the AWS region in which to delete cloud resources. 
Example output 2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted Verification To verify that the resources are deleted, query AWS. For more information, refer to AWS documentation. 17.3. Deleting a cluster with a configured AWS Local Zone infrastructure After you install a cluster on Amazon Web Services (AWS) into an existing Virtual Private Cloud (VPC), and you set subnets for each Local Zone location, you can delete the cluster and any AWS resources associated with it. The example in the procedure assumes that you created a VPC and its subnets by using a CloudFormation template. Prerequisites You know the name of the CloudFormation stacks, <local_zone_stack_name> and <vpc_stack_name> , that were used during the creation of the network. You need the name of the stack to delete the cluster. You have access rights to the directory that contains the installation files that were created by the installation program. Your account includes a policy that provides you with permissions to delete the CloudFormation stack. Procedure Change to the directory that contains the stored installation program, and delete the cluster by using the destroy cluster command: USD ./openshift-install destroy cluster --dir <installation_directory> \ 1 --log-level=debug 2 1 For <installation_directory> , specify the directory that stored any files created by the installation program. 2 To view different log details, specify error , info , or warn instead of debug . Delete the CloudFormation stack for the Local Zone subnet: USD aws cloudformation delete-stack --stack-name <local_zone_stack_name> Delete the stack of resources that represent the VPC: USD aws cloudformation delete-stack --stack-name <vpc_stack_name> Verification Check that you removed the stack resources by issuing the following commands in the AWS CLI. 
The AWS CLI outputs that no template component exists. USD aws cloudformation describe-stacks --stack-name <local_zone_stack_name> USD aws cloudformation describe-stacks --stack-name <vpc_stack_name> Additional resources See Working with stacks in the AWS documentation for more information about AWS CloudFormation stacks. Opt into AWS Local Zones AWS Local Zones available locations AWS Local Zones features
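The describe-stacks verification can also be scripted. The sketch below is illustrative only: it uses boto3 to confirm that both stacks are gone, relies on the region configured for your AWS CLI profile, and keeps the same stack-name placeholders used above.

# Hypothetical verification: a deleted stack makes DescribeStacks fail with "does not exist".
import boto3
from botocore.exceptions import ClientError

STACKS = ["<local_zone_stack_name>", "<vpc_stack_name>"]   # placeholders from the procedure
cf = boto3.client("cloudformation")                         # uses your default AWS region/profile

for name in STACKS:
    try:
        cf.describe_stacks(StackName=name)
        print(f"{name}: still present (deletion may be in progress)")
    except ClientError as err:
        if "does not exist" in str(err):
            print(f"{name}: deleted")
        else:
            raise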
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ccoctl aws delete --name=<name> \\ 1 --region=<aws_region> 2", "2021/04/08 17:50:41 Identity Provider object .well-known/openid-configuration deleted from the bucket <name>-oidc 2021/04/08 17:50:42 Identity Provider object keys.json deleted from the bucket <name>-oidc 2021/04/08 17:50:43 Identity Provider bucket <name>-oidc deleted 2021/04/08 17:51:05 Policy <name>-openshift-cloud-credential-operator-cloud-credential-o associated with IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:05 IAM Role <name>-openshift-cloud-credential-operator-cloud-credential-o deleted 2021/04/08 17:51:07 Policy <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials associated with IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:07 IAM Role <name>-openshift-cluster-csi-drivers-ebs-cloud-credentials deleted 2021/04/08 17:51:08 Policy <name>-openshift-image-registry-installer-cloud-credentials associated with IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:08 IAM Role <name>-openshift-image-registry-installer-cloud-credentials deleted 2021/04/08 17:51:09 Policy <name>-openshift-ingress-operator-cloud-credentials associated with IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:10 IAM Role <name>-openshift-ingress-operator-cloud-credentials deleted 2021/04/08 17:51:11 Policy <name>-openshift-machine-api-aws-cloud-credentials associated with IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:11 IAM Role <name>-openshift-machine-api-aws-cloud-credentials deleted 2021/04/08 17:51:39 Identity Provider with ARN arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com deleted", "./openshift-install destroy cluster --dir <installation_directory> \\ 1 --log-level=debug 2", "aws cloudformation delete-stack --stack-name <local_zone_stack_name>", "aws cloudformation delete-stack --stack-name <vpc_stack_name>", "aws cloudformation describe-stacks --stack-name <local_zone_stack_name>", "aws cloudformation describe-stacks --stack-name <vpc_stack_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/uninstalling-cluster-aws
Chapter 25. Getting started with flamegraphs
Chapter 25. Getting started with flamegraphs As a system administrator, you can use flamegraphs to create visualizations of system performance data recorded with the perf tool. As a software developer, you can use flamegraphs to create visualizations of application performance data recorded with the perf tool. Sampling stack traces is a common technique for profiling CPU performance with the perf tool. Unfortunately, the results of profiling stack traces with perf can be extremely verbose and labor-intensive to analyze. flamegraphs are visualizations created from data recorded with perf to make identifying hot code-paths faster and easier. 25.1. Installing flamegraphs To begin using flamegraphs , install the required package. Procedure Install the flamegraphs package: 25.2. Creating flamegraphs over the entire system This procedure describes how to visualize performance data recorded over an entire system using flamegraphs . Prerequisites flamegraphs are installed as described in installing flamegraphs . The perf tool is installed as described in installing perf . Procedure Record the data and create the visualization: This command samples and records performance data over the entire system for 60 seconds, as stipulated by use of the sleep command, and then constructs the visualization which will be stored in the current active directory as flamegraph.html . The command samples call-graph data by default and takes the same arguments as the perf tool, in this particular case: -a Stipulates to record data over the entire system. -F To set the sampling frequency per second. Verification For analysis, view the generated visualization: This command opens the visualization in the default browser: 25.3. Creating flamegraphs over specific processes You can use flamegraphs to visualize performance data recorded over specific running processes. Prerequisites flamegraphs are installed as described in installing flamegraphs . The perf tool is installed as described in installing perf . Procedure Record the data and create the visualization: This command samples and records performance data of the processes with the process ID's ID1 and ID2 for 60 seconds, as stipulated by use of the sleep command, and then constructs the visualization which will be stored in the current active directory as flamegraph.html . The command samples call-graph data by default and takes the same arguments as the perf tool, in this particular case: -a Stipulates to record data over the entire system. -F To set the sampling frequency per second. -p To stipulate specific process ID's to sample and record data over. Verification For analysis, view the generated visualization: This command opens the visualization in the default browser: 25.4. Interpreting flamegraphs Each box in the flamegraph represents a different function in the stack. The y-axis shows the depth of the stack with the topmost box in each stack being the function that was actually on-CPU and everything below it being ancestry. The x-axis displays the population of the sampled call-graph data. The children of a stack in a given row are displayed based on the number of samples taken of each respective function in descending order along the x-axis; the x-axis does not represent the passing of time. The wider an individual box is, the more frequent it was on-CPU or part of an on-CPU ancestry at the time the data was being sampled. 
Procedure To reveal the names of functions which may have not been displayed previously and further investigate the data click on a box within the flamegraph to zoom into the stack at that given location: To return to the default view of the flamegraph, click Reset Zoom . Important Boxes representing user-space functions may be labeled as Unknown in flamegraphs because the binary of the function is stripped. The debuginfo package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information. Use the -g option in GCC, to display the function names or symbols in such a situation. Additional resources Why perf displays some function names as raw functions addresses Enabling debugging with debugging information
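The recording commands above can be wrapped in a small helper when you profile the same processes repeatedly. The following Python sketch is only a convenience wrapper around the documented perf invocation; it assumes perf and js-d3-flame-graph are installed and that you run it with sufficient privileges, and the PID list and duration are examples.

# Hypothetical wrapper around the documented `perf script flamegraph` invocation.
import subprocess
import webbrowser
from pathlib import Path

def record_flamegraph(pids=None, freq=99, seconds=60):
    cmd = ["perf", "script", "flamegraph", "-a", "-F", str(freq)]
    if pids:  # add specific process IDs, as in the documented per-process example
        cmd += ["-p", ",".join(str(p) for p in pids)]
    cmd += ["sleep", str(seconds)]
    subprocess.run(cmd, check=True)                       # writes flamegraph.html to the CWD
    webbrowser.open(Path("flamegraph.html").resolve().as_uri())

if __name__ == "__main__":
    record_flamegraph(seconds=30)   # example: sample the whole system for 30 seconds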
[ "dnf install js-d3-flame-graph", "perf script flamegraph -a -F 99 sleep 60", "xdg-open flamegraph.html", "perf script flamegraph -a -F 99 -p ID1,ID2 sleep 60", "xdg-open flamegraph.html" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/getting-started-with-flamegraphs_monitoring-and-managing-system-status-and-performance
Chapter 1. Enabling Directory Server repositories
Chapter 1. Enabling Directory Server repositories Before installing Directory Server packages, you must enable Directory Server repositories. Depending on your subscription type, you can enable the following Directory Server repositories: The default Directory Server repository The Extended Update Support (EUS) repository Use the following procedure to enable Directory Server 12 and Directory Server 12 EUS repositories. Prerequisites You registered the system to the Red Hat Subscription Management service. You have a valid Red Hat Directory Server subscription in your Red Hat account. Procedure Optional: If you have not enabled the default RHEL BaseOS and AppStream repositories, you must enable these repositories: To enable the default RHEL repositories, run: To enable the RHEL EUS repositories: Optional: If you do not have the RHEL EUS release set to the minor version, for example to 9.4, run: Enable the RHEL EUS repositories: Enable the Directory Server repository: To enable the default Directory Server repository, run: To enable the Directory Server EUS repository, run: Important When you enable the Directory Server EUS repository, make sure that you have also enabled the RHEL EUS repositories. Verification List the enabled repositories: The following is the output of the command for Directory Server 12.5 and Directory Server 12.4 EUS releases: For Directory Server 12.5, the command displays: For Directory Server EUS 12.4, the command displays: Next steps Setting up an instance using the command line Setting up a new instance using the web console Additional resources Using Red Hat Subscription Manager RHDS - Which subscription is required? (Red Hat Knowledgebase) RHEL EUS Overview How to tie a system to a specific update of Red Hat Enterprise Linux? (Red Hat Knowledgebase)
[ "subscription-manager repos --enable=rhel-9-for-x86_64-appstream-rpms --enable=rhel-9-for-x86_64-baseos-rpms Repository 'rhel-9-for-x86_64-appstream-rpms' is enabled for this system. Repository 'rhel-9-for-x86_64-baseos-rpms' is enabled for this system.", "subscription-manager release --set=9.4", "subscription-manager repos --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-baseos-eus-rpms Repository 'rhel-9-for-x86_64-appstream- eus -rpms' is enabled for this system. Repository 'rhel-9-for-x86_64-baseos- eus -rpms' is enabled for this system.", "subscription-manager repos --enable=dirsrv-12-for-rhel-9-x86_64-rpms Repository 'dirsrv-12-for-rhel-9-x86_64-rpms' is enabled for this system.", "subscription-manager repos --enable=dirsrv-12-for-rhel-9-x86_64-eus-rpms Repository 'dirsrv-12-for-rhel-9-x86_64- eus -rpms' is enabled for this system.", "subscription-manager repos --list-enabled", "+----------------------------------------------------------+ Available Repositories in /etc/yum.repos.d/redhat.repo +----------------------------------------------------------+ Repo ID: rhel- 9 -for-x86_64-appstream-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) Repo URL: https://cdn.redhat.com/content/dist/rhel9/9/x86_64/appstream/os Enabled: 1 Repo ID: rhel- 9 -for-x86_64-baseos-rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Repo URL: https://cdn.redhat.com/content/dist/rhel9/9/x86_64/baseos/os Enabled: 1 Repo ID: dirsrv- 12 -for-rhel-9-x86_64-rpms Repo Name: Red Hat Directory Server 12 for RHEL 9 x86_64 (RPMs) Repo URL: https://cdn.redhat.com/content/dist/rhel9/9/x86_64/dirsrv/12/os Enabled: 1", "+----------------------------------------------------------+ Available Repositories in /etc/yum.repos.d/redhat.repo +----------------------------------------------------------+ Repo ID: rhel- 9 -for-x86_64-appstream- eus -rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) Repo URL: https://cdn.redhat.com/content/dist/rhel9/9.4/x86_64/appstream/os Enabled: 1 Repo ID: rhel- 9 -for-x86_64-baseos- eus -rpms Repo Name: Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) Repo URL: https://cdn.redhat.com/content/dist/rhel9/9.4/x86_64/baseos/os Enabled: 1 Repo ID: dirsrv- 12 -for-rhel-9-x86_64- eus -rpms Repo Name: Red Hat Directory Server 12 for RHEL 9 x86_64 (RPMs) Repo URL: https://cdn.redhat.com/content/dist/rhel9/9.4/x86_64/dirsrv/12/os Enabled: 1" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/enabling-ds-repositories_installing-rhds
Chapter 8. Machine Config Daemon metrics overview
Chapter 8. Machine Config Daemon metrics overview The Machine Config Daemon is a part of the Machine Config Operator. It runs on every node in the cluster. The Machine Config Daemon manages configuration changes and updates on each of the nodes. 8.1. Understanding Machine Config Daemon metrics Beginning with OpenShift Container Platform 4.3, the Machine Config Daemon provides a set of metrics. These metrics can be accessed using the Prometheus Cluster Monitoring stack. The following table describes this set of metrics. Some entries contain commands for getting specific logs. However, the most comprehensive set of logs is available using the oc adm must-gather command. Note Metrics marked with * in the Name and Description columns represent serious errors that might cause performance problems. Such problems might prevent updates and upgrades from proceeding. Table 8.1. MCO metrics Name Format Description Notes mcd_host_os_and_version []string{"os", "version"} Shows the OS that MCD is running on, such as RHCOS or RHEL. In case of RHCOS, the version is provided. mcd_drain_err* Logs errors received during failed drain. * While drains might need multiple tries to succeed, terminal failed drains prevent updates from proceeding. The drain_time metric, which shows how much time the drain took, might help with troubleshooting. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_pivot_err* []string{"err", "node", "pivot_target"} Logs errors encountered during pivot. * Pivot errors might prevent OS upgrades from proceeding. For further investigation, run this command to see the logs from the machine-config-daemon container: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_state []string{"state", "reason"} State of Machine Config Daemon for the indicated node. Possible states are "Done", "Working", and "Degraded". In case of "Degraded", the reason is included. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_kubelet_state* Logs kubelet health failures. * This is expected to be empty, with failure count of 0. If failure count exceeds 2, the error indicating threshold is exceeded. This indicates a possible issue with the health of the kubelet. For further investigation, run this command to access the node and see all its logs: USD oc debug node/<node> - chroot /host journalctl -u kubelet mcd_reboot_err* []string{"message", "err", "node"} Logs the failed reboots and the corresponding errors. * This is expected to be empty, which indicates a successful reboot. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon mcd_update_state []string{"config", "err"} Logs success or failure of configuration updates and the corresponding errors. The expected value is rendered-master/rendered-worker-XXXX . If the update fails, an error is present. For further investigation, see the logs by running: USD oc logs -f -n openshift-machine-config-operator machine-config-daemon-<hash> -c machine-config-daemon Additional resources About OpenShift Container Platform monitoring Gathering data about your cluster
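Outside the console, the same metrics can be pulled from the cluster monitoring stack through its Prometheus-compatible HTTP API. The sketch below is illustrative only: the route URL and bearer token are assumptions you must supply yourself (for example, a token from oc whoami -t), and only a few of the metric names from the table are queried.

# Hypothetical query of Machine Config Daemon metrics through the Prometheus HTTP API.
# PROM_URL and TOKEN are placeholders; supply your cluster's monitoring route and a bearer token.
import requests

PROM_URL = "https://<monitoring_route>"   # placeholder for the Prometheus/Thanos route
TOKEN = "<bearer_token>"                  # placeholder token with access to cluster metrics

def query(expr):
    resp = requests.get(
        f"{PROM_URL}/api/v1/query",
        params={"query": expr},
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]

if __name__ == "__main__":
    for metric in ("mcd_drain_err", "mcd_kubelet_state", "mcd_state"):
        for sample in query(metric):
            # Label names such as "node" are assumptions; print whatever labels are returned.
            print(metric, sample["metric"], sample["value"][1])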
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_configuration/machine-config-daemon-metrics
Chapter 1. Securing the Server and Its Interfaces
Chapter 1. Securing the Server and Its Interfaces 1.1. Building Blocks 1.1.1. Interfaces and socket bindings JBoss EAP utilizes its host's interfaces, for example inet-address and nic , as well as ports for communication for both its web applications as well as its management interfaces. These interfaces and ports are defined and configured through the interfaces and socket-binding-groups settings in the JBoss EAP. For more information on how to define and configure interfaces and socket-binding-groups , see the Socket Bindings section of the JBoss EAP Configuration Guide . Example: Interfaces <interfaces> <interface name="management"> <inet-address value="USD{jboss.bind.address.management:127.0.0.1}"/> </interface> <interface name="public"> <inet-address value="USD{jboss.bind.address:127.0.0.1}"/> </interface> </interfaces> Example: Socket Binding Group <socket-binding-group name="standard-sockets" default-interface="public" port-offset="USD{jboss.socket.binding.port-offset:0}"> <socket-binding name="management-http" interface="management" port="USD{jboss.management.http.port:9990}"/> <socket-binding name="management-https" interface="management" port="USD{jboss.management.https.port:9993}"/> <socket-binding name="ajp" port="USD{jboss.ajp.port:8009}"/> <socket-binding name="http" port="USD{jboss.http.port:8080}"/> <socket-binding name="https" port="USD{jboss.https.port:8443}"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group> 1.1.2. Elytron Subsystem 1.1.2.1. Enable Elytron Security Across the Server There is a simple way to enable Elytron across the server. JBoss EAP 7.1 introduced an example configuration script that enables Elytron as the security provider. This script resides in the EAP_HOME /docs/examples directory in the server installation. Execute the following command to enable Elytron security across the server. 1.1.2.2. Create an Elytron Security Domain Security domains in the elytron subsystem, when used in conjunction with security realms, are used for both core management authentication as well as for authentication with applications. Important Deployments are limited to using one Elytron security domain per deployment. Scenarios that may have required multiple legacy security domains can now be accomplished using a single Elytron security domain. Add a Security Domain Using the Management CLI Add a Security Domain Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Other Settings and click View . Select SSL Security Domain and use the Add button to configure a new security domain. 1.1.2.3. Create an Elytron Security Realm Security realms in the elytron subsystem, when used in conjunction with security domains, are used for both core management authentication as well as for authentication with applications. Security realms are also specifically typed based on their identity store, for example jdbc-realm , filesystem-realm , properties-realm , etc. Add a Security Realm Using the Management CLI Examples of adding specific realms, such as jdbc-realm , filesystem-realm , and properties-realm can be found in sections. Add a Security Realm Using the Management Console Access the management console. 
For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Security Realms and click View . Select the appropriate security realm type from the Security Realm tab and click Add to configure a new security realm. 1.1.2.4. Create an Elytron Role Decoder A role decoder converts attributes from the identity provided by the security realm into roles. Role decoders are also specifically typed based on their functionality, for example empty-role-decoder , simple-role-decoder , and custom-role-decoder . Add a Role Decoder Using the Management CLI Add a Role Decoder Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Mappers / Decoders and click View . Click on Role Decoder , select the appropriate role decoder type and click Add to configure a new role decoder. 1.1.2.5. Adding a source-address-role-decoder to the elytron subsystem You can use either the management CLI or the Management Console to add the source-address-role-decoder role decoder to the elytron subsystem. By configuring this role decoder in the mappers element, you make use of the IP address of a client when making authorization decisions. The source-address-role-decoder extracts the IP address of a client and checks if it matches the IP address specified in the pattern attribute or the source-address attribute. If the IP address of the client matches the IP address specified in either attribute then elytron uses the roles attribute to assign roles to the user. Note The procedure uses the management CLI to add source-address-role-decoder to the mappers element in the elytron subsystem. If you want to use the Management Console to complete this task, refer to the link provided in the Additional resources section. Prerequisites Note the IP address of the server's client. Procedure In the elytron subsystem, use the management CLI to add source-address-role-decoder . For a source-address-role-decoder , you must specify an IP address and at least one role for a user. Example of adding the source-address-role-decoder to the mappers element: The example shows a configured source-address-role-decoder , named as decoder1 . When a client attempts to connect to a server, the elytron subsystem uses the source-address-role-decoder to check that the client's IP address matches the IP address that was specified in either the pattern attribute or the source-address attribute. In the example, the source-address-role-decoder checks if the client's IP address is 10.10.10.10 . If the client's IP address is 10.10.10.10 then elytron uses the roles attribute to assign the Administrator role to the user. Note You can configure a source-address-role-decoder to assign specific roles to a user who needs to establish connections from different networks. In the security-domain , reference the configured source-address-role-decoder in the role-decoder attribute. This ensures that an Elytron security domain uses source-address-role-decoder when making authorization decisions. Example of referencing a configured source-address-role-decoder , decoder1 , in the role-decoder attribute: Additional resources For information about adding a role decoder with the management console, see Elytron Subsystem . For information about the elytron subsystem, see Elytron Subsystem in the Security Architecture guide. 1.1.3. 
Configuring an aggregate-role-decoder to the elytron subsystem The aggregate-role-decoder consists of two or more role decoders. You can use an aggregate-role-decoder to aggregate the roles returned from each role decoder. Prerequisites Configure at least two role decoders in the elytron subsystem. Procedure Add at least two role decoders to the aggregate-role-decoder role decoder. Example of adding decoder1 and decoder2 to the aggregate-role-decoder role decoder: Additional resources For information about available role decoders in the elytron subsystem, see Resources in the Elytron Subsystem in the Security Architecture guide. For information about creating a role decoder, see Elytron Subsystem . 1.1.3.1. Create an Elytron Role Mapper A role mapper maps roles after they have been decoded to other roles. Examples include normalizing role names or adding and removing specific roles from principals after they have been decoded. Role mappers are also specifically typed based on their functionality, for example add-prefix-role-mapper , add-suffix-role-mapper , and constant-role-mapper . Adding a Role Mapper Takes the General Form Adding a Role Mapper Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Mappers / Decoders and click View . Click on Role Mapper , select the appropriate role mapper type and click Add to configure a new role mapper. 1.1.3.2. Create an Elytron Permission Set Permission sets can be used to assign permissions to an identity. Add a Permission Set Using the Management CLI The permissions parameter consists of a set of permissions, where each permission has the following attributes: class-name is the fully qualified class name of the permission. This is the only permission attribute that is required. module is an optional module used to load the permission. target-name is an optional target name passed to the permission as it is constructed. action is an optional action passed to the permission as it is constructed. 1.1.3.3. Create an Elytron Permission Mapper In addition to roles being assigned to a identity, permissions may also be assigned. A permission mapper assigns permissions to an identity. Permission mappers are also specifically typed based on their functionality, for example logical-permission-mapper , simple-permission-mapper , and custom-permission-mapper . Add a Permission Mapper Using the Management CLI Add a Permission Mapper Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Mappers / Decoders and click View . Click on Principal Decoder , select the appropriate principal decoder type and click Add to configure a new principal decoder. 1.1.3.4. Creating an Authentication Configuration An authentication configuration contains the credentials to use when making a connection. For more information on authentication configurations, see Configure Client Authentication with Elytron Client in How to Configure Identity Management for JBoss EAP. Note Instead of a credential store, you can configure an Elytron security domain to use the credentials of the accessing user. For instance, a security domain can be used in conjunction with Kerberos for authenticating incoming users. 
Follow the instructions in Configure the Elytron Subsystem in How to Set Up SSO with Kerberos for JBoss EAP, and set obtain-kerberos-ticket=true in the Kerberos security factory. Add an Authentication Configuration Using the Management CLI Add an Authentication Configuration Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Other Settings and click View . Click on Authentication Authentication Configuration and click Add to configure a new authentication configuration. For the full list of authentication-configuration attributes, see Elytron Subsystem Components Reference . 1.1.3.5. Creating an Authentication Context An authentication context contains a set of rules and either authentication configurations or SSL contexts to use for establishing a connection. For more information on authentication contexts, see Configure Client Authentication with Elytron Client in How to Configure Identity Management for JBoss EAP. Add an Authentication Context Using the Management CLI An authentication context can be created using the following management CLI command. Typically, an authentication context will contain a set of rules and either an authentication configuration or an SSL context. The following CLI command demonstrates defining an authentication context that only functions when the hostname is localhost . Add an Authentication Context Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Other Settings and click View . Click on Authentication Authentication Context and click Add to configure a new authentication context. For the full list of authentication-context attributes, see Elytron Subsystem Components Reference . 1.1.3.6. Create an Elytron Authentication Factory An authentication factory is an authentication policy used for specific authentication mechanisms. Authentication factories are specifically based on the authentication mechanism, for example http-authentication-factory , sasl-authentication-factory and kerberos-security-factory . Add an Authentication Factory Using the Management CLI Add an Authentication Factory Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Factories / Transformers and click View . Click on HTTP Factories , SASL Factories , or Other Factories , choose the appropriate factory type, and click Add to configure a new factory. 1.1.3.7. Create an Elytron Keystore A key-store is the definition of a keystore or truststore including the type of keystore, its location, and the credential for accessing it. To generate an example keystore for use with the elytron subsystem, use the following command: USD keytool -genkeypair -alias localhost -keyalg RSA -keysize 1024 -validity 365 -keystore keystore.jks -dname "CN=localhost" -keypass secret -storepass secret Add a Keystore Using the Management CLI To define a key-store in Elytron that references the newly made keystore, execute the following management CLI command. This command specifies the path to the keystore, relative to the file system path provided, the credential reference used for accessing the keystore, and the type of keystore.
Note The above command uses relative-to to reference the location of the keystore file. Alternatively, you can specify the full path to the keystore in path and omit relative-to . Add a Keystore Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Other Settings and click View . Click on Stores Key Store and click Add to configure a new keystore. 1.1.3.8. Create an Elytron Key Manager A key-manager references a key-store , and is used in conjunction with an SSL context. Add a Key Manager Using the Management CLI The following command specifies the underlying keystore to reference, the algorithm to use when initializing the key manager, and the credential reference for accessing the entries in the underlying keystore. Important Red Hat did not specify the algorithm attribute in the command, because the Elytron subsystem uses KeyManagerFactory.getDefaultAlgorithm() to determine an algorithm by default. However, you can specify the algorithm attribute. To specify the algorithm attribute, you need to know what key manager algorithms are provided by the JDK you are using. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. In the command you can specify SunX509 as the key manager algorithm attribute. Add a Key Manager Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Other Settings and click View . Click on SSL Key Manager and click Add to configure a new key manager. 1.1.3.9. Create an Elytron Truststore To create a truststore in Elytron execute the following CLI command. In order to successfully execute the command above you must have an application.truststore file inside your EAP_HOME /standalone/configuration directory. The truststore must contain the certificates associated with the endpoint or a certificate chain in case the end point's certificate is signed by a CA. Red Hat recommends you to avoid using self-signed certificates. Ideally, certificates should be signed by a CA and your truststore should contain a certificate chain representing your ROOT and intermediary CAs. 1.1.3.10. Create an Elytron Trust Manager To define a trust manager in Elytron execute the following CLI command. This sets the defined truststore as the source of the certificates that the application server trusts. 1.1.3.11. Using the Out of the Box Elytron Components JBoss EAP provides a default set of Elytron components configured in the elytron subsystem. You can find more details on these pre-configured components in the Out of the Box section of the Security Architecture guide. 1.1.3.11.1. Securing Management Interfaces You can find more details on the enabling JBoss EAP to use the out of the box Elytron components for securing the management interfaces in the User Authentication with Elytron section. 1.1.3.11.2. Securing Applications The elytron subsystem provides application-http-authentication for http-authentication-factory by default, which can be used to secure applications. For more information on how to configure application-http-authentication , see the Out of the Box section of the Security Architecture guide. 
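As an illustration, deployments can be pointed at this default factory through an application security domain in the undertow subsystem; the name exampleApplicationDomain below is a placeholder:
/subsystem=undertow/application-security-domain=exampleApplicationDomain:add(http-authentication-factory=application-http-authentication)
Web applications that declare a matching security domain name are then authenticated by this factory.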
To configure applications to use application-http-authentication , see Configure Web Applications to Use Elytron or Legacy Security for Authentication in How to Configure Identity Management Guide . You can also override the default behavior of all applications using the steps in the Override an Application's Authentication Configuration section of the JBoss EAP How to Configure Identity Management Guide . 1.1.3.11.3. Using SSL/TLS JBoss EAP does provide a default one-way SSL/TLS configuration using the legacy core management authentication, but it does not provide one in the elytron subsystem. You can find more details on configuring SSL/TLS using the elytron subsystem for both the management interfaces as well as for applications in the following sections: Enable One-way SSL/TLS for the Management Interfaces Using the Elytron Subsystem Enable Two-Way SSL/TLS for the Management Interfaces using the Elytron Subsystem Enable One-way SSL/TLS for Applications using the Elytron Subsystem Enable Two-Way SSL/TLS for Applications using the Elytron Subsystem 1.1.3.11.4. Using Elytron with Other Subsystems In addition to securing applications and management interfaces, Elytron also integrates with other subsystems in JBoss EAP. batch-jberet You can configure the batch-jberet subsystem to run batch jobs using an Elytron security domain. For more information, see Configure Security for Batch Jobs in the Configuration Guide . datasources You can use a credential store or an Elytron security domain to provide authentication information in a datasource definition. For more information, see Datasource Security in the Configuration Guide . ejb3 You can create mappings for Elytron security domains in the ejb3 subsystem to be referenced by deployments. For more information, see Elytron Integration with the EJB Subsystem in Developing Jakarta Enterprise Beans Applications . iiop-openjdk You can use the elytron subsystem to configure SSL/TLS between clients and servers using the iiop-openjdk subsystem. For more information, see Configure IIOP to use SSL/TLS with the Elytron Subsystem in the Configuration Guide . jca You can use the elytron-enabled attribute to enable Elytron security for a work manager. For more information, see Configuring the JCA Subsystem in the Configuration Guide . jgroups You can configure the SYM_ENCRYPT and ASYM_ENCRYPT protocols to reference keystores or credential references defined in the elytron subsystem. For more information, see Securing a Cluster in the Configuration Guide . mail You can use a credential store to provide authentication information in a server definition in the mail subsystem. For more information, see Use a Credential Store for Passwords in the Configuration Guide . messaging-activemq You can secure remote connections to the remote connections used by the messaging-activemq subsystem. For more information, see the Using the Elytron Subsystem section of Configuring Messaging . modcluster You can use an Elytron client ssl-context to communicate with a load balancer using SSL/TLS. For more information, see Elytron Integration with the ModCluster Subsystem . remoting You can configure inbound and outbound connections in the remoting subsystem to reference authentication contexts, SASL authentication factories, and SSL contexts defined in the elytron subsystem. For full details on configuring each type of connection, see Elytron Integration with the Remoting Subsystem . resource-adapters You can secure connections to the resource adapter using Elytron. 
You can enable security inflow to establish security credentials when submitting work to be executed by the work manager. For more information, see Configure Resource Adapters to Use the Elytron Subsystem in the Configuration Guide . undertow You can use the elytron subsystem to configure both SSL/TLS and application authentication. For more information on configuring application authentication, see Using SSL/TLS and Configure Web Applications to Use Elytron or Legacy Security for Authentication in How to Configure Identity Management . 1.1.3.12. Enable and Disable the Elytron Subsystem The elytron subsystem comes pre-configured with the default JBoss EAP profiles alongside the legacy security subsystem. If you are using a profile where the elytron subsystem has not been configured, you can add it by adding the elytron extension and enabling the elytron subsystem. To add the elytron extension required for the elytron subsystem: To enable the elytron subsystem in JBoss EAP: To disable the elytron subsystem in JBoss EAP: Important Other subsystems within JBoss EAP may have dependencies on the elytron subsystem. If these dependencies are not resolved before disabling it, you will see errors when starting JBoss EAP. 1.1.4. Legacy Security Subsystem 1.1.4.1. Disabling the security subsystem You can disable the security subsystem in JBoss EAP by executing the remove operation of the subsystem. Procedure Disable the security subsystem in JBoss EAP: Important Other subsystems within JBoss EAP may have dependencies on the security subsystem. If these dependencies are not resolved before disabling it, you will see errors when starting JBoss EAP. 1.1.4.2. Enabling the security subsystem You can enable the security subsystem in JBoss EAP by executing the add operation of the subsystem. Procedure Enable the security subsystem in JBoss EAP: 1.1.5. Legacy security realms JBoss EAP uses security realms to define authentication and authorization mechanisms, such as local, LDAP properties, which can then be used by the management interfaces. Example: Security realms <security-realms> <security-realm name="ManagementRealm"> <authentication> <local default-user="USDlocal" skip-group-loading="true"/> <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/> </authentication> <authorization map-groups-to-roles="false"> <properties path="mgmt-groups.properties" relative-to="jboss.server.config.dir"/> </authorization> </security-realm> <security-realm name="ApplicationRealm"> <authentication> <local default-user="USDlocal" allowed-users="*" skip-group-loading="true"/> <properties path="application-users.properties" relative-to="jboss.server.config.dir"/> </authentication> <authorization> <properties path="application-roles.properties" relative-to="jboss.server.config.dir"/> </authorization> </security-realm> </security-realms> Note In addition to updating the existing security realms, JBoss EAP also allows you to create new security realms. You can create new security realms via the management console as well as invoking the following command from the management CLI: If you create a new security realm and want to use a properties file for authentication or authorization, you must create a new properties file specifically for the new security domain. JBoss EAP does not reuse existing files used by other security domains nor does it automatically create new files specified in the configuration if they do not exist. 
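For reference, the management CLI command mentioned above for creating a new, empty security realm takes a form similar to the following, where TestRealm is a placeholder name:
/core-service=management/security-realm=TestRealm:add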
Additional resources For more information on security realms, see Security Realms . 1.1.6. Using authentication and socket bindings for securing the management interfaces You can use a combination of socket-binding , http-authentication-factory , and http-upgrade to secure the management interfaces using the elytron subsystem. Alternatively, you can use socket-binding with security-realm to secure the management interfaces with the legacy core management authentication. You can also disable the management interfaces, and configure users of the interfaces to have various roles and access rights. By default, JBoss EAP defines an http-interface to connect to the management interfaces. Procedure Display server management interfaces settings: 1.2. How to Secure the Management Interfaces The following sections show how to perform various operations related to securing the JBoss EAP management interfaces and related subsystems. Note The management CLI commands shown assume that you are running a JBoss EAP standalone server. For more details on using the management CLI for a JBoss EAP managed domain, see the JBoss EAP Management CLI Guide . Elytron Integration with the Management CLI The management interfaces can be secured using resources from the elytron subsystem in the same way as it is done by the legacy security realms. The SSL configuration for connections comes from one of these locations: Any SSL configuration within the CLI specific configuration. The default SSL configuration that automatically prompts users to accept the server's certificate. The java system property. Client configuration can be modified using the wildfly-config.xml file. Note If you set the -Dwildfly.config.url property, any file can be used by the client for configuration. 1.2.1. Configure networking and ports Depending on the configuration of the host, JBoss EAP may be configured to use various network interfaces and ports. This allows JBoss EAP to work with different host, networking, and firewall requirements. Additional resources For more information on the networking and ports used by JBoss EAP, as well as how to configure these settings, see the Network and Port Configuration section of the JBoss EAP Configuration Guide . 1.2.2. Disabling the management console Other clients, such as JBoss Operations Network, operate using the HTTP interface for managing JBoss EAP. In order to continue using these services, just the web-based management console itself may be disabled. This is accomplished by setting the console-enabled attribute to false . Procedure To disable the web-based management console in JBoss EAP: 1.2.3. Disabling remote access to JMX Remote access to the jmx subsystem allows for JDK and application management operations to be triggered remotely. Procedure To disable remote access to JMX in JBoss EAP, remove the remoting connector in the jmx subsystem: 1.2.4. Silent authentication The default installation of JBoss EAP contains a method of silent authentication for a local management CLI user. This allows the local user the ability to access the management CLI without username or password authentication. This functionality can be enabled to allow local users run the management CLI scripts without requiring authentication. It is considered a useful feature given that access to the local configuration typically also gives the user the ability to add their own user details or otherwise disable security checks. Silent authentication can be disabled where greater security control is required. 
This can be achieved by removing the local element within the security-realm attribute of the configuration file. This is applicable to both standalone instance as well as managed domain. Important The removal of the local element should only be done if the impact on the JBoss EAP instance and its configuration is fully understood. Procedure To remove silent authentication when using the elytron subsystem: To remove silent authentication when using a legacy security realm: 1.2.5. One-way SSL/TLS for the management interfaces using the Elytron subsystem In JBoss EAP, you can enable one-way SSL/TLS for the management interfaces using the JBoss EAP management CLI or the management console. In the management CLI, one-way SSL/TLS can be enabled in two ways: Using security command . Using elytron subsystem commands . In the management console, one-way SSL/TLS can be enabled in as follows: Using the management console 1.2.5.1. Enabling one-way SSL/TLS using a security command The security enable-ssl-management command can be used to enable one-way SSL/TLS for the management interfaces. Procedure Enter the security enable-ssl-management --interactive command in the CLI. Example Note Once the command is executed, the management CLI will reload the server and reconnect to it. You can disable one-way SSL/TLS for the management interfaces using the disable-ssl-management command. This command does not delete the Elytron resources. It configures the system to use the ApplicationRealm legacy security realm for its SSL configuration. 1.2.5.2. Enabling one-way SSL/TLS using the Elytron subsystem commands You can enable one-way SSL/TLS for the management interfaces using the elytron subsystem commands. Procedure Configure a key-store . Note The above command uses relative-to to reference the location of the keystore file. Alternatively, you can specify the full path to the keystore in path and omit relative-to . If the keystore file does not exist yet, the following commands can be used to generate an example key pair: Create a key-manager and server-ssl-context . Important Red Hat did not specify the algorithm attribute in the command, because the Elytron subsystem uses KeyManagerFactory.getDefaultAlgorithm() to determine an algorithm by default. However, you can specify the algorithm attribute. To specify the algorithm attribute, you need to know what key manager algorithms are provided by the JDK you are using. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. In the command you can specify SunX509 as the key manager algorithm attribute. You also need to determine what HTTPS protocols you want to support. The example commands above use TLSv1.2 . You can use the cipher-suite-filter to specify cipher suites, and the use-cipher-suites-order argument to honor server cipher suite order. The use-cipher-suites-order attribute by default is set to true . This differs from the legacy security subsystem behavior, which defaults to honoring client cipher suite order. Enable HTTPS on the management interface. Reload the JBoss EAP instance. One-way SSL/TLS is now enabled for the management interfaces. Important In cases where you have both a security-realm and ssl-context defined, JBoss EAP will use the SSL/TLS configuration provided by ssl-context . Additional resources key-store Attributes key-manager Attributes server-ssl-context Attributes 1.2.5.3. 
Enabling one-way SSL/TLS using the management console You can enable SSL for the management interface used by the management console using an SSL wizard in the management console. Procedure Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Runtime , click the appropriate server name. Click View to server name. Click HTTP Manageme... to open the HTTP Management Interface configuration page. Click Enable SSL to launch the wizard. The wizard guides you through the following scenarios for enabling SSL: You want to create a certificate store and generate a self-signed certificate. You want to obtain a certificate from Let's Encrypt Certificate Authority. You already have the certificate store on the file system, but no keystore configuration. You already have a keystore configuration that uses a valid certificate store. Using the wizard, you can optionally create a truststore for mutual authentication. 1.2.6. Two-way SSL/TLS for the management interfaces using the Elytron Subsystem In JBoss EAP, two-way SSL/TLS for the management interfaces can be enabled either by using a security command or by using the elytron subsystem commands. To enable two-way SSL/TLS, first you must obtain or generate a client certificate. You can generate a client certificate by using the following procedure: Generating client certificates You can then enable two-way SSL/TLS for the management interfaces using one of the following methods: Enabling two-way SSL/TLS using a security command Enabling two-way SSL/TLS using the Elytron subsystem commands 1.2.6.1. Generating client certificates You can generate client certificates using the keytool command in the CLI. Procedure Generate your client certificate: USD keytool -genkeypair -alias client -keyalg RSA -keysize 1024 -validity 365 -keystore client.keystore.jks -dname "CN=client" -keypass secret -storepass secret Export the client certificate: USD keytool -exportcert -keystore client.keystore.jks -alias client -keypass secret -storepass secret -file /path/to/client.cer 1.2.6.2. Enabling two-way SSL/TLS using a security command The security enable-ssl-management command can be used to enable two-way SSL/TLS for the management interfaces. Note The following example does not validate the certificate as no chain of trust exists. If you are using a trusted certificate, then the client certificate can be validated without issue. Prerequisites You have configured a client keystore. You have exported a certificate for a server trust store. For more information, see Generating client certificates . Procedure Enter the security enable-ssl-management --interactive command in the CLI. Example Note Once the command is executed, the management CLI will reload the server and attempt to reconnect to it. To complete the two-way SSL/TLS authentication, you need to import the server certificate into the client truststore and configure your client to present the client certificate. You can disable two-way SSL/TLS for the management interfaces using the disable-ssl-management command. security disable-ssl-management This command does not delete the Elytron resources. It configures the system to use the ApplicationRealm legacy security realm for its SSL configuration. 1.2.6.3. Enabling two-way SSL/TLS using the Elytron subsystem commands You can use the elytron subsystem commands to enable two-way SSL/TLS for the management interfaces. 
Prerequisites You have exported a certificate for a server trust store. For more information, see Generating client certificates . Procedure Obtain or generate your keystore. Before enabling one-way SSL/TLS in JBoss EAP, you must obtain or generate the keystores, truststores and certificates you plan on using. To generate an example set of keystores, truststores, and certificates, use the following commands. Configure a key-store . Note The above command uses relative-to to reference the location of the keystore file. Alternatively, you can specify the full path to the keystore in path and omit relative-to . Export your server certificate. Create a key-store for the server trust store and import the client certificate into the server truststore. Note The following example does not validate the certificate as no chain of trust exists. If you are using a trusted certificate, then the client certificate can be validated without issue. Configure a key-manager , trust-manager , and server-ssl-context for the server keystore and truststore. Important Red Hat did not specify the algorithm attribute in the command, because the Elytron subsystem uses KeyManagerFactory.getDefaultAlgorithm() and TrustManagerFactory.getDefaultAlgorithm() to determine an algorithm by default. However, you can specify the algorithm attribute. To specify the algorithm attribute, you need to know what key manager algorithms are provided by the JDK you are using. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. In the command you can specify SunX509 as the key manager algorithm attribute and PKIX as the trust manager algorithm attribute. You also need to determine what HTTPS protocols you want to support. The example commands above use TLSv1.2 . You can use the cipher-suite-filter to specify cipher suites, and the use-cipher-suites-order argument to honor server cipher suite order. The use-cipher-suites-order attribute by default is set to true . This differs from the legacy security subsystem behavior, which defaults to honoring client cipher suite order. Enable HTTPS on the management interface. Reload the JBoss EAP instance. Note To complete the two-way SSL/TLS authentication, you need to import the server certificate into the client truststore and configure your client to present the client certificate. Configure your client to use the client certificate. You need to configure your client to present the trusted client certificate to the server to complete the two-way SSL/TLS authentication. For example, if using a browser, you need to import the trusted certificate into the browser's trust store. This results in a forced two-way SSL/TLS authentication, without changing the original authentication to the server management. If you want to change the original authentication method, see Configure Authentication with Certificates in How to Configure Identity Management for JBoss EAP. Two-way SSL/TLS is now enabled for the management interfaces. Important In cases where you have both a security-realm and ssl-context defined, JBoss EAP will use the SSL/TLS configuration provided by ssl-context . Additional resources key-store Attributes key-manager Attributes server-ssl-context Attributes trust-manager Attributes 1.2.7. SASL authentication for the management interfaces using the CLI security command You can use the CLI security command to enable and disable SASL authentication for the management interfaces. You can also use the command to reorder SASL mechanisms. 
Enable SASL authentication In JBoss EAP, SASL authentication, using an elytron SASL authentication factory, can be enabled for the management interfaces with the security enable-sasl-management command. This command creates all of the non-existing resources required to configure authentication. By default this command associates the included SASL factory with the http-interface . Example: Enable SASL Authentication Note Once the command is executed, the management CLI will reload the server and reconnect to it. If a SASL factory already exists, then the factory is updated to use the mechanism defined by the --mechanism argument. For a list of arguments, see Authorization Security Arguments . Disable SASL authentication To remove the active SASL authentication factory use the following command: Alternatively, to remove specific mechanisms from the active SASL authentication factory, use the following command: Reorder SASL mechanisms The order of defined SASL mechanisms dictate how the server authenticates the request, with the first matching mechanism being sent to the client. You can change this order by passing a comma-separated to to the security reorder-sasl-management command, for example: Additional resources Security Authorization Arguments 1.2.8. HTTP authentication for the management interfaces using the CLI security command You can use the CLI security command to enable and disable HTTP authentication for the management interfaces. Enable HTTP authentication In JBoss EAP, HTTP authentication, using an elytron HTTP authentication factory, can be enabled for the management interfaces with the security enable-http-auth-management command. This command can only target the http-interface , and with no additional arguments the included HTTP authentication factory will be associated with this interface. Example: Enable HTTP Authentication Note Once the command is executed, the management CLI will reload the server and reconnect to it. If an HTTP factory already exists, then the factory is updated to use the mechanism defined by the --mechanism argument. For a list of arguments, see Authorization Security Arguments . Disable HTTP Authentication To remove the active HTTP authentication factory use the following command. Alternatively, you can use the following command to remove specific mechanisms from the active HTTP authentication factory. Additional resources Authorization Security Arguments 1.2.9. Configuring the management interfaces for one-way SSL/TLS with legacy core management authentication Configuring the JBoss EAP management interfaces for communication only using one-way SSL/TLS provides increased security. All network traffic between the client and the management interfaces is encrypted, which reduces the risk of security attacks such as a man-in-the-middle attack. In this procedure unencrypted communication with the JBoss EAP instance is disabled. This procedure applies to both standalone server and managed domain configurations. For a managed domain, prefix the management CLI commands with the name of the host, for example: /host=master . Important While performing the steps for enabling one-way SSL/TLS on the management interfaces, do not reload the configuration unless explicitly instructed. Doing so may cause you to be locked out of the management interfaces. Create a keystore to secure the management interfaces. For more information, see Creating a keystore to secure the management interfaces . Ensure the management interfaces bind to HTTPS. 
For more information, see Ensuring the management interfaces bind to HTTPS . Optional: Implement a custom socket-binding-group . For more information, see Custom socket-binding-group . Create a new security realm. For more information, see Creating a new security realm . Configure the management interfaces to use the new security realm. For more information, see Configuring the management interfaces to use a security realm . Configure the management interfaces to use the keystore. For more information, see Configuring the management interfaces to use a keystore . Update the jboss-cli.xml . For more information, see Updating the jboss-cli.xml file . 1.2.9.1. Creating a keystore to secure the management interfaces Create a keystore to secure the management interfaces. This keystore must be in JKS format as the management interfaces are not compatible with keystores in JCEKS format. Procedure Create a keystore using the following CLI command: Replace the example values for the parameters, for example alias , keypass , keystore , storepass and dname , with the correct values for the environment. USD keytool -genkeypair -alias appserver -storetype jks -keyalg RSA -keysize 2048 -keypass password1 -keystore EAP_HOME /standalone/configuration/identity.jks -storepass password1 -dname "CN=appserver,OU=Sales,O=Systems Inc,L=Raleigh,ST=NC,C=US" -validity 730 -v Note The parameter validity specifies for how many days the key is valid. A value of 730 equals two years. 1.2.9.2. Ensuring the management interfaces bind to HTTPS Configure JBoss EAP to ensure management interfaces bind to HTTPS. Procedure Configuration when running a Standalone Server To ensure the management interfaces bind to HTTPS, you must add the management-https configuration and remove the management-http configuration. Use the following CLI commands to bind the management interfaces to HTTPS: Configuration when running a Managed Domain Change the socket element within the management-interface attribute by adding secure-port and removing port configuration. Use the following commands to bind the management interfaces to HTTPS: 1.2.9.3. Custom socket-binding-group If you want to use a custom socket-binding-group , you must ensure the management-https binding is defined, which by default is bound to port 9993 . You can verify this from the socket-binding-group attribute of the server's configuration file or using the management CLI: 1.2.9.4. Creating a new security realm Create a new security realm. In this procedure, the new security realm using HTTPS, ManagementRealmHTTPS , uses a properties file named https-mgmt-users.properties located in the EAP_HOME /standalone/configuration/ directory for storing user names and passwords. Procedure Create a properties file for storing user name and passwords. User names and passwords can be added to the file later, but for now, you need to create an empty file named https-mgmt-users.properties and save it to that location. The below example shows using the touch command, but you may also use other mechanisms, such as a text editor. Example: Using the touch Command to Create an Empty File USD touch EAP_HOME /standalone/configuration/https-mgmt-users.properties , use the following management CLI commands to create a new security realm named ManagementRealmHTTPS : Add users to the properties file. At this point, you have created a new security realm and configured it to use a properties file for authentication. 
You must now add users to that properties file using the add-user script, which is available in the EAP_HOME /bin/ directory. When running the add-user script, you must specify both the properties file and the security realm using the -up and -r options respectively. From there, the add-user script will interactively prompt you for the user name and password information to store in the https-mgmt-users.properties file. USD EAP_HOME /bin/add-user.sh -up EAP_HOME /standalone/configuration/https-mgmt-users.properties -r ManagementRealmHTTPS ... Enter the details of the new user to add. Using realm 'ManagementRealmHTTPS' as specified on the command line. ... Username : httpUser Password requirements are listed below. To modify these restrictions edit the add-user.properties configuration file. - The password must not be one of the following restricted values {root, admin, administrator} - The password must contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s) - The password must be different from the username ... Password : Re-enter Password : About to add user 'httpUser' for realm 'ManagementRealmHTTPS' ... Is this correct yes/no? yes .. Added user 'httpUser' to file 'EAP_HOME/configuration/https-mgmt-users.properties' ... Is this new user going to be used for one AS process to connect to another AS process? e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls. yes/no? no Important When configuring security realms that use properties files to store usernames and passwords, it is recommended that each realm use a distinct properties file that is not shared with another realm. 1.2.9.5. Configuring the management interfaces to use a security realm You can configure the management interfaces to use a security realm by using a management CLI command. Procedure Use the following management CLI command: 1.2.9.6. Configuring the management interfaces to use a keystore Configure the management interfaces to use a keystore by using management CLI commands. Procedure Use the following management CLI command to configure the management interfaces to use the keystore. For the parameters file, password and alias their values must be copied from the Create a Keystore to Secure the Management Interfaces step. Note To update the keystore password, use the following CLI command: Reload the server's configuration: After reloading the server configuration, the log should contain the following, just before the text which states the number of services that are started: 13:50:54,160 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0061: Http management interface listening on https://127.0.0.1:9993/management 13:50:54,162 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0052: Admin console listening on https://127.0.0.1:9993 The management interfaces are now listening on port 9993 , which confirms that the procedure was successful. Important At this point, the CLI will disconnect and will be unable to reconnect since the port bindings have changed. Proceed to the step to update the jboss-cli.xml file to allow the management CLI to reconnect. 1.2.9.7. Updating the jboss-cli.xml file If using the management CLI to perform management actions, you must update the EAP_HOME /bin/jboss-cli.xml file. Procedure Update the EAP_HOME /bin/jboss-cli.xml file as following: Update the value of <default-protocol> to https-remoting . In <default-controller> , update the value of <protocol> to https-remoting . 
In <default-controller> , update the value of <port> to 9993 . Example: jboss-cli.xml <jboss-cli xmlns="urn:jboss:cli:2.0"> <default-protocol use-legacy-override="true">https-remoting</default-protocol> <!-- The default controller to connect to when 'connect' command is executed w/o arguments --> <default-controller> <protocol>https-remoting</protocol> <host>localhost</host> <port>9993</port> </default-controller> ... The time you connect to the management interface using the management CLI, you must accept the server certificate and authenticate against the ManagementRealmHTTPS security realm: Example: Accepting Server Certificate and Authenticating USD ./jboss-cli.sh -c Unable to connect due to unrecognised server certificate Subject - CN=appserver,OU=Sales,O=Systems Inc,L=Raleigh,ST=NC,C=US Issuer - CN=appserver, OU=Sales, O=Systems Inc, L=Raleigh, ST=NC, C=US Valid From - Tue Jun 28 13:38:48 CDT 2016 Valid To - Thu Jun 28 13:38:48 CDT 2018 MD5 : 76:f4:81:8b:7e:c3:be:6d:ee:63:c1:7a:b7:b8:f0:fb SHA1 : ea:e3:f1:eb:53:90:69:d0:c9:69:4a:5a:a3:20:8f:76:c1:e6:66:b6 Accept certificate? [N]o, [T]emporarily, [P]ermenantly : p Authenticating against security realm: ManagementRealmHTTPS Username: httpUser Password: [standalone@localhost:9993 /] Important In cases where you have both a security-realm and ssl-context defined, JBoss EAP will use the SSL/TLS configuration provided by ssl-context . 1.2.10. Setting up two-way SSL/TLS for the management interfaces with legacy core management authentication Two-way SSL/TLS authentication, also known as client authentication , authenticates both the client and the server using SSL/TLS certificates. This differs from the Configure the Management Interfaces for One-way SSL/TLS section in that both the client and server each have a certificate. This provides assurance that not only is the server who it says it is, but the client is also who it says it is. In this section the following conventions are used: HOST1 The JBoss server hostname. For example: jboss.redhat.com . HOST2 A suitable name for the client. For example: myclient . Note this is not necessarily an actual hostname. CA_HOST1 The DN (distinguished name) to use for the HOST1 certificate. For example: cn=jboss,dc=redhat,dc=com . CA_HOST2 The DN (distinguished name) to use for the HOST2 certificate. For example: cn=myclient,dc=redhat,dc=com . Note If a password vault is used to store the keystore and truststore passwords, which is recommended, the password vault should already be created. For more information on the password vault, see the Password Vault section as well as the Password Vault System section of the Red Hat JBoss Enterprise Application Platform 7 Security Architecture guide. Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. Procedure Generate the keystores. USD keytool -genkeypair -alias HOST1_alias -keyalg RSA -keysize 1024 -validity 365 -keystore HOST1.keystore.jks -dname "CA_HOST1" -keypass secret -storepass secret USD keytool -genkeypair -alias HOST2_alias -keyalg RSA -keysize 1024 -validity 365 -keystore HOST2.keystore.jks -dname "CA_HOST2" -keypass secret -storepass secret Export the certificates. USD keytool -exportcert -keystore HOST1.keystore.jks -alias HOST1_alias -keypass secret -storepass secret -file HOST1.cer USD keytool -exportcert -keystore HOST2.keystore.jks -alias HOST2_alias -keypass secret -storepass secret -file HOST2.cer Import the certificates into the opposing truststores. 
USD keytool -importcert -keystore HOST1.truststore.jks -storepass secret -alias HOST2_alias -trustcacerts -file HOST2.cer USD keytool -importcert -keystore HOST2.truststore.jks -storepass secret -alias HOST1_alias -trustcacerts -file HOST1.cer Define a CertificateRealm. Define a CertificateRealm in the configuration for the server ( host.xml or standalone.xml ) and point the interface to it. This can be done using the following commands: Change the security-realm of the http-interface to the new CertificateRealm. Add the SSL/TLS configuration for the CLI. Important In addition to adding the two-way SSL/TLS, the management interface should also be configured to bind to HTTPS. For details, see Ensure the Management Interfaces Bind to HTTPS in the section entitled Configure the Management Interfaces for One-way SSL/TLS with Legacy Core Management Authentication . Add the SSL/TLS configuration for the CLI, which uses EAP_HOME /bin/jboss-cli.xml as a settings file. To store the keystore and truststore passwords in plain text, edit EAP_HOME /bin/jboss-cli.xml and add the SSL/TLS configuration using the appropriate values for the variables: Example: jboss-cli.xml Storing Keystore and Truststore Passwords in Plain Text <ssl> <alias>HOST2_alias</alias> <key-store>/path/to/HOST2.keystore.jks</key-store> <key-store-password>secret</key-store-password> <trust-store>/path/to/HOST2.truststore.jks</trust-store> <trust-store-password>secret</trust-store-password> <modify-trust-store>true</modify-trust-store> </ssl> To use the keystore and truststore passwords stored in a password vault, you need to add the vault configuration and appropriate vault values to EAP_HOME /bin/jboss-cli.xml : Example: jboss-cli.xml Storing Keystore and Truststore Passwords in a Password Vault <ssl> <vault> <vault-option name="KEYSTORE_URL" value="path-to/vault/vault.keystore"/> <vault-option name="KEYSTORE_PASSWORD" value="MASK-5WNXs8oEbrs"/> <vault-option name="KEYSTORE_ALIAS" value="vault"/> <vault-option name="SALT" value="12345678"/> <vault-option name="ITERATION_COUNT" value="50"/> <vault-option name="ENC_FILE_DIR" value="EAP_HOME/vault/"/> </vault> <alias>HOST2_alias</alias> <key-store>/path/to/HOST2.keystore.jks</key-store> <key-store-password>VAULT::VB::cli_pass::1</key-store-password> <key-password>VAULT::VB::cli_pass::1</key-password> <trust-store>/path/to/HOST2.truststore.jks</trust-store> <trust-store-password>VAULT::VB::cli_pass::1</trust-store-password> <modify-trust-store>true</modify-trust-store> </ssl> Important In cases where you have both a security-realm and ssl-context defined, JBoss EAP will use the SSL/TLS configuration provided by ssl-context . 1.2.11. HTTPS Listener Reference For a full list of attributes available for the HTTPS listener, see the Undertow Subsystem Attributes section in the JBoss EAP Configuration Guide . 1.2.11.1. About Cipher Suites You can configure a list of the encryption ciphers which are allowed. For JSSE syntax, it must be a comma-separated list. For OpenSSL syntax, it must be a colon-separated list. Ensure that only one syntax is used. The default is the JVM default. Important Using weak ciphers is a significant security risk. See NIST Guidelines for NIST recommendations on cipher suites. See the OpenSSL documentation for a list of available OpenSSL ciphers . Note that the following are not supported: @SECLEVEL SUITEB128 SUITEB128ONLY SUITEB192 See the Java documentation for a list of the standard JSSE ciphers . 
To update the list of enabled cipher suites, use the enabled-cipher-suites attribute of the HTTPS listener in the undertow subsystem. Example: Management CLI Command for Updating the List of Enabled Cipher Suites Note The example only lists two possible ciphers, but real-world examples will likely use more. 1.2.12. Enabling support for the TLS 1.3 protocol with the OpenSSL provider You can enable support for the TLS 1.3 protocol with the OpenSSL provider for TLS by configuring the cipher-suite-names attribute in the ssl-context configuration. Choose one of the following methods for configuring JBoss EAP to use the OpenSSL TLS provider: Configure the Elytron subsystem to use the OpenSSL TLS provider by default. Configure the providers attribute of a server-ssl-context component or a client-ssl-context component to use the OpenSSL TLS provider. Important Compared with TLS 1.2, you might experience reduced performance when running TLS 1.3 with JDK 11. This can occur when clients make a very large number of TLS 1.3 requests to a server. A system upgrade to a newer JDK version can improve performance. Test your setup with TLS 1.3 for performance degradation before enabling it in production. Prerequisites Enable one-way SSL/TLS or two-way SSL/TLS for applications. Procedure Choose one of the following methods to configure your JBoss EAP 7.4 instance to use the OpenSSL TLS provider: Configure the elytron subsystem to use the OpenSSL TLS provider by default. To do this, remove the default final-providers configuration, which registers the OpenSSL TLS provider after all globally registered providers. , register the OpenSSL TLS provider ahead of all globally registered providers. Configure the providers attribute of a server-ssl-context or a client-ssl-context to use the OpenSSL TLS provider. Example of setting the providers attribute for an existing server-ssl-context called serverSSC . Optional: If you configured your ssl-context to use a protocol other than the TLS 1.3 protocol, you must configure the protocols attribute in the ssl-context to include the TLS 1.3 protocol: Enable support for the TLS 1.3 protocol with the OpenSSL provider by configuring the cipher-suite-names attribute in the ssl-context configuration. The following example sets TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256 as the value for the cipher-suite-names attribute: Reload your JBoss EAP instance: Optional: Test that you can successfully establish an SSL-encrypted connection with the server by using the TLS 1.3 protocol and the TLS 1.3 cipher suite. Use a tool, such as curl , to check the output of the configuration: Example output showing TLS_AES_256_GCM_SHA384 with the TLS 1.3 protocol to secure the SSL connection. SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use h2 * Server certificate: * subject: C=Unknown; ST=Unknown; L=Unknown; O=Unknown; OU=Unknown; CN=localhost * start date: Oct 6 14:58:16 2020 GMT * expire date: Nov 5 15:58:16 2020 GMT * issuer: C=Unknown; ST=Unknown; L=Unknown; O=Unknown; OU=Unknown; CN=localhost * SSL certificate verify result: self signed certificate (18), continuing anyway. Additional resources For information about enabling one-way SSL/TLS or two-way SSL/TLS for applications, see Enable One-way SSL/TLS for Applications Using the Elytron Subsystem . For information about the client-ssl-context , see Using a client-ssl-context . For information about the server-ssl-context , see Using a server-ssl-context . 1.2.13. 
FIPS 140-2 Compliant Cryptography It is possible to configure FIPS 140-2 compliant cryptography on Red Hat Enterprise Linux using either of the following methods. Using the SunPKCS11 provider with an NSS database Using the third party BouncyCastle providers 1.2.13.1. Enable FIPS 140-2 Cryptography for SSL/TLS on Red Hat Enterprise Linux 7 and Later You can configure Undertow to use FIPS 140-2 compliant cryptography for SSL/TLS. The scope of this configuration example is limited to Red Hat Enterprise Linux 7 and later, using the Mozilla NSS library in FIPS mode. Important The installed Red Hat Enterprise Linux must already be configured to be FIPS 140-2 compliant. For more information, see the solution titled How can I make RHEL 6 or RHEL 7 FIPS 140-2 compliant? , which is located on the Red Hat Customer Portal. Warning Using the TLS 1.2 protocol when running JBoss EAP in FIPS mode can cause a NoSuchAlgorithmException to occur. More details on this issue can be found in the solution titled NoSuchAlgorithmException: no such algorithm: SunTls12MasterSecret , which is located on the Red Hat Customer Portal. Therefore, it is not possible to configure HTTP/2 in FIPS mode because HTTP/2 requires the TLS 1.2 protocol. FIPS mode (PKCS11) supports the TLS 1 and the TLS 1.1 protocols so you can use: TLS 1.1 in case of Oracle/OpenJDK TLS 1 in case of IBM java To configure Undertow to use FIPS 140-2 compliant cryptography for SSL/TLS, you must do the following: Configure the NSS database . Configure the management CLI for FIPS 140-2 compliant cryptography for SSL/TLS . Configure the undertow subsystem to use either Elytron or the legacy core management authentication . Note The OpenSSL provider requires a private key, but it is not possible to retrieve a private key from the PKCS11 store. FIPS does not allow the export of unencrypted keys from FIPS compliant cryptographic module. Therefore, for both the elytron subsystem as well as legacy security, it is not possible to use the OpenSSL provider for TLS when in FIPS mode. Configuring the NSS database Create a directory owned by the appropriate user to house the NSS database. Example Commands for Creating the NSS Database Directory USD mkdir -p /usr/share/jboss-as/nssdb USD chown jboss /usr/share/jboss-as/nssdb USD modutil -create -dbdir /usr/share/jboss-as/nssdb Note DBM file format, the default database format in RHEL 7 and earlier, has been deprecated. NSS now uses SQL by default . The jboss user is only an example. Replace it with an active user on your operating system to run JBoss EAP. Create the NSS configuration file: /usr/share/jboss-as/nss_pkcsll_fips.cfg . It must specify: a name the directory where the NSS library is located the directory where the NSS database was created in the step Example: nss_pkcsll_fips.cfg Note If you are not running a 64-bit version of Red Hat Enterprise Linux 6 then set nssLibraryDirectory to /usr/lib instead of /usr/lib64 . Edit the Java security configuration file. This configuration file affects the entire JVM, and can be defined using either of the following methods. A default configuration file, java.security , is provided in your JDK. This file is used if no other security configuration files are specified. See the JDK vendor's documentation for the location of this file. Define a custom Java security configuration file and reference it by using the -Djava.security.properties= /path/to/ java.security.properties . When referenced in this manner it overrides the settings in the default security file. 
This option is useful when having multiple JVMs running on the same host that require different security settings. Add the following line to your Java security configuration file: Example: java.security Note The nss_pkcsll_fips.cfg configuration file specified in the above line is the file created in the step. You also need to update the following link in your configuration file from: to Important Any other security.provider.X lines in this file, for example security.provider.2 , must have the value of their X increased by one to ensure that this provider is given priority. Run the modutil command on the NSS database directory you created in the step to enable FIPS mode. Note You may get a security library error at this point requiring you to regenerate the library signatures for some of the NSS shared objects. Set the password on the FIPS token. The name of the token must be NSS FIPS 140-2 Certificate DB . Important The password used for the FIPS token must be a FIPS compliant password. If the password is not strong enough, you may receive an error: ERROR: Unable to change password on token "NSS FIPS 140-2 Certificate DB". Create a certificate using the NSS tools. Example Command USD certutil -S -k rsa -n undertow -t "u,u,u" -x -s "CN=localhost, OU=MYOU, O=MYORG, L=MYCITY, ST=MYSTATE, C=MY" -d /usr/share/jboss-as/nssdb Verify that the JVM can read the private key from the PKCS11 keystore by running the following command: Important Once you have FIPS enabled, you may see the following error when starting JBoss EAP: This message will appear if you have any existing key managers configured, such as the default key manager in legacy core management authentication, that do not use FIPS 140-2 compliant cryptography. Configure the Management CLI for FIPS 140-2 Compliant Cryptography for SSL/TLS You must configure the JBoss EAP management CLI to work in an environment with FIPS 140-2 compliant cryptography for SSL/TLS enabled. By default, if you try to use the management CLI in such an environment, the following exception is thrown: org.jboss.as.cli.CliInitializationException: java.security.KeyManagementException: FIPS mode: only SunJSSE TrustManagers may be used . If you are using the legacy security subsystem: Update the javax.net.ssl.keyStore and javax.net.ssl.trustStore system properties in the jboss-cli.sh file, as shown below: If you are using the elytron subsystem: Create an XML configuration file for the management CLI with the following contents: Example: cli-wildfly-config.xml <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="truststore" type="PKCS11"> <key-store-clear-password password="P@ssword123"/> </key-store> </key-stores> <ssl-contexts> <ssl-context name="client-cli-context"> <trust-store key-store-name="truststore"/> <cipher-suite selector="USD{cipher.suite.filter}"/> <protocol names="TLSv1.1"/> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="client-cli-context"/> </ssl-context-rules> </authentication-client> </configuration> Note If you are using the IBM JDK, see the IBM management CLI configuration example for the specific configuration required. When starting the management CLI, pass the configuration file to the management CLI script using the -Dwildfly.config.url property. For example: Configure the Elytron and Undertow Subsystems Add the FIPS 140-2 compliant cryptography key-store , key-manager and ssl-context . Update the undertow subsystem to use the new ssl-context . 
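A sketch of these two steps is shown below. The resource names fipsKS, fipsKM, and fipsSSC are placeholders, the clear-text password stands in for your FIPS token password, and the provider name assumes the NSS configuration created earlier was given the name nss-fips:
/subsystem=elytron/key-store=fipsKS:add(type=PKCS11, provider-name="SunPKCS11-nss-fips", credential-reference={clear-text="P@ssword123"})
/subsystem=elytron/key-manager=fipsKM:add(key-store=fipsKS, credential-reference={clear-text="P@ssword123"})
/subsystem=elytron/server-ssl-context=fipsSSC:add(key-manager=fipsKM, protocols=["TLSv1.1"])
batch
/subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm)
/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=fipsSSC)
run-batch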
Note https-listener must always have either a security-realm or ssl-context configured. When changing between the two configurations, the commands must be executed as a single batch, as shown below. In the elytron subsystem, OpenJDK and Oracle JDK in FIPS mode restrict the usage of any advanced features that are based on providing custom KeyManager or TrustManager implementations. The following configuration attributes do not work on the server: server-ssl-context.security-domain trust-manager.certificate-revocation-list Configure Undertow with the Legacy Core Management Authentication Optionally, you can still use the legacy core management authentication instead of the elytron subsystem to complete the setup of FIPS 140-2 compliant cryptography for SSL/TLS: Configure Undertow to use SSL/TLS. Note The following commands below must either be run in batch mode, or the server must be reloaded after adding the ssl server identity. The example below is shown using batch mode. The basic details for configuring Undertow to SSL/TLS are covered in Setting up an SSL/TLS for Applications . Configure the cipher suites used by Undertow. Once you have SSL/TLS configured, you need to configure the https listener and security realm to have a specific set of cipher suites enabled: Required Cipher Suites The basics behind enabling cipher suites for the https listener are covered in About Cipher Suites . To enable cipher suites on the https listener: Example Command for Enabling Cipher Suites on the Https Listener Enable cipher suites on the security realm. Example Command for Enabling Cipher Suites on the Security Realm 1.2.13.2. Enable FIPS 140-2 Cryptography for SSL/TLS Using Bouncy Castle You can configure Undertow to use FIPS 140-2 compliant cryptography for SSL/TLS. The scope of this configuration example is limited to Red Hat Enterprise Linux 7 and later. The Bouncy Castle JARs are not provided by Red Hat, and must be obtained directly from Bouncy Castle. Prerequisites Ensure your environment is configured to use the BouncyCastle provider . A Bouncy Castle keystore must exist on the server. If one does not exist, it can be created using the following command. Configure the Management CLI for FIPS 140-2 Compliant Cryptography for SSL/TLS Using Elytron You must configure the JBoss EAP management CLI to work in an environment with FIPS 140-2 compliant cryptography for SSL/TLS enabled. 
Create an XML configuration file for the management CLI with the following contents: Example: cli-wildfly-config.xml <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="truststore" type="BCFKS"> <file name="USD{truststore.location}" /> <key-store-clear-password password="USD{password}" /> </key-store> <key-store name="keystore" type="BCFKS"> <file name="USD{keystore.location}" /> <key-store-clear-password password="USD{password}" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name="client-cli-context"> <key-store-ssl-certificate algorithm="PKIX" key-store-name="keystore"> <key-store-clear-password password="USD{password"} /> </key-store-ssl-certificate> <trust-store key-store-name="truststore"/> <trust-manager algorithm="PKIX"> </trust-manager> <cipher-suite selector="TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,TLS_DHE_DSS_WITH_AES_256_CBC_SHA,TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CCM,TLS_RSA_WITH_AES_128_CCM"/> <protocol names="TLSv1.2"/> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="client-cli-context"/> </ssl-context-rules> </authentication-client> </configuration> When starting the management CLI, pass the configuration file to the management CLI script using the -Dwildfly.config.url property. For example: Configure the Elytron and Undertow Subsystems Add the FIPS 140-2 compliant cryptography key-store , key-manager and ssl-context . When defining the keystore, the type must be BCFKS . Update the undertow subsystem to use the new ssl-context . Note https-listener must always have either a security-realm or ssl-context configured. When changing between the two configurations, the commands must be executed as a single batch, as shown below. 1.2.14. FIPS 140-2 Compliant Cryptography on IBM JDK On the IBM JDK, the IBM Java Cryptographic Extension (JCE) IBMJCEFIPS provider and the IBM Java Secure Sockets Extension (JSSE) FIPS 140-2 Cryptographic Module (IBMJSSE2) for multi-platforms provide FIPS 140-2 compliant cryptography. For more information on the IBMJCEFIPS provider, see the IBM Documentation for IBM JCEFIPS and NIST IBMJCEFIPS - Security Policy . For more information on IBMJSSE2, see Running IBMJSSE2 in FIPS mode . 1.2.14.1. Key Storage The IBM JCE does not provide a keystore. The keys are stored on the computer and do not leave its physical boundary. If the keys are moved between computers they must be encrypted. To run keytool in FIPS-compliant mode use the -providerClass option on each command like this: 1.2.14.2. 
Management CLI Configuration To configure the management CLI for FIPS 140-2 compliant cryptography on the IBM JDK, you must use a management CLI configuration file specifically for the IBM JDK, such as the following: Example: cli-wildfly-config-ibm.xml <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <key-stores> <key-store name="truststore" type="JKS"> <file name="/path/to/truststore"/> <key-store-clear-password password="P@ssword123"/> </key-store> </key-stores> <ssl-contexts> <ssl-context name="client-cli-context"> <trust-store key-store-name="truststore"/> <cipher-suite selector="USD{cipher.suite.filter}"/> <protocol names="TLSv1"/> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="client-cli-context"/> </ssl-context-rules> </authentication-client> </configuration> 1.2.14.3. Examine FIPS Provider Information To examine information about the IBMJCEFIPS used by the server, enable debug-level logging by adding -Djavax.net.debug=true to the standalone.conf or domain.conf files. Information about the FIPS provider is logged to the server.log file, for example: 1.2.15. Starting a Managed Domain when the JVM is Running in FIPS Mode Update each host controller and the master domain controller to use SSL/TLS for communication. Prerequisites Before you begin, make sure you have completed the following prerequisites. You have implemented a managed domain. For details about configuring a managed domain, see the Domain Management section in the JBoss EAP Configuration Guide . You have configured FIPS. For details about configuring FIPS, see Enable FIPS 140-2 Cryptography for SSL/TLS on Red Hat Enterprise Linux 7 and later . You have created all necessary certificates and have imported the domain controller's certificate into each controller's truststore. Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 in all affected packages. On the master domain controller, create an SSL/TLS security realm that is configured to use your NSS database as a PKCS11 provider.. Example: Security Realm on the Master Domain Controller <security-realm name="HTTPSRealm"> <server-identities> <ssl> <engine enabled-protocols="TLSv1.1"/> <keystore provider="PKCS11" keystore-password="strongP@ssword1"/> </ssl> </server-identities> <authentication> <local default-user="\USDlocal"/> <properties path="https-users.properties" relative-to="jboss.domain.config.dir"/> </authentication> </security-realm> On each host controller, create a security realm with an SSL/TLS truststore for authentication. Example: Security Realm on Each Host Controller <security-realm name="HTTPSRealm"> <authentication> <truststore provider="PKCS11" keystore-password="strongP@ssword1"/> </authentication> </security-realm> Note Repeat this process on each host. Secure the HTTP interface on the master domain controller with the security realm you just created. Example: HTTP Interface <management-interfaces> <http-interface security-realm="HTTPSRealm"> <http-upgrade enabled="true"/> <socket interface="management" port="USD{jboss.management.http.port:9990}"/> </http-interface> </management-interfaces> Use the SSL/TLS realm on each host controller to connect to the master domain controller. Update the security realm used for connecting to the master domain controller. Modify the host controller's configuration file, for example host.xml or host-slave.xml , while the server is not running. 
Example: Host Controller Configuration File <domain-controller> <remote security-realm="HTTPSRealm"> <discovery-options> <static-discovery name="primary" protocol="USD{jboss.domain.master.protocol:remote}" host="USD{jboss.domain.master.address}" port="USD{jboss.domain.master.port:9990}"/> </discovery-options> </remote> </domain-controller> Update how each server connects back to its host controller. Example: Server Configuration <server name="my-server" group="my-server-group"> <ssl ssl-protocol="TLS" trust-manager-algorithm="PKIX" truststore-type="PKCS11" truststore-password="strongP@ssword1"/> </server> Configure two-way SSL/TLS in a managed domain. To enable two-way SSL/TLS, add a truststore authentication method to the SSL/TLS security realm for the master domain controller, execute the following management CLI commands: You also need to update each host controller's security realm to have an SSL server identity, execute the following management CLI commands: Important You also need to ensure that each host's certificate is imported into the domain controller's truststore. 1.2.16. Secure the Management Console with Red Hat Single Sign-On You can secure the JBoss EAP management console with Red Hat Single Sign-On using the elytron subsystem. Note This feature is only available when running a standalone server and is not supported when running a managed domain. It is not supported to use Red Hat Single Sign-On to secure the management CLI. Use the following steps to set up Red Hat Single Sign-On to authenticate users for the JBoss EAP management console. Configure a Red Hat Single Sign-On server for JBoss EAP management . Install the Red Hat Single Sign-On client adapter on JBoss EAP . Configure JBoss EAP to use Red Hat Single Sign-On . Configure a Red Hat Single Sign-On Server for JBoss EAP Management Download and install a Red Hat Single Sign-On server. See the Red Hat Single Sign-On Getting Started Guide for basic instructions. Start the Red Hat Single Sign-On server. This procedure assumes that you started the server with a port offset of 100 . Log in to the Red Hat Single Sign-On administration console at http://localhost:8180/auth/ . If this is the first time you have accessed the Red Hat Single Sign-On administration console, you are prompted to create an initial administration user. Create a new realm called wildfly-infra . From the drop down to the realm name, click Add realm , enter wildfly-infra in the Name field, and click Create . Create a client application called wildfly-console . Important The name of this client application must be wildfly-console . Select Clients and click Create . Enter wildfly-console in the Client ID field and click Save . In the Settings screen that appears, set Access Type to public , Valid Redirect URIs to http://localhost:9990/console/* , Web Origins to http://localhost:9990 , and click Save . Create a client application called wildfly-management . Select Clients and click Create . Enter wildfly-management in the Client ID field and click Save . In the Settings screen that appears, set Access Type to bearer-only and click Save . Create a role to grant access to the JBoss EAP management console. Select Roles and click Add Role . Enter ADMINISTRATOR in uppercase in the Role Name field and click Save . This procedure uses the ADMINISTRATOR role, but other roles are supported. For more information, see the Role-Based Access Control section of JBoss EAP's Security Architecture . Create a user and assign the ADMINISTRATOR role to them. 
Select Users and click Add user . Enter jboss in the Username field and click Save . Select the Credentials tab and set a password for this user. Select the Role Mappings tab, select ADMINISTRATOR and click Add selected to add the role to this user. Install the Red Hat Single Sign-On Client Adapter on JBoss EAP Download the Red Hat Single Sign-On client adapter for JBoss EAP 7 from the software downloads page . Unzip this file into the root directory of your JBoss EAP installation. Execute the adapter-elytron-install-offline.cli script to configure your JBoss EAP installation. Important This script adds the keycloak subsystem and other required resources in the elytron and undertow subsystems to standalone.xml . If you need to use a different configuration file, modify the script as needed. Configure JBoss EAP to Use Red Hat Single Sign-On In the EAP_HOME /bin/ directory, create a file called protect-eap-mgmt-services.cli with the following contents. # Create a realm for both JBoss EAP console and mgmt interface /subsystem=keycloak/realm=wildfly-infra:add(auth-server-url=http://localhost:8180/auth,realm-public-key= REALM_PUBLIC_KEY ) # Create a secure-deployment in order to protect mgmt interface /subsystem=keycloak/secure-deployment=wildfly-management:add(realm=wildfly-infra,resource=wildfly-management,principal-attribute=preferred_username,bearer-only=true,ssl-required=EXTERNAL) # Protect HTTP mgmt interface with Keycloak adapter /core-service=management/management-interface=http-interface:undefine-attribute(name=security-realm) /subsystem=elytron/http-authentication-factory=keycloak-mgmt-http-authentication:add(security-domain=KeycloakDomain,http-server-mechanism-factory=wildfly-management,mechanism-configurations=[{mechanism-name=KEYCLOAK,mechanism-realm-configurations=[{realm-name=KeycloakOIDCRealm,realm-mapper=keycloak-oidc-realm-mapper}]}]) /core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory,value=keycloak-mgmt-http-authentication) /core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade, value={enabled=true, sasl-authentication-factory=management-sasl-authentication}) # Enable RBAC where roles are obtained from the identity /core-service=management/access=authorization:write-attribute(name=provider,value=rbac) /core-service=management/access=authorization:write-attribute(name=use-identity-roles,value=true) # Create a secure-server in order to publish the JBoss EAP console configuration via mgmt interface /subsystem=keycloak/secure-server=wildfly-console:add(realm=wildfly-infra,resource=wildfly-console,public-client=true) # reload reload In this file, replace REALM_PUBLIC_KEY with the value of the public key. To obtain this value, log in to the Red Hat Single Sign-On administration console, select the wildfly-infra realm, navigate to Realm Settings Keys and click Public key . Start JBoss EAP. Important If you modified the adapter-elytron-install-offline.cli script when installing the Red Hat Single Sign-On client adapter to use a configuration file other than standalone.xml , be sure to start the JBoss EAP using that configuration. Execute the protect-eap-mgmt-services.cli script. Now, when you access the JBoss EAP management console at http://localhost:9990/console/ , you are redirected to Red Hat Single Sign-On to log in, and then redirected back to the JBoss EAP management console upon successful authentication. 1.3. 
Configuring security auditing for a legacy security domain You can use an audit module to monitor the events in the security subsystem. Auditing uses provider modules, custom implementations, or both to monitor events. After monitoring events, the audit module writes to a log file, sends email notifications, or uses another measurable auditing mechanism. Use the management console to configure security auditing settings for a security domain. Procedure Click on the Configuration tab. Navigate to Subsystems Security (Legacy) . Select an editable security domain and click View . Select the Audit tab and press Add to add a new audit module. Set a name for the module and fill in the Code field with the class name of the provider module. Optional: Add module options by editing the module and adding key/value pairs in the Module Options field. Press Enter to add a new value and press Backspace to remove an existing value. 1.4. Security auditing with Elytron You can use Elytron to complete security audits on triggering events. Security auditing refers to triggering events, such as writing to a log, in response to an authorization or authentication attempt. The type of security audit performed on events depends on your security realm configuration. 1.4.1. Elytron audit logging After you enable audit logging with the elytron subsystem, you can log Elytron authentication and authorization events within the application server. Elytron stores audit log entries in either JSON format, for storing individual events, or SIMPLE format, for human-readable text. Elytron audit logging differs from other types of audit logging, such as audit logging for the JBoss EAP management interfaces. Elytron disables audit logging by default. You can enable audit logging by configuring any of the following log handlers for Elytron. You can add a log handler to a security domain. File audit logging Periodic rotating file audit logging Size rotating file audit logging syslog audit logging Custom audit logging You can use the aggregate-security-event-listener resource to send security events to more destinations, such as loggers. The aggregate-security-event-listener resource delivers all events to all listeners specified in the aggregate listener definition. You can use an audit module to monitor events for a legacy security domain. You can use the management console to configure security auditing settings for a legacy security domain. Additional resources For information about configuring auditing with the legacy security system, see Configuring security auditing for a legacy security domain . For more information about management interface audit logging options, see Management audit logging in the Configuration Guide . For more information about file audit logging, see Enabling file audit logging . For more information about periodic rotating file audit logging, see Periodic Rotating File Audit Logging . For more information about size rotating file audit logging, see Size rotating file audit logging . For more information about syslog audit logging, see syslog audit logging . For more information about custom audit logging, see Using custom security event listeners in Elytron . 1.4.2. Enabling file audit logging You can use the elytron subsystem to enable file audit logging for your standalone server or a server in a managed domain. File audit logging stores audit log messages in a single file within your file system. By default, Elytron specifies local-audit as the file audit logger.
You must enable local-audit so that it can write Elytron audit logs to EAP_HOME/standalone/log/audit.log on a standalone server or EAP_HOME/domain/log/audit.log for a managed domain. Procedure Create a file audit log. Example of creating a file audit log by using the elytron subsystem: Add the file audit log to a security domain. Example command adding file audit log to a security domain Additional resources For more information about file audit logger attributes, see File audit logger attributes . 1.4.3. Enabling periodic rotating file audit logging You can use the elytron subsystem to enable periodic rotating file audit logging for your standalone server or a server in a managed domain. Periodic rotating file audit logging automatically rotates audit log files based on your configured schedule. Periodic rotating file audit logging is similar to the default file audit logger, but it contains an additional attribute: suffix . The value of the suffix attribute is a date specified using the java.time.format.DateTimeFormatter format, such as .yyyy-MM-dd . Elytron automatically calculates the period of the rotation from the value provided with the suffix. The elytron subsystem appends the suffix to the end of a log file name. Procedure Create a periodic rotating file audit log. Example of creating periodic rotating file audit log in the elytron subsystem Add the periodic rotating file audit logger to a security domain. Example adding a periodic rotating file audit logger to a security domain Additional resources For information about periodic rotating file audit logger attributes, see the periodic-rotating-file-audit-log Attributes table. 1.4.4. Enabling size rotating file audit logging You can use the elytron subsystem to enable size rotating file audit logging for your standalone server or a server in a managed domain. Size rotating file audit logging automatically rotates audit log files when the log file reaches a configured file size. Size rotating file audit logging is similar to the default file audit logger, but it contains additional attributes. When the log file size exceeds the limit defined by the rotate-size attribute, Elytron appends the suffix .1 to the end of the current file, and Elytron creates a new log file. Elytron increments the suffix of each existing log file by one. For example, Elytron renames audit_log.1 to audit_log.2 . Elytron continues incrementing until the number of log files reaches the maximum defined by max-backup-index . When the number of log files exceeds the max-backup-index value, Elytron removes the log file that is over the limit, for example audit_log.99 . Procedure Create a size rotating file audit log. Example of creating a size rotating file audit log by using the elytron subsystem: Add the size rotating audit logger to a security domain. Example of enabling a size rotating file audit log by using the elytron subsystem: Additional resources For information about size rotating file audit logging attributes, see the Size rotating file audit logging attributes table. 1.4.5. Enabling syslog audit logging You can use the elytron subsystem to enable syslog audit logging for your standalone server or a server in a managed domain. When you use syslog audit logging, you send the logging results to a syslog server, which provides more security options than logging to a local file.
The syslog handler specifies the parameters used to connect to a syslog server, such as the syslog server's host name and the port on which the syslog server listens. You can define multiple syslog handlers and activate them simultaneously. Supported log formats include RFC5424 and RFC3164 . Supported transmission protocols include UDP, TCP, and TCP with SSL. When you define a syslog handler for the first time, the logger sends an INFORMATIONAL priority event to the syslog server containing the message demonstrated in the following example: <format> refers to the RFC format configured for the audit logging handler, which defaults to RFC5424 . Procedure Add a syslog handler. Example of adding a syslog handler by using the elytron subsystem: You can also send logs to a syslog server over TLS: Example syslog configuration to send logs over TLS Add the syslog audit logger to a security domain. Example of adding a syslog audit logger to a security domain Additional resources For information about syslog-audit-log attributes, see the syslog-audit-log Attributes table. For more information about enabling support for TLS by setting the ssl-context configuration, see Using a client-ssl-context . For more information about RFC5424 , see The Syslog Protocol . For more information about RFC3164 , see The BSD syslog Protocol . 1.4.6. Using custom security event listeners in Elytron You can use Elytron to define a custom event listener. A custom event listener manages the processing of incoming security events. You can use the event listener for custom audit logging purposes, or you can use the event listener to authenticate users against your internal identity storage. Important Using the module management CLI command to add and remove modules is provided as a Technology Preview feature only. The module command is not appropriate for use in a managed domain or when connecting with a remote management CLI. You must manually add or remove modules in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Procedure Create a class that implements the java.util.function.Consumer<org.wildfly.security.auth.server.event.SecurityEvent> interface. Example of creating a Java class that uses the specified interface: The Java class in the example prints a message whenever a user succeeds or fails authentication. Add the JAR that provides the custom event listener as a module to JBoss EAP. The following is an example of the management CLI command that adds a custom event listener as a module to Elytron. Example of using the module command to add a custom event listener as a module to Elytron: Reference the custom event listener in the security domain. Example of referencing a custom event listener in ApplicationDomain : Restart the server. The event listener receives security events from the specified security domain.
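To summarize the procedure above as a single hedged sketch, the following management CLI commands add the listener module and wire it to a security domain. The module name, class name, JAR path, and listener name are illustrative placeholders, and the ApplicationDomain security domain is assumed to already exist:

module add --name=org.example.customlistener --resources=/path/to/custom-listener.jar --dependencies=org.wildfly.security.elytron,org.wildfly.common

/subsystem=elytron/custom-security-event-listener=exampleListener:add(module=org.example.customlistener, class-name=org.example.MySecurityEventListener)

/subsystem=elytron/security-domain=ApplicationDomain:write-attribute(name=security-event-listener, value=exampleListener)

reload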
Additional resources For information about manually adding or removing modules in a production environment, see Create a Custom Module Manually and Remove a Custom Module Manually in the Configuration Guide . For information about adding a custom event listener as a module, see Add a Custom Component to Elytron . 1.5. Configure One-way and Two-way SSL/TLS for Applications 1.5.1. Automatic Self-signed Certificate Creation for Applications When using the legacy security realms, JBoss EAP provides automatic generation of a self-signed certificate for development purposes. Example: Server Log Showing Self-signed Certificate Creation This certificate is created for testing purposes and is assigned to the HTTPS interface used by applications. The keystore containing the certificate is generated the first time the HTTPS interface is accessed, if the file does not already exist. Example: Default ApplicationRealm Using the Self-signed Certificate <security-realm name="ApplicationRealm"> <server-identities> <ssl> <keystore path="application.keystore" relative-to="jboss.server.config.dir" keystore-password="password" alias="server" key-password="password" generate-self-signed-certificate-host="localhost"/> </ssl> </server-identities> ... </security-realm> Example: Default HTTPS Interface Configuration <subsystem xmlns="urn:jboss:domain:undertow:10.0"> ... <server name="default-server"> ... <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> ... Note If you want to disable the self-signed certificate creation, you will need to remove the generate-self-signed-certificate-host="localhost" attribute from the server keystore configuration. The generate-self-signed-certificate-host attribute holds the host name for which the self-signed certificate should be generated. Warning This self-signed certificate is intended for testing purposes only and is not intended for use in production environments. For more information on configuring SSL/TLS for applications with Elytron, see the Enable One-way SSL/TLS for Applications using the Elytron Subsystem and Enable Two-way SSL/TLS for Applications using the Elytron Subsystem sections. For more information on configuring SSL/TLS for applications with legacy security, see the Enable One-way SSL/TLS for Applications Using Legacy Security Realms and Enable Two-way SSL/TLS for Applications Using Legacy Security Realms sections. 1.5.2. Using Elytron 1.5.2.1. Enable One-way SSL/TLS for Applications Using the Elytron Subsystem In JBoss EAP, you can enable one-way SSL/TLS for deployed applications using the JBoss EAP management CLI or the management console. In the management CLI, one-way SSL/TLS can be enabled in two ways: Using the security command . Using elytron subsystem commands . Using a Security Command The security enable-ssl-http-server command can be used to enable one-way SSL/TLS for deployed applications. Example: Wizard Usage security enable-ssl-http-server --interactive Please provide required pieces of information to enable SSL: Key-store file name (default default-server.keystore): keystore.jks Password (blank generated): secret What is your first and last name? [Unknown]: localhost What is the name of your organizational unit? [Unknown]: What is the name of your organization? [Unknown]: What is the name of your City or Locality? [Unknown]: What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit?
[Unknown]: Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct y/n [y]? Validity (in days, blank default): 365 Alias (blank generated): localhost Enable SSL Mutual Authentication y/n (blank n): n SSL options: key store file: keystore.jks distinguished name: CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown password: secret validity: 365 alias: localhost Server keystore file keystore.jks, certificate file keystore.pem and keystore.csr file will be generated in server configuration directory. Do you confirm y/n: y Note Once the command is executed, the management CLI will reload the server. One-way SSL/TLS is now enabled for applications. Using Elytron Subsystem Commands In JBoss EAP, you can use the elytron subsystem, along with the undertow subsystem, to enable one-way SSL/TLS for deployed applications. Configure a key-store in JBoss EAP. If the keystore file does not exist yet, the following commands can be used to generate an example key pair: Configure a key-manager that references your key-store . Important Red Hat did not specify the algorithm attribute in the command, because the Elytron subsystem uses KeyManagerFactory.getDefaultAlgorithm() to determine an algorithm by default. However, you can specify the algorithm attribute. To specify the algorithm attribute, you need to know what key manager algorithms are provided by the JDK you are using. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. In the command you can specify SunX509 as the key manager algorithm attribute. Configure a server-ssl-context that references your key-manager . Important You need to determine what SSL/TLS protocols you want to support. The example command above uses TLSv1.2 . You can use the cipher-suite-filter argument to specify which cipher suites are allowed, and the use-cipher-suites-order argument to honor server cipher suite order. The use-cipher-suites-order attribute by default is set to true . This differs from the legacy security subsystem behavior, which defaults to honoring client cipher suite order. Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. Check and see if the https-listener is configured to use a legacy security realm for its SSL configuration. The above command shows that the https-listener is configured to use the ApplicationRealm legacy security realm for its SSL configuration. Undertow cannot reference both a legacy security realm and an ssl-context in Elytron at the same time so you must remove the reference to the legacy security realm. Note If the result is undefined , you do not need to remove the reference to the security realm in the step. Remove the reference to the legacy security realm, and update the https-listener to use the ssl-context from Elytron. Note https-listener must always have either a security-realm or ssl-context configured. When changing between the two configurations, the commands must be executed as a single batch, as shown below. Reload the server. One-way SSL/TLS is now enabled for applications. Note You can disable one-way SSL/TLS for deployed applications using the disable-ssl-http-server command. security disable-ssl-http-server This command does not delete the Elytron resources. It configures the system to use the ApplicationRealm legacy security realm for its SSL configuration. 
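For reference, the elytron subsystem steps described above can be gathered into the following management CLI sequence. This is a minimal sketch, not a drop-in configuration: the resource names exampleKeyStore , exampleKeyManager , and exampleSSLContext , the keystore file name, and the password secret are illustrative placeholders, and the default Undertow server name default-server is assumed:

/subsystem=elytron/key-store=exampleKeyStore:add(path=keystore.jks, relative-to=jboss.server.config.dir, credential-reference={clear-text=secret}, type=JKS)

/subsystem=elytron/key-store=exampleKeyStore:generate-key-pair(alias=localhost, algorithm=RSA, key-size=2048, validity=365, credential-reference={clear-text=secret}, distinguished-name="CN=localhost")

/subsystem=elytron/key-store=exampleKeyStore:store()

/subsystem=elytron/key-manager=exampleKeyManager:add(key-store=exampleKeyStore, credential-reference={clear-text=secret})

/subsystem=elytron/server-ssl-context=exampleSSLContext:add(key-manager=exampleKeyManager, protocols=["TLSv1.2"])

batch

/subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm)

/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=exampleSSLContext)

run-batch

reload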
Using Management Console You can enable SSL for applications by configuring the undertow subsystem using an SSL wizard in the management console. To access the wizard: Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Web (Undertow) Server . Click the name of the server to configure. Click View . Navigate to Listener HTTPS Listener . Select the listener for which SSL is to be enabled, and click Enable SSL to launch the wizard. The wizard guides you through the following scenarios for enabling SSL: You want to create a certificate store and generate a self-signed certificate. You want to obtain a certificate from Let's Encrypt Certificate Authority. You already have the certificate store on the file system, but no keystore configuration. You already have a keystore configuration that uses a valid certificate store. Using the wizard, you can optionally create a truststore for mutual authentication. 1.5.2.2. Enable Two-way SSL/TLS for Applications Using the Elytron Subsystem Obtain or generate your client keystores: USD keytool -genkeypair -alias client -keyalg RSA -keysize 1024 -validity 365 -keystore client.keystore.jks -dname "CN=client" -keypass secret -storepass secret Export the client certificate: keytool -exportcert -keystore client.keystore.jks -alias client -keypass secret -storepass secret -file /path/to/client.cer Enable two-way SSL/TLS for deployed applications. In JBoss EAP, two-way SSL/TLS for deployed applications can be enabled either by using a security command or by using the elytron subsystem commands. Using a security command. The security enable-ssl-http-server command can be used to enable two-way SSL/TLS for the deployed applications. Note The following example does not validate the certificate as no chain of trust exists. If you are using a trusted certificate, then the client certificate can be validated without issue. Example: Wizard Usage security enable-ssl-http-server --interactive Please provide required pieces of information to enable SSL: Key-store file name (default default-server.keystore): server.keystore.jks Password (blank generated): secret What is your first and last name? [Unknown]: localhost What is the name of your organizational unit? [Unknown]: What is the name of your organization? [Unknown]: What is the name of your City or Locality? [Unknown]: What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit? [Unknown]: Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct y/n [y]? Validity (in days, blank default): 365 Alias (blank generated): localhost Enable SSL Mutual Authentication y/n (blank n): y Client certificate (path to pem file): /path/to/client.cer Validate certificate y/n (blank y): n Trust-store file name (management.truststore): server.truststore.jks Password (blank generated): secret SSL options: key store file: server.keystore.jks distinguished name: CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown password: secret validity: 365 alias: localhost client certificate: /path/to/client.cer trust store file: server.trustore.jks trust store password: secret Server keystore file server.keystore.jks, certificate file server.pem and server.csr file will be generated in server configuration directory. Server truststore file server.trustore.jks will be generated in server configuration directory. 
Do you confirm y/n: y Note Once the command is executed, the management CLI will reload the server. To complete the two-way SSL/TLS authentication, you need to import the server certificate into the client truststore and configure your client to present the client certificate. Using elytron subsystem commands. In JBoss EAP, you can also use the elytron subsystem, along with the undertow subsystem, to enable two-way SSL/TLS for deployed applications. Obtain or generate your keystore. Before enabling two-way SSL/TLS in JBoss EAP, you must obtain or generate the keystores, truststores and certificates you plan on using. Create a server keystore: Note The command above uses an absolute path to the keystore. Alternatively you can use the relative-to attribute to specify the base directory variable and path specify a relative path. Export the server certificate: Create a keystore for the server truststore and import the client certificate into the server truststore. Note The following example does not validate the certificate as no chain of trust exists. If you are using a trusted certificate, then the client certificate can be validated without issue. Configure a key-manager that references your keystore key-store . Important Red Hat did not specify the algorithm attribute in the command, because the Elytron subsystem uses KeyManagerFactory.getDefaultAlgorithm() to determine an algorithm by default. However, you can specify the algorithm attribute. To specify the algorithm attribute, you need to know what key manager algorithms are provided by the JDK you are using. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. In command you can specify SunX509 as the key manager algorithm attribute. Configure a trust-manager that references your truststore key-store . Important Red Hat did not specify the algorithm attribute in the command, because the Elytron subsystem uses TrustManagerFactory.getDefaultAlgorithm() to determine an algorithm by default. However, you can specify the algorithm attribute. To specify the algorithm attribute, you need to know what trust manager algorithms are provided by the JDK you are using. For example, a JDK that uses SunJSSE provides the PKIX and SunX509 algorithms. In the command you can specify PKIX as the trust manager algorithm attribute. Configure a server-ssl-context that references your key-manager , trust-manager , and enables client authentication: Important You need to determine what SSL/TLS protocols you want to support. The example command above uses TLSv1.2 . You can use the cipher-suite-filter argument to specify which cipher suites are allowed, and the use-cipher-suites-order argument to honor server cipher suite order. The use-cipher-suites-order attribute by default is set to true . This differs from the legacy security subsystem behavior, which defaults to honoring client cipher suite order. Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. Check and see if the https-listener is configured to use a legacy security realm for its SSL configuration. The above command shows that the https-listener is configured to use the ApplicationRealm legacy security realm for its SSL configuration. Undertow cannot reference both a legacy security realm and an ssl-context in the elytron subsystem at the same time. So you must remove the reference to the legacy security realm. 
Note If the result is undefined , you do not need to remove the reference to the security realm in the step. Remove the reference to the legacy security realm, and update the https-listener to use the ssl-context from Elytron. Note https-listener must always have either a security-realm or ssl-context configured. When changing between the two configurations, the commands must be executed as a single batch, as shown below. Reload the server. Note To complete the two-way SSL/TLS authentication, you need to import the server certificate into the client truststore and configure your client to present the client certificate. Configure your client to use the client certificate. You need to configure your client to present the trusted client certificate to the server to complete the two-way SSL/TLS authentication. For example, if using a browser, you need to import the trusted certificate into the browser's trust store. This procedure enforces two-way SSL/TLS, but it does not change the original authentication method of the application. If you want to change the original authentication method, see Configure Authentication with Certificates in How to Configure Identity Management for JBoss EAP. Two-way SSL/TLS is now enabled for applications. Note You can disable two-way SSL/TLS for deployed applications using the disable-ssl-http-server command. security disable-ssl-http-server This command does not delete the Elytron resources. It configures the system to use the ApplicationRealm legacy security realm for its SSL configuration. 1.5.3. Configuring Certificate Revocation Using CRL in Elytron Configure the trust manager used for enabling two-way SSL/TLS to use Certificate Revocation List (CRL) for certificate revocation in Elytron. Prerequisites The trust manager is configured to use two-way SSL/TLS. The trust manager contains the certificate chain to be checked for revocation. Procedure Configure the trust manager to use CRLs obtained from distribution points referenced in your certificates. Override the CRL obtained from distribution points referenced in your certificates. Configure the trust-manager to use CRL for certificate revocation. If an OCSP responder is also configured for certificate revocation, add the ocsp.prefer-crls attribute with the value true to the trust manager to use CRL for certificate revocation: If no OCSP responder is configured for certificate revocation, the configuration is complete. Additional Information For a complete list of CRL attributes, see trust-manager Attributes . 1.5.4. Configuring Certificate Revocation Using OCSP in Elytron Configure the trust manager used for enabling two-way SSL/TLS to use an Online Certificate Status Protocol (OCSP) responder for certificate revocation. OCSP is defined in RFC6960 . When both an OCSP responder and a CRL are configured for certificate revocation, the OCSP responder is invoked by default. Prerequisites The trust manager is configured to use two-way SSL/TLS. Procedure Configure the trust manager to use the OCSP responder defined in the certificate for certificate revocation. Override the OCSP responder defined in the certificate. Additional Information For a complete list of attributes, see online-certificate-status Attributes . 1.5.5. Using Legacy Security Realms Important As a prerequisite, an SSL/TLS encryption key and certificate should be created and placed in an accessible directory. Additionally, relevant information, such as keystore aliases, passwords, and desired cipher suites, should also be accessible.
For examples on generating SSL/TLS Keys and Certificates, see the first two steps in the Setting up Two-way SSL/TLS for the Management Interfaces section. For more information about the HTTPS listener, including cipher suites, see the HTTPS Listener Reference section. 1.5.5.1. Enable One-way SSL/TLS for Applications Using Legacy Security Realms This example assumes that the keystore, identity.jks , has been copied to the server configuration directory and configured with the given properties. Administrators should substitute their own values for the example ones. Note The management CLI commands shown assume that you are running a JBoss EAP standalone server. For more details on using the management CLI for a JBoss EAP managed domain, see the JBoss EAP Management CLI Guide . Add and configure an HTTPS security realm first. Once the HTTPS security realm has been configured, configure an https-listener in the undertow subsystem that references the security realm: Warning Red Hat recommends that SSLv2, SSLv3, and TLSv1.0 be explicitly disabled in favor of TLSv1.1 or TLSv1.2 in all affected packages. Restart the JBoss EAP instance for the changes to take effect. 1.5.5.2. Enable Two-way SSL/TLS for Applications Using Legacy Security Realms Setting up two-way SSL/TLS for applications follows many of the same procedures outlined in Setting up Two-way SSL/TLS for the Management Interfaces . To set up two-way SSL/TLS for applications, you need to do the following: Generate the stores for both the client and server Export the certificates for both the client and server Import the certificates into the opposing truststores Define a security realm, for example CertificateRealm , on the server that uses the server's keystore and truststore Update the undertow subsystem to use the security realm and require client verification The first four steps are covered in Setting up Two-way SSL/TLS for the Management Interfaces . Important If the server has not been reloaded since the new security realm has been added, you must reload the server before performing the step. Update the Undertow Subsystem Once the keystores, certificates, truststores, and security realms have been created and configured, you need to add an HTTPS listener to the undertow subsystem, use the security realm you created, and require client verification: Important You must reload the server for these changes to take effect. Important Any client connecting to a JBoss EAP instance with two-way SSL/TLS enabled for applications must have access to a client certificate or keystore, in other words a client keystore whose certificate is included in the server's truststore. If the client is using a browser to connect to the JBoss EAP instance, you need to import that certificate or keystore into the browser's certificate manager. Note More details on using certificate-based authentication in applications, in addition to two-way SSL/TLS with applications, can be found in the Configuring a Security Domain to Use Certificate-based Authentication section of the JBoss EAP How to Configure Identity Management Guide . 1.6. Enable HTTP authentication for applications using the CLI security command In JBoss EAP, HTTP authentication, using an elytron HTTP authentication factory, can be enabled for the undertow security domain with the security enable-http-auth-http-server command. By default this command associates the application HTTP factory to the specified security domain. 
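As a hedged sketch of the command form described above, where the security domain name is an illustrative placeholder and the --mechanism argument is optional:

security enable-http-auth-http-server --security-domain=exampleApplicationDomain --mechanism=BASIC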
Example: Enable HTTP Authentication on the Undertow Security Domain Note Once the command is executed, the management CLI will reload the server and reconnect to it. If an HTTP factory already exists, then the factory is updated to use the mechanism defined by the --mechanism argument. 1.6.1. Disabling HTTP authentication for the management interfaces This procedure describes how to disable HTTP authentication for the management interfaces. Procedure To remove the active HTTP authentication factory, use the following command. Alternatively, you can use the following command to remove specific mechanisms from the active SASL authentication factory. 1.7. SASL Authentication Mechanisms Simple Authentication and Security Layer (SASL) authentication mechanisms are used for defining the mechanisms for authenticating connections to a JBoss EAP server using the elytron subsystem, and for clients connecting to servers. Clients can be other JBoss EAP instances, or Elytron Client. SASL authentication mechanisms in JBoss EAP are also significantly used in Elytron Integration with the Remoting Subsystem . 1.7.1. Choosing SASL Authentication Mechanisms Note Although JBoss EAP and Elytron Client work with a variety of SASL authentication mechanisms, you must ensure that the mechanisms you use are supported. See this list for the support levels for SASL authentication mechanisms . The authentication mechanisms you use depend on your environment and desired authentication method. The following list summarizes the use of some of the supported SASL authentication mechanisms : ANONYMOUS Unauthenticated guest access. DIGEST-MD5 Uses HTTP digest authentication as a SASL mechanism. EXTERNAL Uses authentication credentials that are implied in the context of the request. For example, IPsec or TLS authentication. Mechanisms beginning with GS Authentication using Kerberos. JBOSS-LOCAL-USER Provides authentication by testing that the client has the same file access as the local user that is running the JBoss EAP server. This is useful for other JBoss EAP instances running on the same machine. OAUTHBEARER Uses authentication provided by OAuth as a SASL mechanism. PLAIN Plain text username and password authentication. Mechanisms beginning with SCRAM Salted Challenge Response Authentication Mechanism (SCRAM) that uses a specified hashing function. Mechanisms ending with -PLUS Indicates a channel binding variant of a particular authentication mechanism. You should use these variants when the underlying connection uses SSL/TLS. For more information on individual SASL authentication mechanisms, see the IANA SASL mechanism list and individual mechanism memos . 1.7.2. Configuring SASL Authentication Mechanisms on the Server Side Configuring SASL authentication mechanisms on the server side is done using SASL authentication factories. There are two levels of configuration required: A sasl-authentication-factory , where you specify authentication mechanisms. A configurable-sasl-server-factory , which aggregates one or more sasl-authentication-factory resources, configures mechanism properties, and optionally applies filters to enable or disable certain authentication mechanisms. The following example demonstrates creating a new configurable-sasl-server-factory , and a sasl-authentication-factory that uses DIGEST-MD5 as a SASL authentication mechanism for application clients. 1.7.3.
Specifying SASL Authentication Mechanisms on the Client Side SASL authentication mechanisms used by a client are specified using a sasl-mechanism-selector . You can specify any supported SASL authentication mechanisms that are exposed by the server that the client is connecting to. A sasl-mechanism-selector is defined in Elytron configurations where authentication is configured: In the elytron subsystem, this is an attribute of an authentication-configuration . For example: An example of using an authentication-configuration with a sasl-mechanism-selector can be seen in Configuring SSL or TLS with elytron . For Elytron Client, it is specified under the configuration element of authentication-configurations in the client configuration file, usually named wildfly-config.xml . For example: <configuration> <authentication-client xmlns="urn:elytron:client:1.2"> <authentication-rules> <rule use-configuration="default" /> </authentication-rules> <authentication-configurations> <configuration name="default"> <sasl-mechanism-selector selector="#ALL" /> ... </configuration> </authentication-configurations> </authentication-client> </configuration> See How to Configure Identity Management for more information on configuring client authentication with Elytron Client . sasl-mechanism-selector Grammar The selector string for sasl-mechanism-selector has a specific grammar. In a simple form, individual mechanisms are specified by listing their names in order, separated by a spaces. For example, to specify DIGEST-MD5, SCRAM-SHA-1, and SCRAM-SHA-256 as allowed authentication mechanisms, use the following string: DIGEST-MD5 SCRAM-SHA-1 SCRAM-SHA-256 . More advanced usage of the grammar can use the following special tokens: #ALL : All mechanisms. #FAMILY( NAME ) : Mechanisms belonging to the specified mechanism family. For example, the family could be DIGEST, EAP, GS2, SCRAM, or IEC-ISO-9798. #PLUS : Mechanisms that use channel binding. For example, SCRAM-SHA- XXX -PLUS or GS2- XXX -PLUS. #MUTUAL : Mechanisms that authenticate the server in some way, for example making the server prove that the server knows the password. #MUTUAL includes families such as #FAMILY(SCRAM) and #FAMILY(GS2) . #HASH( ALGORITHM ) : Mechanisms that use the specified hash algorithm. For example, the algorithm could be MD5, SHA-1, SHA-256, SHA-384, or SHA-512. The above tokens and names can also be used with the following operations and predicates: - : Forbids ! : Inverts && : And || : Or == : Equals ? : If #TLS : Is true when TLS is active, otherwise false. Below are some examples of sasl-mechanism-selector strings and their meaning: #TLS && !#MUTUAL : When TLS is active, all mechanisms without mutual authentication. #ALL -ANONYMOUS : All mechanisms, except for ANONYMOUS. SCRAM-SHA-1 SCRAM-SHA-256 : Adds those two mechanisms in that order. (SCRAM-SHA-1 || SCRAM-SHA-256) : Adds the two mechanisms in the order that the provider or server presents them. !#HASH(MD5) : Any mechanism that does not use the MD5 hashing algorithm. #FAMILY(DIGEST) : Any digest mechanism. 1.7.4. Configuring SASL Authentication Mechanism Properties You can configure authentication mechanism properties on both the server side and on the client side. On the server side, you define authentication mechanism properties in the configurable-sasl-server-factory . The following example defines the com.sun.security.sasl.digest.utf8 property with a value of false . 
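As a hedged sketch of setting such a property on the server side, assuming a configurable-sasl-server-factory named exampleSaslServerFactory already exists:

/subsystem=elytron/configurable-sasl-server-factory=exampleSaslServerFactory:map-put(name=properties, key=com.sun.security.sasl.digest.utf8, value=false)

reload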
On the client side, you define authentication mechanism properties in the client's authentication configuration: In the elytron subsystem, define the authentication mechanism properties in your authentication-configuration . The following example defines the wildfly.sasl.local-user.quiet-auth property with a value of true . For Elytron Client, authentication mechanism properties are specified under the configuration element of authentication-configurations in the client configuration file, usually named wildfly-config.xml . For example: ... <authentication-configurations> <configuration name="default"> <sasl-mechanism-selector selector="#ALL" /> <set-mechanism-properties> <property key="wildfly.sasl.local-user.quiet-auth" value="true" /> </set-mechanism-properties> ... </configuration> </authentication-configurations> ... You can see a list of standard Java SASL authentication mechanism properties in the Java documentation . Other JBoss EAP-specific SASL authentication mechanism properties are listed in the Authentication Mechanisms Reference . 1.8. Elytron Integration with the ModCluster Subsystem One of the security capabilities exposed by the elytron subsystem is a client ssl-context that can be used to configure the modcluster subsystem to communicate with a load balancer using SSL/TLS. When protecting the communication between the application server and the load balancer, you need to define a client ssl-context in order to: Define a truststore holding the certificate chain that will be used to validate the load balancer's certificate. Define a trust manager to perform validations against the load balancer's certificate. 1.8.1. Defining a Client SSL Context and Configuring ModCluster Subsystem The following procedure requires that a truststore and trust manager be configured. For information on creating these, see Create an Elytron Truststore and Create an Elytron Trust Manager . Create the client SSL context. This SSL context is going to be used by the modcluster subsystem when connecting to the load balancer using SSL/TLS: Reference the newly created client SSL context using one of the following options. Configure the modcluster subsystem by setting the ssl-context . Configure the undertow subsystem by defining the ssl-context attribute of the mod-cluster filter. Reload the server. To configure the modcluster subsystem to use two-way authentication , the key manager also needs to be configured along with the trust manager. Create the keystore. Configure the key manager. Create the client SSL context. Note If you already have an existing client SSL context, you can add the key-manager to it as follows: Reload the server. 1.9. Elytron Integration with the JGroups Subsystem Components in the elytron subsystem may be referenced when defining authorization or encryption protocols in the jgroups subsystem. Full instructions on configuring these protocols are found in the Securing a Cluster section of the Configuration Guide . 1.10. Elytron Integration with the Remoting Subsystem 1.10.1. Elytron integration with remoting connectors A remoting connector is specified by a SASL authentication factory, a socket binding, and an optional SSL context. In particular, the attributes for a connector are as follows: sasl-authentication-factory A reference to the SASL authentication factory to use for authenticating requests to this connector. For more information on creating this factory, see Create an Elytron Authentication Factory .
socket-binding A reference to the socket binding, detailing the interface and port where the connector should listen for incoming requests. ssl-context An optional reference to the server-side SSL Context to use for this connector. The SSL Context contains the server key manager and trust manager to be used, and should be defined in instances where SSL is desired. For example, a connector can be added as follows, where SASL_FACTORY_NAME is an already defined authentication factory and SOCKET_BINDING_NAME is an existing socket binding. If SSL is desired, a preconfigured server-ssl-context may be referenced using the ssl-context attribute, as seen below. 1.10.1.1. Enabling one-way SSL/TLS for remoting connectors using the elytron subsystem The following SASL mechanisms support channel binding to external secure channels, such as SSL/TLS: GS2-KRB5-PLUS SCRAM-SHA-1-PLUS SCRAM-SHA-256-PLUS SCRAM-SHA-384-PLUS SCRAM-SHA-512-PLUS To use any of these mechanisms, you can configure a custom SASL factory , or modify one of the predefined SASL authentication factories. A SASL mechanism selector can be used on the client to specify the appropriate SASL mechanism. Prerequisites A key-store is configured. A key-manager is configured. A server-ssl-context is configured that references the defined key-manager Procedure Create a socket-binding for the connector. The following command defines the oneWayBinding binding that listens on port 11199 . Create a connector that references the SASL authentication factory, the previously created socket binding, and the SSL context. Important In cases where you have both a security-realm and ssl-context defined, JBoss EAP will use the SSL/TLS configuration provided by ssl-context . Configure the client to trust the server certificate. A generic example client is found at Elytron Client Side One Way Example . This example configures an ssl-context using the client trust-store . Additional resources key-store key-manager server-ssl-context 1.10.1.2. Enabling two-way SSL/TLS for remoting connectors using the elytron subsystem The following SASL mechanisms support channel binding to external secure channels, such as SSL/TLS: GS2-KRB5-PLUS SCRAM-SHA-1-PLUS SCRAM-SHA-256-PLUS SCRAM-SHA-384-PLUS SCRAM-SHA-512-PLUS To use any of these mechanisms, you can configure a custom SASL factory , or modify one of the predefined SASL authentication factories to offer any of these mechanisms. A SASL mechanism selector can be used on the client to specify the appropriate SASL mechanism. Prerequisites Separate key-store components for the client and server certificates are configured. A key-manager for the server key-store is configured. A trust-manager for the server trust-store is configured. A server-ssl-context that references the defined key-manager and trust-manager is configured. Procedure Create a socket-binding for the connector. The following command defines the twoWayBinding binding that listens on port 11199 . Create a connector that references the SASL authentication factory, the previously created socket binding, and the SSL context. Important In cases where you have both a security-realm and ssl-context defined, JBoss EAP will use the SSL/TLS configuration provided by ssl-context . Configure your client to trust the server certificate, and to present its certificate to the server. You need to configure your client to present the trusted client certificate to the server to complete the two-way SSL/TLS authentication. 
For example, if using a browser, you need to import the trusted certificate into the browser's truststore. A generic example client is found at Elytron Client Side Two Way Example . This example configures an ssl-context using the client trust-store and key-store . Two-way SSL/TLS is now enabled on the remoting connector. Additional resources key-store key-manager trust-manager trust-store server-ssl-context SASL mechanism selector 1.10.2. Elytron integration with remoting HTTP connectors A remote HTTP connection is specified by referencing a connector in the undertow subsystem and a SASL authentication factory defined in the elytron subsystem. The HTTP connector provides the configuration for the HTTP upgrade-based remoting connector, and connects to an HTTP listener specified by the connector-ref attribute. The attributes for a http-connector are as follows: connector-ref A reference to a predefined undertow listener. sasl-authentication-factory A reference to the SASL authentication factory to use for authenticating requests to this connector. For more information on creating this factory, see Create an Elytron Authentication Factory . For example, a http-connector can be added as follows, where CONNECTOR_NAME references the undertow listener, and SASL_FACTORY_NAME is an already defined authentication factory in the elytron subsystem. 1.10.2.1. Enabling one-way SSL on the remoting HTTP connector The following SASL mechanisms support channel binding to external secure channels, such as SSL/TLS: GS2-KRB5-PLUS SCRAM-SHA-1-PLUS SCRAM-SHA-256-PLUS SCRAM-SHA-384-PLUS SCRAM-SHA-512-PLUS To use any of the above mechanisms, a custom SASL factory can be configured, or one of the predefined SASL authentication factories can be modified to offer any of these mechanisms. A SASL mechanism selector can be used on the client to specify the appropriate SASL mechanism. Prerequisites A key-store is configured. A key-manager is configured. A server-ssl-context is configured that references the defined key-manager . Procedure Check whether the https-listener is configured to use a legacy security realm for its SSL configuration. The above command shows that the https-listener is configured to use the ApplicationRealm legacy security realm for its SSL configuration. Undertow cannot reference both a legacy security realm and an ssl-context in Elytron at the same time so you must remove the reference to the legacy security realm. Note If the result is undefined , you do not need to remove the reference to the security realm in the step. Remove the reference to the legacy security realm, and update the https-listener to use the ssl-context from Elytron. Note https-listener must always have either a security-realm or ssl-context configured. When changing between the two configurations, the commands must be executed as a single batch, as shown below. Create an HTTP connector that references the HTTPS listener and the SASL authentication factory. Reload the server. Configure the client to trust the server certificate. For example, if using a browser, you need to import the trusted certificate into the browser's truststore. Additional resources key-store key-manager server-ssl-context custom SASL factory 1.10.2.2. 
Enabling two-way SSL/TLS on the remoting HTTP connectors The following SASL mechanisms support channel binding to external secure channels, such as SSL/TLS: GS2-KRB5-PLUS SCRAM-SHA-1-PLUS SCRAM-SHA-256-PLUS SCRAM-SHA-384-PLUS SCRAM-SHA-512-PLUS To use any of the above mechanisms, a custom SASL factory can be configured, or one of the predefined SASL authentication factories can be modified to offer any of these mechanisms. A SASL mechanism selector can be used on the client to specify the appropriate SASL mechanism. Prerequisites Separate key-store components for the client and server certificates are configured. A key-manager for the server key-store is configured. A trust-manager for the server trust-store is configured. A server-ssl-context that references the defined key-manager and trust-manager is configured. Procedure Check whether the https-listener is configured to use a legacy security realm for its SSL configuration. The above command shows that the https-listener is configured to use the ApplicationRealm legacy security realm for its SSL configuration. Undertow cannot reference both a legacy security realm and an ssl-context in Elytron at the same time so you must remove the reference to the legacy security realm. Note If the result is undefined , you do not need to remove the reference to the security realm in the step. Remove the reference to the legacy security realm, and update the https-listener to use the ssl-context from Elytron. Note https-listener must always have either a security-realm or ssl-context configured. When changing between the two configurations, the commands must be executed as a single batch, as shown below. Create an HTTP connector that references the HTTPS listener and the SASL authentication factory. Reload the server. Configure your client to trust the server certificate, and to present its certificate to the server. Complete the two-way SSL/TLS authentication by configuring your client to present the trusted client certificate to the server. For example, if using a browser, you need to import the trusted certificate into the browser's truststore. Two-way SSL/TLS is now enabled on the remoting HTTP connector. Important In cases where you have both a security-realm and ssl-context defined, JBoss EAP will use the SSL/TLS configuration provided by ssl-context . Additional resources key-store key-manager trust-manager trust-store server-ssl-context custom SASL factory 1.10.3. Elytron integration with remoting outbound connectors A remote outbound connection is specified by an outbound socket binding and an authentication context. The authentication context provides all of the security information that is needed for the connection. In particular, the attributes for a remote-outbound-connection are as follows: outbound-socket-binding-ref - The name of the outbound socket binding, which is used to determine the destination address and port for the connection. authentication-context - A reference to the authentication context, which contains the authentication configuration and the defined SSL context, if one exists, required for the connection. For information on defining an authentication context, see Creating an Authentication Context . For example, a remote-outbound-connection can be added as follows, where OUTBOUND_SOCKET_BINDING_NAME is an already defined outbound-socket-binding and AUTHENTICATION_CONTEXT_NAME is an authentication-context that has already been defined in the elytron subsystem configuration. 
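For illustration, assuming a connection name of my-remote-connection and the placeholder names above, the management CLI command might look like the following; adjust the names to match your environment.

/subsystem=remoting/remote-outbound-connection=my-remote-connection:add(outbound-socket-binding-ref=OUTBOUND_SOCKET_BINDING_NAME, authentication-context=AUTHENTICATION_CONTEXT_NAME)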
Additional resources Creating an Authentication Context 1.11. Additional Elytron Components for SSL/TLS The basic concepts behind configuring one-way SSL/TLS and two-way SSL/TLS are covered in the following: Enable One-way SSL/TLS for Applications Using the Elytron Subsystem Enable Two-way SSL/TLS for Applications Using the Elytron Subsystem Enable One-way SSL/TLS for the Management Interfaces Using the Elytron Subsystem Enable Two-way SSL/TLS for the Management Interfaces Using the Elytron Subsystem Elytron also offers some additional components for configuring SSL/TLS. 1.11.1. Using an ldap-key-store An ldap-key-store allows you to use a keystore stored in an LDAP server. You can use an ldap-key-store in the same way as you use a key-store . Note It is not possible to use a Jakarta Management ObjectName to decrypt the LDAP credentials. Instead, credentials can be secured by using a credential store. For information about credential stores, see Credential store in Elytron . To create and use an ldap-key-store : Configure a dir-context . To connect to the LDAP server from JBoss EAP, you need to configure a dir-context that provides the URL as well as the principal used to connect to the server. Example: dir-context Configure an ldap-key-store . When you configure an ldap-key-store , you need to specify both the dir-context used to connect to the LDAP server as well as how to locate the keystore stored in the LDAP server. At a minimum, this requires you to specify a search-path . Example: ldap-key-store Use the ldap-key-store . Once you have defined your ldap-key-store , you can use it in the same places where a key-store could be used. For example, you could use an ldap-key-store when configuring One-way SSL/TLS and Two-way SSL/TLS for applications. For the full list of attributes for ldap-key-store as well as other Elytron components, see Elytron Subsystem Components Reference . 1.11.2. Using a filtering-key-store A filtering-key-store allows you to expose a subset of aliases from an existing key-store , and use it in the same places you could use a key-store . For example, if a keystore contained alias1 , alias2 , and alias3 , but you only wanted to expose alias1 and alias3 , a filtering-key-store provides several ways to do that. To create a filtering-key-store : Configure a key-store . Configure a filtering-key-store . When you configure a filtering-key-store , you specify which key-store you want to filter and the alias-filter for filtering aliases from the key-store . The filter can be specified in one of the following formats: alias1,alias3 , which is a comma-delimited list of aliases to expose. ALL:-alias2 , which exposes all aliases in the keystore except the ones listed. NONE:+alias1:+alias3 , which exposes no aliases in the keystore except the ones listed. This example uses a comma-delimited list to expose alias1 and alias3 (a command sketch is shown at the end of this section). Note The alias-filter attribute is case sensitive. Because the use of mixed-case or uppercase aliases, such as elytronAppServer , might not be recognized by some keystore providers, it is recommended to use lowercase aliases, such as elytronappserver . Use the filtering-key-store . Once you have defined your filtering-key-store , you can use it in the same places where a key-store could be used. For example, you could use a filtering-key-store when configuring One-way SSL/TLS and Two-way SSL/TLS for applications. For the full list of attributes for filtering-key-store as well as other Elytron components, see Elytron Subsystem Components Reference .
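For reference, a minimal sketch of the two steps above, assuming a backing keystore named myKS and illustrative file names and passwords, could look like the following.

/subsystem=elytron/key-store=myKS:add(path=keystore.jks, relative-to=jboss.server.config.dir, credential-reference={clear-text=secret}, type=JKS)

/subsystem=elytron/filtering-key-store=filterKS:add(key-store=myKS, alias-filter="alias1,alias3")

The resulting filterKS exposes only alias1 and alias3 and can be referenced wherever a key-store is accepted.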
1.11.3. Reload a Keystore You can reload a keystore configured in JBoss EAP from the management CLI. This is useful in cases where you have made changes to certificates referenced by a keystore. To reload a keystore: 1.11.4. Reinitialize a Key Manager You can reinitialize a key-manager configured in JBoss EAP from the management CLI. This is useful in cases where you have made changes in certificates provided by keystore resource and you want to apply this change to new SSL connections without restarting the server. Note If the key-store is file based then it must be loaded first. To reinitialize a key-manager : 1.11.5. Reinitialize a Trust Manager You can reinitialize a trust-manager configured in JBoss EAP from the management CLI or the management console. This is useful when you have made changes to certificates provided by a keystore resource and want to apply the changes to the new SSL connections without restarting the server. Reinitializing a Trust Manager from the management CLI Note If the key-store is file based then it must be loaded first. To reinitialize a trust-manager : Reinitializing a Trust Manager from the management console Navigate to the management console and click the Runtime tab. In the Monitor column, click Security (Elytron) . In the Security column, click SSL View . On the navigation pane, click Trust Manager . Click Initialize on the top right corner of the screen to reinitialize a trust-manager . 1.11.6. Keystore Alias The alias denotes the stored secret or credential in the store. If you add a keystore to the elytron subsystem using the key-store component, you can check the keystore's contents using the alias related key-store operations. The different operations for alias manipulation are: read-alias - Read an alias from a keystore. read-aliases - Read aliases from a keystore. remove-alias - Remove an alias from a keystore. For example, to read an alias: 1.11.7. Using a client-ssl-context A client-ssl-context is used for providing an SSL context when the JBoss EAP instance creates an SSL connection as a client, such as using SSL in remoting. To create a client-ssl-context : Create key-store , key-manager , and trust-manager components as needed. If establishing a two-way SSL/TLS connection, you need to create separate key-store components for the client and server certificates, a key-manager for the client key-store , and a trust-manager for the server key-store . Alternatively, if you are doing a one-way SSL/TLS connection, you need to create a key-store for the server certificate and a trust-manager that references it. Examples on creating keystores and truststores are available in the Enable Two-way SSL/TLS for Applications using the Elytron Subsystem section. Create a client-ssl-context . Create a client-ssl-context referencing keystores, truststores, as well as any other necessary configuration options. Example: client-ssl-context Reference the client-ssl-context . For the full list of attributes for client-ssl-context as well as other Elytron components, see Elytron Subsystem Components Reference . 1.11.8. Using a server-ssl-context A server-ssl-context is used for providing a server-side SSL context. In addition to the usual configuration for an SSL context, it is possible to configure additional items such as cipher suites and protocols. The SSL context will wrap any additional items that are configured. Create key-store , key-manager , and trust-manager components as needed. 
If establishing a two-way SSL/TLS connection, you need to create separate key-store components for the client and server certificates, a key-manager for the server key-store , and a trust-manager for the server trust-store . Alternatively, if you are doing a one-way SSL/TLS connection, you need to create a key-store for the server certificate and a key-manager that references it. Examples on creating keystores and truststores are available in the Enable Two-way SSL/TLS for Applications Using the Elytron Subsystem section. Create a server-ssl-context . Create a server-ssl-context that references the key manager, trust manager, or any other desired configuration options using one of the options outlined below. Add a Server SSL Context Using the Management CLI Important You need to determine what HTTPS protocols will be supported. The example commands above use TLSv1.2 . You can use the cipher-suite-filter argument to specify which cipher suites are allowed, and the use-cipher-suites-order argument to honor server cipher suite order. The use-cipher-suites-order attribute by default is set to true . This differs from the legacy security subsystem behavior, which defaults to honoring client cipher suite order. Add a Server SSL Context Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Other Settings and click View . Click on SSL Server SSL Context and click Add to configure a new server SSL context. For the full list of attributes for server-ssl-context as well as other Elytron components, see Elytron Subsystem Components Reference . 1.11.9. Using a server-ssl-sni-context A server-ssl-sni-context is used for providing a server-side SNI matching. It provides matching rules to correlate host names to SSL contexts, along with a default in case none of the provided host names are matched. The SSL SNI contexts can be used in place of a standard server SSL context, such as when defining a context in the undertow subsystem. Create key-store , key-manager , trust-manager , and server-ssl-context components as needed. There must be a server SSL context defined to create the server-ssl-sni-context . Create a server-ssl-sni-context that provides matching information for the server-ssl-context elements. A default SSL context must be specified, using the default-ssl-context attribute, which will be used if no matching host names are found. The host-context-map accepts a comma-separated list of host names to match to the various SSL contexts. The following would be used to define a server-ssl-sni-context that defaults to the serverSSL SSL context, and matches incoming requests for www.example.com to the exampleSSL context. Note The attribute value for host matching works as a regular expression, so be sure to escape any periods (.) used to delimit the domain name. Configure server-ssl-sni-context Using the Management Console Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Configuration Subsystems Security (Elytron) Other Settings and click View . Click SSL Server SSL SNI Context to configure the required ssl-sni-context . For the complete list of attributes for Elytron components, see Elytron Subsystem Components Reference . 1.11.10. 
Custom SSL Components When configuring SSL/TLS in the elytron subsystem, you can provide and use custom implementations of the following components: key-store key-manager trust-manager client-ssl-context server-ssl-context Warning It is not recommended to provide custom implementations of any component outside of the trust-manager without an intimate knowledge of the Java Secure Socket Extension (JSSE). Important When using FIPS it is not possible to utilize a custom trust manager or key manager, as FIPS requires these managers be embedded in the JDK for security reasons. Similar behavior can be accomplished by implementing a SecurityRealm that validates X509 evidences. When creating custom implementations of Elytron components, they must present the appropriate capabilities and requirements. For more details on capabilities and requirements, see the Capabilities and Requirements section of the JBoss EAP Security Architecture guide. Implementation details for each component are provided by the JDK vendor. 1.11.10.1. Add a Custom Component to Elytron The following steps describe adding a custom component within Elytron. Add the JAR containing the provider for the custom component as a module into JBoss EAP, declaring any required dependencies, such as javax.api : Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. For more information, see the Create a Custom Module Manually and Remove a Custom Module Manually sections of the JBoss EAP Configuration Guide . Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. When the component is added to the elytron subsystem the java.util.ServiceLoader will be used to discover the provider. Alternatively, a reference to the provider can be provided by defining a provider-loader . There are two methods of creating the loader, and only one should be implemented for each component. Reference the provider directly when defining the provider-loader : Include a reference to the provider in META-INF/services/java.security.Provider . This reference is automatically created when using the @MetaInfServices annotation in org.kohsuke.metainf-services . When using this method only the module needs to be referenced by the provider-loader , as seen below: Add the custom component into Elytron's configuration, using the appropriate element for the type to be added and referencing any defined providers. For instance, to define a trust manager, the trust-manager element would be used, as seen in the following command: Example: Adding a Custom Trust Manager Once defined, the component can be referenced from other elements. Additional resources For more information see modules and dependencies . 1.11.10.2. Including Arguments in a Custom Elytron Component You can include arguments within a custom component if your class implements the initialize method, as seen below. 
void initialize(final Map<String, String> configuration); This method allows the custom class to receive a set of configuration strings when defined. These are passed in using the configuration attribute when defining the component. For instance, the following example defines an attribute named myAttribute with a value of myValue . 1.11.10.3. Using Custom Trust Managers with Elytron By implementing a custom trust manager, it is possible to extend the validation of certificates when using HTTPS in Undertow, LDAPS in a dir-context , or any place where Elytron is used for SSL connections. This component is responsible for making trust decisions for the server, and it is strongly recommended that these be implemented if a custom trust manager is used. Important When using FIPS it is not possible to utilize a custom trust manager, as FIPS requires this manager be embedded in the JDK for security reasons. Similar behavior can be accomplished by implementing a SecurityRealm that validates X509 evidences. Requirements for Implementing a Custom Trust Manager When using a custom trust manager, the following must be implemented: A trust manager that implements the X509ExtendedTrustManager interface. A trust manager factory that extends TrustManagerFactorySpi . The provider of the trust manager factory. The provider must be included in the JAR file to be added into JBoss EAP. Any implemented classes must be included in JBoss EAP as a module. Classes are not required to be in one module, and can be loaded from module dependencies. Example Implementations The following example demonstrates a provider that registers the custom trust manager factory as a service. Example: Provider import org.kohsuke.MetaInfServices; import javax.net.ssl.TrustManagerFactory; import java.security.Provider; import java.util.Collections; import java.util.List; import java.util.Map; @MetaInfServices(Provider.class) public class CustomProvider extends Provider { public CustomProvider() { super("CustomProvider", 1.0, "Demo provider"); System.out.println("CustomProvider initialization."); final List<String> emptyList = Collections.emptyList(); final Map<String, String> emptyMap = Collections.emptyMap(); putService(new Service(this, TrustManagerFactory.class.getSimpleName(),"CustomAlgorithm", CustomTrustManagerFactorySpi.class.getName(), emptyList, emptyMap)); } } The following example demonstrates a custom trust manager. This trust manager contains overloaded methods on checking if a client or server is trusted. 
Example: TrustManager

import javax.net.ssl.SSLEngine;
import javax.net.ssl.X509ExtendedTrustManager;
import java.net.Socket;
import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

public class CustomTrustManager extends X509ExtendedTrustManager {

    public void checkClientTrusted(X509Certificate[] x509Certificates, String s, Socket socket) throws CertificateException {
        // Insert your code here
    }

    public void checkServerTrusted(X509Certificate[] x509Certificates, String s, Socket socket) throws CertificateException {
        // Insert your code here
    }

    public void checkClientTrusted(X509Certificate[] x509Certificates, String s, SSLEngine sslEngine) throws CertificateException {
        // Insert your code here
    }

    public void checkServerTrusted(X509Certificate[] x509Certificates, String s, SSLEngine sslEngine) throws CertificateException {
        // Insert your code here
    }

    public void checkClientTrusted(X509Certificate[] x509Certificates, String s) throws CertificateException {
        // Insert your code here
    }

    public void checkServerTrusted(X509Certificate[] x509Certificates, String s) throws CertificateException {
        // Insert your code here
    }

    public X509Certificate[] getAcceptedIssuers() {
        // Insert your code here
        return new X509Certificate[0]; // Placeholder return so the skeleton compiles; replace with your accepted issuers
    }
}

The following example is a factory used to return instances of the trust manager.

Example: TrustManagerFactorySpi

import javax.net.ssl.ManagerFactoryParameters;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactorySpi;
import java.security.InvalidAlgorithmParameterException;
import java.security.KeyStore;
import java.security.KeyStoreException;

public class CustomTrustManagerFactorySpi extends TrustManagerFactorySpi {

    protected void engineInit(KeyStore keyStore) throws KeyStoreException {
        // Insert your code here
    }

    protected void engineInit(ManagerFactoryParameters managerFactoryParameters) throws InvalidAlgorithmParameterException {
        // Insert your code here
    }

    protected CustomTrustManager[] engineGetTrustManagers() {
        // Insert your code here
        return new CustomTrustManager[] { new CustomTrustManager() }; // Placeholder return so the skeleton compiles
    }
}

Adding the Custom Trust Manager Once the provider and trust manager have been created, add them to the elytron subsystem by using the steps outlined in Add a Custom Component to Elytron . 1.11.11. Default SSLContext Many libraries used within deployments might require SSL configuration for connections they establish. These libraries tend to be configurable by the caller. If no configuration is provided, they use the default SSLContext for the process. The default SSLContext is available using the following method call: javax.net.ssl.SSLContext.getDefault(); By default this SSLContext is configured using system properties. However, within the elytron subsystem, it is possible to specify which one of the configured contexts should be associated and used as the default. To make use of this feature, configure your SSLContext as normal. The following command can then be used to specify which SSLContext should be used as the default. As existing services and deployments could have cached the default SSLContext prior to this being set, a reload is required to ensure that the default gets set before the deployments are activated. If the default-ssl-context attribute is subsequently undefined , the standard APIs do not provide any mechanism to revert the default. In this situation, the Java process would need to be restarted. 1.11.12.
Using a Certificate Revocation List If you want to validate a certificate against a certificate revocation list (CRL), you can configure this using the certificate-revocation-list attribute for a trust manager in the elytron subsystem. For example: For more information on the available attributes for a trust manager, see the trust-manager attributes table. Note Your truststore must contain the certificate chain in order to check the validity of both the certificate revocation list and the certificate. The truststore should not contain end-entity certificates, just certificate authority and intermediate certificates. You can instruct the trust manager to reload the certificate revocation list by using the reload-certificate-revocation-list operation. 1.11.13. Using a Certificate Authority to Manage Signed Certificates You can obtain and manage signed certificates using the JBoss EAP management CLI and the management console. This allows you to create a signed certificate directly from the CLI or the console and then import it into the required keystore. Note Many of the commands in this section have an optional staging parameter that indicates whether the certificate authority's staging URL should be used. This value defaults to false , and is designed to assist in testing purposes. This parameter should never be enabled in a production environment. Configure a Let's Encrypt Account As of JBoss EAP 7.4, Let's Encrypt is the only supported certificate authority. To manage signed certificates, an account must be created with the certificate authority, and the following information provided: A keystore to contain the alias of the certificate authority account key. The alias of the certificate authority. If the provided alias does not exist in the given keystore, then one will be created and stored as a private key entry. An optional list of URLs, such as email addresses, that the certificate authority can contact in the event of any issues. Create an Account with the Certificate Authority Once an account has been configured, it may be created with the certificate authority by agreeing to their terms of service. Update an Account with the Certificate Authority The certificate authority account options can be updated using the update-account command. Change the Account Key Associated with the Certificate Authority The key associated with the certificate authority account can be changed by using the change-account-key command. Deactivate the Account with the Certificate Authority If the account is no longer desired, then it may be deactivated by using the deactivate-account command. Get the Metadata Associated with the Certificate Authority The metadata for the account can be queried with the get-metadata command. This provides the following information: A URL to the terms of service. A URL to the certificate authority website. A list of the certificate authority accounts. Whether or not an external account is required. Configure a Let's Encrypt Account Using Management Console To configure a Let's Encrypt account using the management console: Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Runtime Host Security (Elytron) SSL and click View . Click Certificate Auth... to open the Certificate Authority Account page. You can perform the following configurations for the selected alias by clicking the buttons with the labels: Create Create an account with a certificate authority.
Deactivate Deactivate the selected certificate authority account. Update Update the selected account with the certificate authority. Get Metadata View the following information about the certificate authority account: Associated alias Certificate authority name Contact details Keystore name Certificate authority details Change Account Key Change the associated key with the certificate authority. 1.11.14. Keystore Manipulation Operations You can perform various keystore manipulation operations on an Elytron key-store resource using the management CLI and the management console. Keystore Manipulation Operations Using the Management CLI Using the management CLI, you can perform the following keystore manipulation operations: Generate a key pair. The generate-key-pair command generates a key pair and wraps the resulting public key in a self-signed X.509 certificate. The generated private key and self-signed certificate will be added to the keystore. Generate a certificate signing request. The generate-certificate-signing-request command generates a PKCS #10 certificate signing request using a PrivateKeyEntry from the keystore. The generated certificate signing request will be written to a file. Import a certificate or certificate chain. The import-certificate command imports a certificate or certificate chain from a file into an entry in the keystore. Export a certificate. The export-certificate command exports a certificate from an entry in the keystore to a file. Change an alias. The change-alias command moves an existing keystore entry to a new alias. Store changes made to keystores. The store command persists any changes that have been made to the file that backs the keystore. Keystore Manipulation Operations Using the Management Console To perform the operations using the management console: Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Runtime Security (Elytron) Stores and click View . Click Key Store to open the keystore definitions page. Click the required keystore name. You can perform the following operations for the selected keystore by clicking the buttons with the labels: Load Load or reload the keystore. Store Persist changes made to the file backing the keystore. Generate Key Pair Generate a key pair, wrap the public key in a self-signed X.509 certificate, and add the private key and the certificate to the keystore. Import Certificate Import a certificate chain to the keystore from a file. Obtain Obtain a signed certificate from a Certificate Authority and store it in the keystore. 1.11.14.1. Keystore Certificate Authority Operations You can perform the following operations on the keystore after you Configure a Let's Encrypt Account . Note Many of the commands in this section have an optional staging parameter that indicates whether the certificate authority's staging URL should be used. This value defaults to false , and is designed to assist in testing purposes. This parameter should never be enabled in a production environment. Keystore Certificate Authority Operations Using the Management CLI Using the management CLI, you can perform the following keystore certificate authority operations: Obtain a Signed Certificate. Once a certificate authority account has been defined for the keystore, you can use the obtain-certificate command to obtain a signed certificate and store it in the keystore. 
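For example, assuming a keystore named httpsKS, an alias of example, and a certificate authority account named myCAA (all illustrative names), the request might look like the following.

/subsystem=elytron/key-store=httpsKS:obtain-certificate(alias=example, domain-names=[example.com], certificate-authority-account=myCAA, agree-to-terms-of-service=true)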
If an account with the certificate authority does not exist, then it will be automatically created. Revoke a signed certificate. The revoke-certificate command revokes a certificate that was issued by the certificate authority. Check if a signed certificate is due for renewal. The should-renew-certificate command determines if a certificate is due for renewal. The command returns true if the certificate expires in less than the given number of days, and false otherwise. The following command determines if the certificate expires in the next 7 days. Keystore Certificate Authority Operations Using the Management Console To perform the operations using the management console: Access the management console. For more information, see the Management Console section in the JBoss EAP Configuration Guide . Navigate to Runtime Security (Elytron) Stores and click View . Click Key Store to open the keystore definitions page. Click Aliases next to the required keystore name. Click the required alias name. You can perform the following operations for the selected alias by clicking on the buttons with the labels: Change Alias Change the alias for the entry. Export Certificate Export a certificate from a keystore entry to a file. Generate CSR Generate a certificate signing request. Remove Alias Remove the selected alias from the keystore. Details View the details of the certificate associated with the alias. Revoke Revoke the certificate associated with the alias. Verify Renew Determine if the associated certificate is due for renewal. 1.11.15. Configuring Evidence Decoder for X.509 Certificate with Subject Alternative Name Extension By default, the principal associated with an X.509 certificate in Elytron is the subject name in the certificate and the principal associated with an X.509 certificate chain is the subject name in the first certificate in a certificate chain. You can configure an x509-subject-alt-name-evidence-decoder to use the subject alternative name extension in an X.509 certificate as the principal. The subject alternative name extension specification for an X.509 certificate and an X.509 certificate chain is defined in RFC 5280 . Prerequisites You know the expected format of a client certificate, or you have a client certificate available locally. Procedure Identify which subject alternative name extension to use. If you have the client certificate locally, the subject alternative name extension can be viewed using the keytool command: The subject alternative name extension is listed as: Create an x509-subject-alt-name-evidence-decoder to use the identified subject alternative name: To use the evidence decoder, reference it in a security-domain: Additional resources x509-subject-alternative-name-evidence-decoder Attributes 1.11.16. Configuring an Aggregate Evidence Decoder You can configure an aggregate evidence decoder to combine two or more evidence decoders. The evidence decoders are applied in the configured order until an evidence decoder returns a non-null principal or until there are no more evidence decoders left to try. Prerequisites The evidence decoders to be aggregated are configured. For information about configuring an evidence decoder, see Configuring Evidence Decoder for X.509 Certificate with Subject Alternative Name Extension . Procedure Create an aggregate evidence decoder from the existing evidence decoders: To use the evidence decoder, reference it in a security domain: 1.11.17.
Configuring X.500 Subject Evidence Decoder Configure an x500-subject-evidence-decoder to extract the subject from the first certificate in a certificate chain. Procedure Create an X.500 subject evidence decoder: 1.11.18. Using Custom Evidence Decoder Implementation You can use a custom org.wildfly.security.auth.server.EvidenceDecoder implementation in Elytron by adding it as a module to JBoss EAP. Procedure Package the custom implementation class as a Java Archive (JAR). Add a module to JBoss EAP containing the JAR. For information about adding modules to JBoss EAP, see the Create a Custom Module section in the Configuration Guide . Add the custom evidence decoder to Elytron:
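The exact command depends on your module and class names. Assuming the custom-evidence-decoder resource type and placeholder module and class names, a sketch is:

/subsystem=elytron/custom-evidence-decoder=myCustomEvidenceDecoder:add(module=com.example.evidencedecoder, class-name=com.example.CustomEvidenceDecoder)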
[ "<interfaces> <interface name=\"management\"> <inet-address value=\"USD{jboss.bind.address.management:127.0.0.1}\"/> </interface> <interface name=\"public\"> <inet-address value=\"USD{jboss.bind.address:127.0.0.1}\"/> </interface> </interfaces>", "<socket-binding-group name=\"standard-sockets\" default-interface=\"public\" port-offset=\"USD{jboss.socket.binding.port-offset:0}\"> <socket-binding name=\"management-http\" interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> <socket-binding name=\"management-https\" interface=\"management\" port=\"USD{jboss.management.https.port:9993}\"/> <socket-binding name=\"ajp\" port=\"USD{jboss.ajp.port:8009}\"/> <socket-binding name=\"http\" port=\"USD{jboss.http.port:8080}\"/> <socket-binding name=\"https\" port=\"USD{jboss.https.port:8443}\"/> <socket-binding name=\"txn-recovery-environment\" port=\"4712\"/> <socket-binding name=\"txn-status-manager\" port=\"4713\"/> <outbound-socket-binding name=\"mail-smtp\"> <remote-destination host=\"localhost\" port=\"25\"/> </outbound-socket-binding> </socket-binding-group>", "EAP_HOME /bin/jboss-cli.sh --file=EAP_HOME/docs/examples/enable-elytron.cli", "/subsystem=elytron/security-domain=domainName:add(realms=[{realm=realmName,role-decoder=roleDecoderName}],default-realm=realmName,permission-mapper=permissionMapperName,role-mapper=roleMapperName,...)", "/subsystem=elytron/type-of-realm=realmName:add(....)", "/subsystem=elytron/ROLE-DECODER-TYPE=roleDeoderName:add(....)", "/subsystem=elytron/source-address-role-decoder=decoder1:add(source-address=\"10.10.10.10\", roles=[\"Administrator\"])", "/subsystem=elytron/security-domain=domainName:add(role-decoder=decoder1,default-realm=realmName,realms=[{realm=realmName}])", "/subsystem=elytron/aggregate-role-decoder=aggregateDecoder:add(role-decoders=[decoder1, decoder2])", "/subsystem=elytron/ROLE-MAPPER-TYPE=roleMapperName:add(...)", "/subsystem=elytron/permission-set=PermissionSetName:add(permissions=[{class-name=\"...\", module=\"...\", target-name=\"...\", action=\"...\"}...])", "/subsystem=elytron/simple-permission-mapper=PermissionMapperName:add(...)", "/subsystem=elytron/authentication-configuration= AUTHENTICATION_CONFIGURATION_NAME :add(authentication-name= AUTHENTICATION_NAME , credential-reference={clear-text= PASSWORD })", "/subsystem=elytron/authentication-context= AUTHENTICATION_CONTEXT_NAME :add()", "/subsystem=elytron/authentication-context= AUTHENTICATION_CONTEXT_NAME :add(match-rules=[{authentication-configuration= AUTHENTICATION_CONFIGURATION_NAME , match-host=localhost}])", "/subsystem=elytron/AUTH-FACTORY-TYPE=authFactoryName:add(....)", "keytool -genkeypair -alias localhost -keyalg RSA -keysize 1024 -validity 365 -keystore keystore.jks -dname \"CN=localhost\" -keypass secret -storepass secret", "/subsystem=elytron/key-store=newKeyStore:add(path=keystore.jks,relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},type=JKS)", "/subsystem=elytron/key-manager=newKeyManager:add(key-store= KEY_STORE ,credential-reference={clear-text=secret})", "/subsystem=elytron/key-store=default-trust-store:add(type=JKS, relative-to=jboss.server.config.dir, path=application.truststore, credential-reference={clear-text=password})", "/subsystem=elytron/trust-manager=default-trust-manager:add(key-store=TRUST-STORE-NAME)", "/extension=org.wildfly.extension.elytron:add()", "/subsystem=elytron:add reload", "/subsystem=elytron:remove reload", "/subsystem=security:remove", "/subsystem=security:add", "<security-realms> <security-realm 
name=\"ManagementRealm\"> <authentication> <local default-user=\"USDlocal\" skip-group-loading=\"true\"/> <properties path=\"mgmt-users.properties\" relative-to=\"jboss.server.config.dir\"/> </authentication> <authorization map-groups-to-roles=\"false\"> <properties path=\"mgmt-groups.properties\" relative-to=\"jboss.server.config.dir\"/> </authorization> </security-realm> <security-realm name=\"ApplicationRealm\"> <authentication> <local default-user=\"USDlocal\" allowed-users=\"*\" skip-group-loading=\"true\"/> <properties path=\"application-users.properties\" relative-to=\"jboss.server.config.dir\"/> </authentication> <authorization> <properties path=\"application-roles.properties\" relative-to=\"jboss.server.config.dir\"/> </authorization> </security-realm> </security-realms>", "/core-service=management/security-realm= <new_realm_name> :add()", "[standalone@localhost:9990 /] /core-service=management:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { \"access\" => {...}, \"ldap-connection\" => undefined, \"management-interface\" => {\"http-interface\" => { \"allowed-origins\" => undefined, \"console-enabled\" => true, \"http-authentication-factory\" => \"management-http-authentication\", \"http-upgrade\" => { \"enabled\" => true, \"sasl-authentication-factory\" => \"management-sasl-authentication\" }, \"http-upgrade-enabled\" => true, \"sasl-protocol\" => \"remote\", \"secure-socket-binding\" => undefined, \"security-realm\" => undefined, \"server-name\" => undefined, \"socket-binding\" => \"management-http\", \"ssl-context\" => undefined }}, \"security-realm\" => {...}, \"service\" => undefined } }", "/core-service=management/management-interface=http-interface/:write-attribute(name=console-enabled,value=false)", "/subsystem=jmx/remoting-connector=jmx/:remove", "[standalone@localhost:9990 /] /subsystem=elytron/sasl-authentication-factory=managenet-sasl-authentication:read-resource { \"outcome\" => \"success\", \"result\" => { \"mechanism-configurations\" => [ { \"mechanism-name\" => \"JBOSS-LOCAL-USER\", \"realm-mapper\" => \"local\" }, { \"mechanism-name\" => \"DIGEST-MD5\", \"mechanism-realm-configurations\" => [{\"realm-name\" => \"ManagementRealm\"}] } ], \"sasl-server-factory\" => \"configured\", \"security-domain\" => \"ManagementDomain\" } } [standalone@localhost:9990 /] /subsystem=elytron/sasl-authentication-factory=managenet-sasl-authentication:list-remove(name=mechanism-configurations, index=0) [standalone@localhost:9990 /] reload", "/core-service=management/security-realm= <realm_name> /authentication=local:remove", "security enable-ssl-management --interactive Please provide required pieces of information to enable SSL: Key-store file name (default management.keystore): keystore.jks Password (blank generated): secret What is your first and last name? [Unknown]: localhost What is the name of your organizational unit? [Unknown]: What is the name of your organization? [Unknown]: What is the name of your City or Locality? [Unknown]: What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit? [Unknown]: Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct y/n [y]? 
Validity (in days, blank default): 365 Alias (blank generated): localhost Enable SSL Mutual Authentication y/n (blank n): n SSL options: key store file: keystore.jks distinguished name: CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown password: secret validity: 365 alias: localhost Server keystore file keystore.jks, certificate file keystore.pem and keystore.csr file will be generated in server configuration directory. Do you confirm y/n :y", "security disable-ssl-management", "/subsystem=elytron/key-store=httpsKS:add(path=keystore.jks,relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},type=JKS)", "/subsystem=elytron/key-store=httpsKS:generate-key-pair(alias=localhost,algorithm=RSA,key-size=1024,validity=365,credential-reference={clear-text=secret},distinguished-name=\"CN=localhost\") /subsystem=elytron/key-store=httpsKS:store()", "/subsystem=elytron/key-manager=httpsKM:add(key-store=httpsKS,credential-reference={clear-text=secret}) /subsystem=elytron/server-ssl-context=httpsSSC:add(key-manager=httpsKM,protocols=[\"TLSv1.2\"])", "/core-service=management/management-interface=http-interface:write-attribute(name=ssl-context, value=httpsSSC) /core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)", "reload", "keytool -genkeypair -alias client -keyalg RSA -keysize 1024 -validity 365 -keystore client.keystore.jks -dname \"CN=client\" -keypass secret -storepass secret", "keytool -exportcert -keystore client.keystore.jks -alias client -keypass secret -storepass secret -file /path/to/client.cer", "security enable-ssl-management --interactive Please provide required pieces of information to enable SSL: Key-store file name (default management.keystore): server.keystore.jks Password (blank generated): secret What is your first and last name? [Unknown]: localhost What is the name of your organizational unit? [Unknown]: What is the name of your organization? [Unknown]: What is the name of your City or Locality? [Unknown]: What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit? [Unknown]: Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct y/n [y]? Validity (in days, blank default): 365 Alias (blank generated): localhost Enable SSL Mutual Authentication y/n (blank n): y Client certificate (path to pem file): /path/to/client.cer Validate certificate y/n (blank y): n Trust-store file name (management.truststore): server.truststore.jks Password (blank generated): secret SSL options: key store file: server.keystore.jks distinguished name: CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown password: secret validity: 365 alias: localhost client certificate: /path/to/client.cer trust store file: server.trustore.jks trust store password: secret Server keystore file server.keystore.jks, certificate file server.pem and server.csr file will be generated in server configuration directory. Server truststore file server.trustore.jks will be generated in server configuration directory. 
Do you confirm y/n: y", "security disable-ssl-management", "/subsystem=elytron/key-store=twoWayKS:add(path=server.keystore.jks,relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},type=JKS) /subsystem=elytron/key-store=twoWayKS:generate-key-pair(alias=localhost,algorithm=RSA,key-size=1024,validity=365,credential-reference={clear-text=secret},distinguished-name=\"CN=localhost\") /subsystem=elytron/key-store=twoWayKS:store()", "/subsystem=elytron/key-store=twoWayKS:export-certificate(alias=localhost,path=/path/to/server.cer,pem=true)", "/subsystem=elytron/key-store=twoWayTS:add(path=server.truststore.jks,relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},type=JKS) /subsystem=elytron/key-store=twoWayTS:import-certificate(alias=client,path=/path/to/client.cer,credential-reference={clear-text=secret},trust-cacerts=true,validate=false) /subsystem=elytron/key-store=twoWayTS:store()", "/subsystem=elytron/key-manager=twoWayKM:add(key-store=twoWayKS,credential-reference={clear-text=secret}) /subsystem=elytron/trust-manager=twoWayTM:add(key-store=twoWayTS) /subsystem=elytron/server-ssl-context=twoWaySSC:add(key-manager=twoWayKM,protocols=[\"TLSv1.2\"],trust-manager=twoWayTM,want-client-auth=true,need-client-auth=true)", "/core-service=management/management-interface=http-interface:write-attribute(name=ssl-context, value=twoWaySSC) /core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)", "reload", "security enable-sasl-management Server reloaded. Command success. Authentication configured for management http-interface sasl authentication-factory=management-sasl-authentication security-domain=ManagementDomain", "security disable-sasl-management", "security disable-sasl-management --mechanism= MECHANISM", "security reorder-sasl-management --mechanisms-order= MECHANISM1 , MECHANISM2 ,", "security enable-http-auth-management Server reloaded. Command success. 
Authentication configured for management http-interface http authentication-factory=management-http-authentication security-domain=ManagementDomain", "security disable-http-auth-management", "security disable-http-auth-management --mechanism= MECHANISM", "keytool -genkeypair -alias appserver -storetype jks -keyalg RSA -keysize 2048 -keypass password1 -keystore EAP_HOME /standalone/configuration/identity.jks -storepass password1 -dname \"CN=appserver,OU=Sales,O=Systems Inc,L=Raleigh,ST=NC,C=US\" -validity 730 -v", "/core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https) /core-service=management/management-interface=http-interface:undefine-attribute(name=socket-binding)", "/host=master/core-service=management/management-interface=http-interface:write-attribute(name=secure-port,value=9993) /host=master/core-service=management/management-interface=http-interface:undefine-attribute(name=port)", "/socket-binding-group=standard-sockets/socket-binding=management-https:read-resource(recursive=true) { \"outcome\" => \"success\", \"result\" => { \"client-mappings\" => undefined, \"fixed-port\" => false, \"interface\" => \"management\", \"multicast-address\" => undefined, \"multicast-port\" => undefined, \"name\" => \"management-https\", \"port\" => expression \"USD{jboss.management.https.port:9993}\" } }", "touch EAP_HOME /standalone/configuration/https-mgmt-users.properties", "/core-service=management/security-realm=ManagementRealmHTTPS:add /core-service=management/security-realm=ManagementRealmHTTPS/authentication=properties:add(path=https-mgmt-users.properties,relative-to=jboss.server.config.dir)", "EAP_HOME /bin/add-user.sh -up EAP_HOME /standalone/configuration/https-mgmt-users.properties -r ManagementRealmHTTPS Enter the details of the new user to add. Using realm 'ManagementRealmHTTPS' as specified on the command line. Username : httpUser Password requirements are listed below. To modify these restrictions edit the add-user.properties configuration file. - The password must not be one of the following restricted values {root, admin, administrator} - The password must contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s) - The password must be different from the username Password : Re-enter Password : About to add user 'httpUser' for realm 'ManagementRealmHTTPS' Is this correct yes/no? yes .. Added user 'httpUser' to file 'EAP_HOME/configuration/https-mgmt-users.properties' Is this new user going to be used for one AS process to connect to another AS process? e.g. for a slave host controller connecting to the master or for a Remoting connection for server to server EJB calls. yes/no? 
no", "/core-service=management/management-interface=http-interface:write-attribute(name=security-realm,value=ManagementRealmHTTPS)", "/core-service=management/security-realm=ManagementRealmHTTPS/server-identity=ssl:add(keystore-path=identity.jks,keystore-relative-to=jboss.server.config.dir,keystore-password=password1, alias=appserver)", "/core-service=management/security-realm=ManagementRealmHTTPS/server-identity=ssl:write-attribute(name=keystore-password,value=newpassword)", "reload", "13:50:54,160 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0061: Http management interface listening on https://127.0.0.1:9993/management 13:50:54,162 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0052: Admin console listening on https://127.0.0.1:9993", "<jboss-cli xmlns=\"urn:jboss:cli:2.0\"> <default-protocol use-legacy-override=\"true\">https-remoting</default-protocol> <!-- The default controller to connect to when 'connect' command is executed w/o arguments --> <default-controller> <protocol>https-remoting</protocol> <host>localhost</host> <port>9993</port> </default-controller>", "./jboss-cli.sh -c Unable to connect due to unrecognised server certificate Subject - CN=appserver,OU=Sales,O=Systems Inc,L=Raleigh,ST=NC,C=US Issuer - CN=appserver, OU=Sales, O=Systems Inc, L=Raleigh, ST=NC, C=US Valid From - Tue Jun 28 13:38:48 CDT 2016 Valid To - Thu Jun 28 13:38:48 CDT 2018 MD5 : 76:f4:81:8b:7e:c3:be:6d:ee:63:c1:7a:b7:b8:f0:fb SHA1 : ea:e3:f1:eb:53:90:69:d0:c9:69:4a:5a:a3:20:8f:76:c1:e6:66:b6 Accept certificate? [N]o, [T]emporarily, [P]ermenantly : p Authenticating against security realm: ManagementRealmHTTPS Username: httpUser Password: [standalone@localhost:9993 /]", "keytool -genkeypair -alias HOST1_alias -keyalg RSA -keysize 1024 -validity 365 -keystore HOST1.keystore.jks -dname \"CA_HOST1\" -keypass secret -storepass secret keytool -genkeypair -alias HOST2_alias -keyalg RSA -keysize 1024 -validity 365 -keystore HOST2.keystore.jks -dname \"CA_HOST2\" -keypass secret -storepass secret", "keytool -exportcert -keystore HOST1.keystore.jks -alias HOST1_alias -keypass secret -storepass secret -file HOST1.cer keytool -exportcert -keystore HOST2.keystore.jks -alias HOST2_alias -keypass secret -storepass secret -file HOST2.cer", "keytool -importcert -keystore HOST1.truststore.jks -storepass secret -alias HOST2_alias -trustcacerts -file HOST2.cer keytool -importcert -keystore HOST2.truststore.jks -storepass secret -alias HOST1_alias -trustcacerts -file HOST1.cer", "/core-service=management/security-realm=CertificateRealm:add() /core-service=management/security-realm=CertificateRealm/server-identity=ssl:add(keystore-path=/path/to/HOST1.keystore.jks, keystore-password=secret,alias=HOST1_alias) /core-service=management/security-realm=CertificateRealm/authentication=truststore:add(keystore-path=/path/to/HOST1.truststore.jks,keystore-password=secret)", "/core-service=management/management-interface=http-interface:write-attribute(name=security-realm,value=CertificateRealm)", "<ssl> <alias>HOST2_alias</alias> <key-store>/path/to/HOST2.keystore.jks</key-store> <key-store-password>secret</key-store-password> <trust-store>/path/to/HOST2.truststore.jks</trust-store> <trust-store-password>secret</trust-store-password> <modify-trust-store>true</modify-trust-store> </ssl>", "<ssl> <vault> <vault-option name=\"KEYSTORE_URL\" value=\"path-to/vault/vault.keystore\"/> <vault-option name=\"KEYSTORE_PASSWORD\" value=\"MASK-5WNXs8oEbrs\"/> <vault-option name=\"KEYSTORE_ALIAS\" value=\"vault\"/> <vault-option 
name=\"SALT\" value=\"12345678\"/> <vault-option name=\"ITERATION_COUNT\" value=\"50\"/> <vault-option name=\"ENC_FILE_DIR\" value=\"EAP_HOME/vault/\"/> </vault> <alias>HOST2_alias</alias> <key-store>/path/to/HOST2.keystore.jks</key-store> <key-store-password>VAULT::VB::cli_pass::1</key-store-password> <key-password>VAULT::VB::cli_pass::1</key-password> <trust-store>/path/to/HOST2.truststore.jks</trust-store> <trust-store-password>VAULT::VB::cli_pass::1</trust-store-password> <modify-trust-store>true</modify-trust-store> </ssl>", "/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=enabled-cipher-suites,value=\"TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA\")", "/subsystem=elytron:undefine-attribute(name=final-providers) reload", "/subsystem=elytron:write-attribute(name=initial-providers, value=combined-providers)", "/subsystem=elytron/server-ssl-context=serverSSC:write-attribute(name=providers,value=openssl) reload", "/subsystem=elytron/server-ssl-context=serverSSC:write-attribute(name=protocols,value=[TLSv1.3])", "/subsystem=elytron/server-ssl-context=serverSSC:write-attribute(name=cipher-suite-names,value=TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256)", "reload", "curl -v https:// <ip_address> : <ssl_port>", "SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use h2 * Server certificate: * subject: C=Unknown; ST=Unknown; L=Unknown; O=Unknown; OU=Unknown; CN=localhost * start date: Oct 6 14:58:16 2020 GMT * expire date: Nov 5 15:58:16 2020 GMT * issuer: C=Unknown; ST=Unknown; L=Unknown; O=Unknown; OU=Unknown; CN=localhost * SSL certificate verify result: self signed certificate (18), continuing anyway.", "mkdir -p /usr/share/jboss-as/nssdb chown jboss /usr/share/jboss-as/nssdb modutil -create -dbdir /usr/share/jboss-as/nssdb", "name = nss-fips nssLibraryDirectory=/usr/lib64 nssSecmodDirectory=/usr/share/jboss-as/nssdb nssDbMode = readOnly nssModule = fips", "security.provider.1=sun.security.pkcs11.SunPKCS11 /usr/share/jboss-as/nss_pkcsll_fips.cfg", "security.provider.5=com.sun.net.ssl.internal.ssl.Provider", "security.provider.5=com.sun.net.ssl.internal.ssl.Provider SunPKCS11-nss-fips", "modutil -fips true -dbdir /usr/share/jboss-as/nssdb", "modutil -changepw \"NSS FIPS 140-2 Certificate DB\" -dbdir /usr/share/jboss-as/nssdb", "certutil -S -k rsa -n undertow -t \"u,u,u\" -x -s \"CN=localhost, OU=MYOU, O=MYORG, L=MYCITY, ST=MYSTATE, C=MY\" -d /usr/share/jboss-as/nssdb", "keytool -list -storetype pkcs11", "10:16:13,993 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-1) MSC000001: Failed to start service jboss.server.controller.management.security_realm.ApplicationRealm.key-manager: org.jboss.msc.service.StartException in service jboss.server.controller.management.security_realm.ApplicationRealm.key-manager: WFLYDM0018: Unable to start service at org.jboss.as.domain.management.security.AbstractKeyManagerService.start(AbstractKeyManagerService.java:85) at org.jboss.msc.service.ServiceControllerImplUSDStartTask.startService(ServiceControllerImpl.java:1963) at org.jboss.msc.service.ServiceControllerImplUSDStartTask.run(ServiceControllerImpl.java:1896) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutorUSDWorker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.security.KeyStoreException: FIPS mode: KeyStore must be from provider SunPKCS11-nss-fips at 
sun.security.ssl.KeyManagerFactoryImplUSDSunX509.engineInit(KeyManagerFactoryImpl.java:67) at javax.net.ssl.KeyManagerFactory.init(KeyManagerFactory.java:256) at org.jboss.as.domain.management.security.AbstractKeyManagerService.createKeyManagers(AbstractKeyManagerService.java:130) at org.jboss.as.domain.management.security.AbstractKeyManagerService.start(AbstractKeyManagerService.java:83) ... 5 more", "JAVA_OPTS=\"USDJAVA_OPTS -Djavax.net.ssl.trustStore=NONE -Djavax.net.ssl.trustStoreType=PKCS11\" JAVA_OPTS=\"USDJAVA_OPTS -Djavax.net.ssl.keyStore=NONE -Djavax.net.ssl.keyStoreType=PKCS11 -Djavax.net.ssl.keyStorePassword=P@ssword123\"", "<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"truststore\" type=\"PKCS11\"> <key-store-clear-password password=\"P@ssword123\"/> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"client-cli-context\"> <trust-store key-store-name=\"truststore\"/> <cipher-suite selector=\"USD{cipher.suite.filter}\"/> <protocol names=\"TLSv1.1\"/> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"client-cli-context\"/> </ssl-context-rules> </authentication-client> </configuration>", "jboss-cli.sh -Dwildfly.config.url=cli-wildfly-config.xml", "/subsystem=elytron/key-store=fipsKS:add(type=PKCS11,provider-name=\"SunPKCS11-nss-fips\",credential-reference={clear-text=\"P@ssword123\"}) /subsystem=elytron/key-manager=fipsKM:add(key-store=fipsKS,algorithm=\"SunX509\",provider-name=SunPKCS11-nss-fips,credential-reference={clear-text=\"P@ssword123\"}) /subsystem=elytron/server-ssl-context=fipsSSC:add(key-manager=fipsKM,protocols=[\"TLSv1.1\"])", "batch /subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context,value=fipsSSC) run-batch reload", "batch /core-service=management/security-realm=HTTPSRealm:add /core-service=management/security-realm=HTTPSRealm/server-identity=ssl:add(keystore-provider=PKCS11, keystore-password=\"strongP@ssword1\") /subsystem=undertow/server=default-server/https-listener=https:add(socket-binding=https, security-realm=HTTPSRealm, enabled-protocols=\"TLSv1.1\") run-batch", "SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_anon_WITH_AES_128_CBC_SHA, TLS_ECDH_anon_WITH_AES_256_CBC_SHA", 
"/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=enabled-cipher-suites,value=\"SSL_RSA_WITH_3DES_EDE_CBC_SHA,SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_DSS_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,TLS_ECDH_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA,TLS_ECDH_anon_WITH_AES_128_CBC_SHA,TLS_ECDH_anon_WITH_AES_256_CBC_SHA\")", "/core-service=management/security-realm=HTTPSRealm/server-identity=ssl:write-attribute(name=enabled-cipher-suites, value=[SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_RSA_WITH_AES_128_CBC_SHA, TLS_ECDH_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDH_anon_WITH_3DES_EDE_CBC_SHA, TLS_ECDH_anon_WITH_AES_128_CBC_SHA, TLS_ECDH_anon_WITH_AES_256_CBC_SHA])", "keytool -genkeypair -alias ALIAS -keyalg RSA -keysize 2048 -keypass PASSWORD -keystore KEYSTORE -storetype BCFKS -storepass STORE_PASSWORD", "<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"truststore\" type=\"BCFKS\"> <file name=\"USD{truststore.location}\" /> <key-store-clear-password password=\"USD{password}\" /> </key-store> <key-store name=\"keystore\" type=\"BCFKS\"> <file name=\"USD{keystore.location}\" /> <key-store-clear-password password=\"USD{password}\" /> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"client-cli-context\"> <key-store-ssl-certificate algorithm=\"PKIX\" key-store-name=\"keystore\"> <key-store-clear-password password=\"USD{password\"} /> </key-store-ssl-certificate> <trust-store key-store-name=\"truststore\"/> <trust-manager algorithm=\"PKIX\"> </trust-manager> <cipher-suite selector=\"TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA256,TLS_DHE_DSS_WITH_AES_256_CBC_SHA,TLS_DHE_DSS_WITH_AES_256_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CCM,TLS_RSA_WITH_AES_128_CCM\"/> <protocol names=\"TLSv1.2\"/> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule 
use-ssl-context=\"client-cli-context\"/> </ssl-context-rules> </authentication-client> </configuration>", "jboss-cli.sh -Dwildfly.config.url=cli-wildfly-config.xml", "/subsystem=elytron/key-store=fipsKS:add(path= KEYSTORE ,relative-to=jboss.server.config.dir,credential-reference={clear-text= STORE_PASSWORD },type=\"BCFKS\") /subsystem=elytron/key-manager=fipsKM:add(key-store=fipsKS,algorithm=\"PKIX\",credential-reference={clear-text= PASSWORD }) /subsystem=elytron/server-ssl-context=fipsSSC:add(key-manager=fipsKM,protocols=[\"TLSv1.2\"],cipher-suite-filter=\"TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA256,TLS_RSA_WITH_AES_256_CCM,TLS_RSA_WITH_AES_128_CCM\")", "batch /subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context,value=fipsSSC) run-batch reload", "keytool -list -storetype JCEKS -keystore mystore.jck -storepass mystorepass -providerClass com.ibm.crypto.fips.provider.IBMJCEFIPS", "<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <key-stores> <key-store name=\"truststore\" type=\"JKS\"> <file name=\"/path/to/truststore\"/> <key-store-clear-password password=\"P@ssword123\"/> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"client-cli-context\"> <trust-store key-store-name=\"truststore\"/> <cipher-suite selector=\"USD{cipher.suite.filter}\"/> <protocol names=\"TLSv1\"/> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"client-cli-context\"/> </ssl-context-rules> </authentication-client> </configuration>", "04:22:45,685 INFO [stdout] (http-/127.0.0.1:8443-1) JsseJCE: Using MessageDigest SHA from provider IBMJCEFIPS version 1.7 04:22:45,689 INFO [stdout] (http-/127.0.0.1:8443-1) DHCrypt: DH KeyPairGenerator from provider from init IBMJCEFIPS version 1.7 04:22:45,754 INFO [stdout] (http-/127.0.0.1:8443-1) JsseJCE: Using KeyFactory DiffieHellman from provider IBMJCEFIPS version 1.7 04:22:45,754 INFO [stdout] (http-/127.0.0.1:8443-1) JsseJCE: Using KeyAgreement DiffieHellman from provider IBMJCEFIPS version 1.7 04:22:45,754 INFO [stdout] (http-/127.0.0.1:8443-1) DHCrypt: DH KeyAgreement from provider IBMJCEFIPS version 1.7 04:22:45,754 INFO [stdout] (http-/127.0.0.1:8443-1) DHCrypt: DH KeyAgreement from provider from initIBMJCEFIPS version 1.7", "<security-realm name=\"HTTPSRealm\"> <server-identities> <ssl> <engine enabled-protocols=\"TLSv1.1\"/> <keystore provider=\"PKCS11\" keystore-password=\"strongP@ssword1\"/> </ssl> </server-identities> <authentication> <local default-user=\"\\USDlocal\"/> <properties path=\"https-users.properties\" relative-to=\"jboss.domain.config.dir\"/> </authentication> </security-realm>", "<security-realm name=\"HTTPSRealm\"> <authentication> <truststore provider=\"PKCS11\" keystore-password=\"strongP@ssword1\"/> </authentication> 
</security-realm>", "<management-interfaces> <http-interface security-realm=\"HTTPSRealm\"> <http-upgrade enabled=\"true\"/> <socket interface=\"management\" port=\"USD{jboss.management.http.port:9990}\"/> </http-interface> </management-interfaces>", "<domain-controller> <remote security-realm=\"HTTPSRealm\"> <discovery-options> <static-discovery name=\"primary\" protocol=\"USD{jboss.domain.master.protocol:remote}\" host=\"USD{jboss.domain.master.address}\" port=\"USD{jboss.domain.master.port:9990}\"/> </discovery-options> </remote> </domain-controller>", "<server name=\"my-server\" group=\"my-server-group\"> <ssl ssl-protocol=\"TLS\" trust-manager-algorithm=\"PKIX\" truststore-type=\"PKCS11\" truststore-password=\"strongP@ssword1\"/> </server>", "/host=master/core-service=management/security-realm=HTTPSRealm/authentication=truststore:add(keystore-provider=\"PKCS11\",keystore-password=\"strongP@ssword1\") reload --host=master", "/host=host1/core-service=management/security-realm=HTTPSRealm/server-identity=ssl:add(keystore-provider=PKCS11, keystore-password=\"strongP@ssword1\",enabled-protocols=[\"TLSv1.1\"]) reload --host=host1", "RHSSO_HOME /bin/standalone.sh -Djboss.socket.binding.port-offset=100", "EAP_HOME /bin/jboss-cli.sh --file=adapter-elytron-install-offline.cli", "Create a realm for both JBoss EAP console and mgmt interface /subsystem=keycloak/realm=wildfly-infra:add(auth-server-url=http://localhost:8180/auth,realm-public-key= REALM_PUBLIC_KEY ) Create a secure-deployment in order to protect mgmt interface /subsystem=keycloak/secure-deployment=wildfly-management:add(realm=wildfly-infra,resource=wildfly-management,principal-attribute=preferred_username,bearer-only=true,ssl-required=EXTERNAL) Protect HTTP mgmt interface with Keycloak adapter /core-service=management/management-interface=http-interface:undefine-attribute(name=security-realm) /subsystem=elytron/http-authentication-factory=keycloak-mgmt-http-authentication:add(security-domain=KeycloakDomain,http-server-mechanism-factory=wildfly-management,mechanism-configurations=[{mechanism-name=KEYCLOAK,mechanism-realm-configurations=[{realm-name=KeycloakOIDCRealm,realm-mapper=keycloak-oidc-realm-mapper}]}]) /core-service=management/management-interface=http-interface:write-attribute(name=http-authentication-factory,value=keycloak-mgmt-http-authentication) /core-service=management/management-interface=http-interface:write-attribute(name=http-upgrade, value={enabled=true, sasl-authentication-factory=management-sasl-authentication}) Enable RBAC where roles are obtained from the identity /core-service=management/access=authorization:write-attribute(name=provider,value=rbac) /core-service=management/access=authorization:write-attribute(name=use-identity-roles,value=true) Create a secure-server in order to publish the JBoss EAP console configuration via mgmt interface /subsystem=keycloak/secure-server=wildfly-console:add(realm=wildfly-infra,resource=wildfly-console,public-client=true) reload reload", "EAP_HOME /bin/standalone.sh", "EAP_HOME /bin/jboss-cli.sh --connect --file=protect-eap-mgmt-services.cli", "/subsystem=elytron/file-audit-log= <audit_log_name> :add(path=\" <path_to_log_file> \", relative-to=\" <base_for_path_to_log_file> \", format= <format_type> , synchronized= <whether_to_log_immediately> )", "/subsystem=elytron/security-domain= <security_domain_name> :write-attribute(name=security-event-listener , value= <audit_log_name> )", "/subsystem=elytron/periodic-rotating-file-audit-log= <periodic_audit_log_name> :add(path=\" 
<periodic_audit_log_filename> \", relative-to=\" <path_to_audit_log_directory> \", format= <record_format> , synchronized= <whether_to_log_immediately> ,suffix=\" <suffix_in_DateTimeFormatter_format> \")", "/subsystem=elytron/security-domain= <security_domain_name> :write-attribute(name=security-event-listener, value= <periodic_audit_log_name> )", "/subsystem=elytron/size-rotating-file-audit-log= <audit_log_name> :add(path=\"<path_to_log_file>\",relative-to=\"<base_for_path_to_log_file>\",format= <record_format> ,synchronized= <whether_to_log_immediately> ,rotate-size=\" <max_file_size_before_rotation> \",max-backup-index= <max_number_of_backup_files> )", "/subsystem=elytron/security-domain= <domain_size_logger> :write-attribute(name=security-event-listener, value= <audit_log_name> )", "\"Elytron audit logging enabled with RFC format: <format>\"", "/subsystem=elytron/syslog-audit-log= <syslog_audit_log_name> :add(host-name= <record_host_name> , port= <syslog_server_port_number> , server-address= <syslog_server_address> , format= <record_format> , transport= <transport_layer_protocol> )", "/subsystem=elytron/syslog-audit-log= <syslog_audit_log_name> :add(transport=SSL_TCP,server-address= <syslog_server_address> ,port= <syslog_server_port_number> ,host-name= <record_host_name> ,ssl-context= <client_ssl_context> )", "/subsystem=elytron/security-domain= <security_domain_name> :write-attribute(name=security-event-listener, value= <syslog_audit_log_name> )", "public class MySecurityEventListener implements Consumer<SecurityEvent> { public void accept(SecurityEvent securityEvent) { if (securityEvent instanceof SecurityAuthenticationSuccessfulEvent) { System.err.printf(\"Authenticated user \\\"%s\\\"\\n\", securityEvent.getSecurityIdentity().getPrincipal()); } else if (securityEvent instanceof SecurityAuthenticationFailedEvent) { System.err.printf(\"Failed authentication as user \\\"%s\\\"\\n\", ((SecurityAuthenticationFailedEvent)securityEvent).getPrincipal()); } } }", "/subsystem=elytron/custom-security-event-listener= <listener_name> :add(module= <module_name> , class-name= <class_name> )", "/subsystem=elytron/security-domain= <domain_name> :write-attribute(name=security-event-listener, value= <listener_name> )", "reload", "15:26:09,031 WARN [org.jboss.as.domain.management.security] (MSC service thread 1-7) WFLYDM0111: Keystore /path/to/jboss/standalone/configuration/application.keystore not found, it will be auto generated on first use with a self signed certificate for host localhost 15:26:10,076 WARN [org.jboss.as.domain.management.security] (MSC service thread 1-2) WFLYDM0113: Generated self signed certificate at /path/to/jboss/configuration/application.keystore. Please note that self signed certificates are not secure, and should only be used for testing purposes. Do not use this self signed certificate in production. 
SHA-1 fingerprint of the generated key is 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00:11:22:33 SHA-256 fingerprint of the generated key is 00:11:22:33:44:55:66:77:88:99:00:aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee", "<security-realm name=\"ApplicationRealm\"> <server-identities> <ssl> <keystore path=\"application.keystore\" relative-to=\"jboss.server.config.dir\" keystore-password=\"password\" alias=\"server\" key-password=\"password\" generate-self-signed-certificate-host=\"localhost\"/> </ssl> </server-identities> </security-realm>", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\"> <server name=\"default-server\"> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\">", "security enable-ssl-http-server --interactive Please provide required pieces of information to enable SSL: Key-store file name (default default-server.keystore): keystore.jks Password (blank generated): secret What is your first and last name? [Unknown]: localhost What is the name of your organizational unit? [Unknown]: What is the name of your organization? [Unknown]: What is the name of your City or Locality? [Unknown]: What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit? [Unknown]: Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct y/n [y]? Validity (in days, blank default): 365 Alias (blank generated): localhost Enable SSL Mutual Authentication y/n (blank n): n SSL options: key store file: keystore.jks distinguished name: CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown password: secret validity: 365 alias: localhost Server keystore file keystore.jks, certificate file keystore.pem and keystore.csr file will be generated in server configuration directory. Do you confirm y/n: y", "/subsystem=elytron/key-store=httpsKS:add(path=/path/to/keystore.jks, credential-reference={clear-text=secret}, type=JKS)", "/subsystem=elytron/key-store=httpsKS:generate-key-pair(alias=localhost,algorithm=RSA,key-size=1024,validity=365,credential-reference={clear-text=secret},distinguished-name=\"CN=localhost\") /subsystem=elytron/key-store=httpsKS:store()", "/subsystem=elytron/key-manager=httpsKM:add(key-store=httpsKS,credential-reference={clear-text=secret})", "/subsystem=elytron/server-ssl-context=httpsSSC:add(key-manager=httpsKM, protocols=[\"TLSv1.2\"])", "/subsystem=undertow/server=default-server/https-listener=https:read-attribute(name=security-realm) { \"outcome\" => \"success\", \"result\" => \"ApplicationRealm\" }", "batch /subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=httpsSSC) run-batch", "reload", "security disable-ssl-http-server", "keytool -genkeypair -alias client -keyalg RSA -keysize 1024 -validity 365 -keystore client.keystore.jks -dname \"CN=client\" -keypass secret -storepass secret", "keytool -exportcert -keystore client.keystore.jks -alias client -keypass secret -storepass secret -file /path/to/client.cer", "security enable-ssl-http-server --interactive Please provide required pieces of information to enable SSL: Key-store file name (default default-server.keystore): server.keystore.jks Password (blank generated): secret What is your first and last name? [Unknown]: localhost What is the name of your organizational unit? 
[Unknown]: What is the name of your organization? [Unknown]: What is the name of your City or Locality? [Unknown]: What is the name of your State or Province? [Unknown]: What is the two-letter country code for this unit? [Unknown]: Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct y/n [y]? Validity (in days, blank default): 365 Alias (blank generated): localhost Enable SSL Mutual Authentication y/n (blank n): y Client certificate (path to pem file): /path/to/client.cer Validate certificate y/n (blank y): n Trust-store file name (management.truststore): server.truststore.jks Password (blank generated): secret SSL options: key store file: server.keystore.jks distinguished name: CN=localhost, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown password: secret validity: 365 alias: localhost client certificate: /path/to/client.cer trust store file: server.trustore.jks trust store password: secret Server keystore file server.keystore.jks, certificate file server.pem and server.csr file will be generated in server configuration directory. Server truststore file server.trustore.jks will be generated in server configuration directory. Do you confirm y/n: y", "/subsystem=elytron/key-store=twoWayKS:add(path= /PATH/TO /server.keystore.jks,credential-reference={clear-text=secret},type=JKS) /subsystem=elytron/key-store=twoWayKS:generate-key-pair(alias=localhost,algorithm=RSA,key-size=1024,validity=365,credential-reference={clear-text=secret},distinguished-name=\"CN=localhost\") /subsystem=elytron/key-store=twoWayKS:store()", "/subsystem=elytron/key-store=twoWayKS:add(path=server.keystore.jks,relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},type=JKS)", "/subsystem=elytron/key-store=twoWayKS:export-certificate(alias=localhost,path=/path/to/server.cer,pem=true)", "/subsystem=elytron/key-store=twoWayTS:add(path=/path/to/server.truststore.jks,credential-reference={clear-text=secret},type=JKS) /subsystem=elytron/key-store=twoWayTS:import-certificate(alias=client,path=/path/to/client.cer,credential-reference={clear-text=secret},trust-cacerts=true,validate=false) /subsystem=elytron/key-store=twoWayTS:store()", "/subsystem=elytron/key-manager=twoWayKM:add(key-store=twoWayKS, credential-reference={clear-text=secret})", "/subsystem=elytron/trust-manager=twoWayTM:add(key-store=twoWayTS)", "/subsystem=elytron/server-ssl-context=twoWaySSC:add(key-manager=twoWayKM, protocols=[\"TLSv1.2\"], trust-manager=twoWayTM, need-client-auth=true)", "/subsystem=undertow/server=default-server/https-listener=https:read-attribute(name=security-realm) { \"outcome\" => \"success\", \"result\" => \"ApplicationRealm\" }", "batch /subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value=twoWaySSC) run-batch", "reload", "keytool -importcert -keystore client.truststore.jks -storepass secret -alias localhost -trustcacerts -file /path/to/server.cer", "security disable-ssl-http-server", "/subsystem=elytron/trust-manager=twoWayTM:write-attribute(name=certificate-revocation-list,value={})", "/subsystem=elytron/trust-manager=twoWayTM:write-attribute(name=certificate-revocation-list.path, value=intermediate.crl.pem)", "/subsystem=elytron/trust-manager=twoWayTM:write-attribute(name=ocsp.prefer-crls,value=\"true\")", "/subsystem=elytron/trust-manager=twoWayTM:write-attribute(name=ocsp,value={})", 
"/subsystem=elytron/trust-manager=twoWayTM:write-attribute(name=ocsp.responder,value=\"http://example.com/ocsp-responder\")", "batch /core-service=management/security-realm=HTTPSRealm:add /core-service=management/security-realm=HTTPSRealm/server-identity=ssl:add(keystore-path=identity.jks, keystore-relative-to=jboss.server.config.dir, keystore-password=password1, alias=appserver) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=security-realm, value=HTTPSRealm) run-batch", "/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=security-realm, value=CertificateRealm) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=verify-client, value=REQUIRED)", "security enable-http-auth-http-server --security-domain= SECURITY_DOMAIN Server reloaded. Command success. Authentication configured for security domain SECURITY_DOMAIN http authentication-factory=application-http-authentication security-domain= SECURITY_DOMAIN", "security disable-http-auth-http-server --security-domain= SECURITY_DOMAIN", "security disable-http-auth-http-server --mechanism= MECHANISM --security-domain= SECURITY_DOMAIN", "/subsystem=elytron/configurable-sasl-server-factory=mySASLServerFactory:add(sasl-server-factory=elytron) /subsystem=elytron/sasl-authentication-factory=mySASLAuthFactory:add(sasl-server-factory=mySASLServerFactory,security-domain=ApplicationDomain,mechanism-configurations=[{mechanism-name=DIGEST-MD5,mechanism-realm-configurations=[{realm-name=ApplicationRealm}]}])", "/subsystem=elytron/authentication-configuration=myAuthConfig:write-attribute(name=sasl-mechanism-selector,value=\"DIGEST-MD5\")", "<configuration> <authentication-client xmlns=\"urn:elytron:client:1.2\"> <authentication-rules> <rule use-configuration=\"default\" /> </authentication-rules> <authentication-configurations> <configuration name=\"default\"> <sasl-mechanism-selector selector=\"#ALL\" /> </configuration> </authentication-configurations> </authentication-client> </configuration>", "/subsystem=elytron/configurable-sasl-server-factory=mySASLServerFactory:map-put(name=properties,key=com.sun.security.sasl.digest.utf8,value=false)", "/subsystem=elytron/authentication-configuration=myAuthConfig:map-put(name=mechanism-properties,key=wildfly.sasl.local-user.quiet-auth,value=true)", "<authentication-configurations> <configuration name=\"default\"> <sasl-mechanism-selector selector=\"#ALL\" /> <set-mechanism-properties> <property key=\"wildfly.sasl.local-user.quiet-auth\" value=\"true\" /> </set-mechanism-properties> </configuration> </authentication-configurations>", "/subsystem=elytron/client-ssl-context=modcluster-client-ssl-context:add(trust-manager=default-trust-manager)", "/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=ssl-context, value=modcluster-client-ssl-context)", "/subsystem=undertow/configuration=filter/mod-cluster=modcluster:write-attribute(name=ssl-context,value=modcluster-client-ssl-context)", "reload", "/subsystem=elytron/key-store=twoWayKS:add(path=/path/to/client.keystore.jks, credential-reference={clear-text=secret},type=JKS)", "/subsystem=elytron/key-manager=twoWayKM:add(key-store=twoWayKS, algorithm=\"SunX509\", credential-reference={clear-text=secret})", "/subsystem=elytron/client-ssl-context=modcluster-client-ssl-context:add(trust-manager=default-trust-manager, key-manager=twoWayKM)", "/subsystem=elytron/client-ssl-context=modcluster-client-ssl-context:write-attribute(name=key-manager, 
value=twoWayKM)", "reload", "/subsystem=remoting/connector= CONNECTOR_NAME :add(sasl-authentication-factory= SASL_FACTORY_NAME ,socket-binding= SOCKET_BINDING_NAME )", "/subsystem=remoting/connector= CONNECTOR_NAME :add(sasl-authentication-factory= SASL_FACTORY_NAME ,socket-binding= SOCKET_BINDING_NAME ,ssl-context= SSL_CONTEXT_NAME )", "/socket-binding-group=standard-sockets/socket-binding=oneWayBinding:add(port=11199)", "/subsystem=remoting/connector=oneWayConnector:add(sasl-authentication-factory= SASL_FACTORY ,socket-binding=oneWayBinding,ssl-context= SSL_CONTEXT )", "/socket-binding-group=standard-sockets/socket-binding=twoWayBinding:add(port=11199)", "/subsystem=remoting/connector=twoWayConnector:add(sasl-authentication-factory= SASL_FACTORY ,socket-binding=twoWayBinding,ssl-context= SSL_CONTEXT )", "/subsystem=remoting/http-connector= HTTP_CONNECTOR_NAME :add(connector-ref= CONNECTOR_NAME ,sasl-authentication-factory= SASL_FACTORY_NAME )", "/subsystem=undertow/server=default-server/https-listener=https:read-attribute(name=security-realm) { \"outcome\" => \"success\", \"result\" => \"ApplicationRealm\" }", "batch /subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value= SERVER_SSL_CONTEXT ) run-batch", "/subsystem=remoting/http-connector=ssl-http-connector:add(connector-ref=https,sasl-authentication-factory= SASL_FACTORY )", "reload", "/subsystem=undertow/server=default-server/https-listener=https:read-attribute(name=security-realm) { \"outcome\" => \"success\", \"result\" => \"ApplicationRealm\" }", "batch /subsystem=undertow/server=default-server/https-listener=https:undefine-attribute(name=security-realm) /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=ssl-context, value= SERVER_SSL_CONTEXT ) run-batch", "/subsystem=remoting/http-connector=ssl-http-connector:add(connector-ref=https,sasl-authentication-factory= SASL_FACTORY )", "reload", "/subsystem=remoting/remote-outbound-connection= OUTBOUND_CONNECTION_NAME :add(authentication-context= AUTHENTICATION_CONTEXT_NAME , outbound-socket-binding-ref= OUTBOUND_SOCKET_BINDING_NAME )", "/subsystem=elytron/dir-context=exampleDC:add(url=\"ldap://127.0.0.1:10389\", principal=\"uid=admin,ou=system\", credential-reference={clear-text=\"secret\"})", "/subsystem=elytron/ldap-key-store=ldapKS:add(dir-context=exampleDC, search-path=\"ou=Keystores,dc=wildfly,dc=org\")", "/subsystem=elytron/key-store=myKS:add(path=keystore.jks, relative-to=jboss.server.config.dir, credential-reference={clear-text=secret}, type=JKS)", "/subsystem=elytron/filtering-key-store=filterKS:add(key-store=myKS, alias-filter=\"alias1,alias3\")", "/subsystem=elytron/key-store=httpsKS:load", "/subsystem=elytron/key-store=httpsKS:load()", "/subsystem=elytron/key-manager=httpsKM:init()", "/subsystem=elytron/key-store=httpsKS:load()", "/subsystem=elytron/trust-manager=httpsTM:init()", "/subsystem=elytron/key-store=httpsKS/:read-alias(alias=localhost)", "/subsystem=elytron/client-ssl-context=exampleCSC:add(key-manager=clientKM, trust-manager=clientTM, protocols=[\"TLSv1.2\"])", "/subsystem=elytron/server-ssl-context=newServerSSLContext:add(key-manager= KEY_MANAGER ,protocols=[\"TLSv1.2\"])", "/subsystem=elytron/server-ssl-sni-context= SERVER_SSL_SNI_CONTEXT :add(default-ssl-context= DEFAULT_SERVER_SSL_CONTEXT ,host-context-map={ HOSTNAME = SERVER_SSL_CONTEXT ,...})", 
"/subsystem=elytron/server-ssl-sni-context=exampleSNIContext:add(default-ssl-context=serverSSL,host-context-map={www\\\\.example\\\\.com=exampleSSL})", "module add --name= MODULE_NAME --resources= FACTORY_JAR --dependencies=javax.api, DEPENDENCY_LIST", "/subsystem=elytron/provider-loader= LOADER_NAME :add(class-names=[ CLASS_NAME ],module= MODULE_NAME )", "/subsystem=elytron/provider-loader= LOADER_NAME :add(module= MODULE_NAME )", "/subsystem=elytron/ COMPONENT_NAME = NEW_COMPONENT :add(providers= LOADER_NAME ,...)", "/subsystem=elytron/trust-manager=newTrustManager:add(algorithm=MyX509,providers=customProvider,key-store=sampleKeystore)", "void initialize(final Map<String, String> configuration);", "/subsystem=elytron/ COMPONENT_NAME = NEW_COMPONENT :add(class-name= CLASS_NAME ,module= MODULE_NAME ,configuration={myAttribute=\"myValue\"}", "import org.kohsuke.MetaInfServices; import javax.net.ssl.TrustManagerFactory; import java.security.Provider; import java.util.Collections; import java.util.List; import java.util.Map; @MetaInfServices(Provider.class) public class CustomProvider extends Provider { public CustomProvider() { super(\"CustomProvider\", 1.0, \"Demo provider\"); System.out.println(\"CustomProvider initialization.\"); final List<String> emptyList = Collections.emptyList(); final Map<String, String> emptyMap = Collections.emptyMap(); putService(new Service(this, TrustManagerFactory.class.getSimpleName(),\"CustomAlgorithm\", CustomTrustManagerFactorySpi.class.getName(), emptyList, emptyMap)); } }", "import javax.net.ssl.SSLEngine; import javax.net.ssl.X509ExtendedTrustManager; import java.net.Socket; import java.security.cert.CertificateException; import java.security.cert.X509Certificate; public class CustomTrustManager extends X509ExtendedTrustManager { public void checkClientTrusted(X509Certificate[] x509Certificates, String s, Socket socket) throws CertificateException { // Insert your code here } public void checkServerTrusted(X509Certificate[] x509Certificates, String s, Socket socket) throws CertificateException { // Insert your code here } public void checkClientTrusted(X509Certificate[] x509Certificates, String s, SSLEngine sslEngine) throws CertificateException { // Insert your code here } public void checkServerTrusted(X509Certificate[] x509Certificates, String s, SSLEngine sslEngine) throws CertificateException { // Insert your code here } public void checkClientTrusted(X509Certificate[] x509Certificates, String s) throws CertificateException { // Insert your code here } public void checkServerTrusted(X509Certificate[] x509Certificates, String s) throws CertificateException { // Insert your code here } public X509Certificate[] getAcceptedIssuers() { // Insert your code here } }", "import javax.net.ssl.ManagerFactoryParameters; import javax.net.ssl.TrustManager; import javax.net.ssl.TrustManagerFactorySpi; import java.security.InvalidAlgorithmParameterException; import java.security.KeyStore; import java.security.KeyStoreException; public class CustomTrustManagerFactorySpi extends TrustManagerFactorySpi { protected void engineInit(KeyStore keyStore) throws KeyStoreException { // Insert your code here } protected void engineInit(ManagerFactoryParameters managerFactoryParameters) throws InvalidAlgorithmParameterException { // Insert your code here } protected CustomTrustManager[] engineGetTrustManagers() { // Insert your code here } }", "javax.net.ssl.SSLContext.getDefault();", "/subsystem=elytron:write-attribute(name=default-ssl-context, value=client-context)", 
":reload", "/subsystem=elytron:undefine-attribute(name=default-ssl-context) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-restart\" => true, \"process-state\" => \"restart-required\" } }", "/subsystem=elytron/trust-manager= TRUST_MANAGER :write-attribute(name=certificate-revocation-list,value={path= /path/to/CRL_FILE .crl.pem}", "/subsystem=elytron/trust-manager= TRUST_MANAGER :reload-certificate-revocation-list", "/subsystem=elytron/certificate-authority-account= CERTIFICATE_ACCOUNT :add(key-store= KEYSTORE ,alias= ALIAS ,contact-urls=[mailto: EMAIL_ADDRESS ])", "/subsystem=elytron/certificate-authority-account= CERTIFICATE_ACCOUNT :create-account(agree-to-terms-of-service=true)", "/subsystem=elytron/certificate-authority-account= CERTIFICATE_ACCOUNT :update-account(agree-to-terms-of-service=true)", "/subsystem=elytron/certificate-authority-account= CERTIFICATE_ACCOUNT :change-account-key()", "/subsystem=elytron/certificate-authority-account= CERTIFICATE_ACCOUNT :deactivate-account()", "/subsystem=elytron/certificate-authority-account= CERTIFICATE_ACCOUNT :get-metadata()", "/subsystem=elytron/key-store=httpsKS:add(path=/path/to/server.keystore.jks,credential-reference={clear-text=secret},type=JKS) /subsystem=elytron/key-store=httpsKS:generate-key-pair(alias=example,algorithm=RSA,key-size=1024,validity=365,credential-reference={clear-text=secret},distinguished-name=\"CN=www.example.com\")", "/subsystem=elytron/key-store=httpsKS:generate-certificate-signing-request(alias=example,path=server.csr,relative-to=jboss.server.config.dir,distinguished-name=\"CN=www.example.com\",extensions=[{critical=false,name=KeyUsage,value=digitalSignature}],credential-reference={clear-text=secret})", "/subsystem=elytron/key-store=httpsKS:import-certificate(alias=example,path=/path/to/certificate_or_chain/file,relative-to=jboss.server.config.dir,credential-reference={clear-text=secret},trust-cacerts=true)", "/subsystem=elytron/key-store=httpsKS:export-certificate(alias=example,path=serverCert.cer,relative-to=jboss.server.config.dir,pem=true)", "/subsystem=elytron/key-store=httpsKS:change-alias(alias=example,new-alias=newExample,credential-reference={clear-text=secret})", "/subsystem=elytron/key-store=httpsKS:store()", "/subsystem=elytron/key-store= KEYSTORE :obtain-certificate(alias= ALIAS ,domain-names=[ DOMAIN_NAME ],certificate-authority-account= CERTIFICATE_ACCOUNT ,agree-to-terms-of-service=true,algorithm=RSA,credential-reference={clear-text=secret})", "/subsystem=elytron/key-store= KEYSTORE :revoke-certificate(alias= ALIAS ,certificate-authority-account= CERTIFICATE_ACCOUNT )", "/subsystem=elytron/key-store= KEYSTORE :should-renew-certificate(alias= ALIAS ,expiration=7)", "keytool -printcert -file /path/to/certificate/certificate.cert", "SubjectAlternativeName [ DNS:one.example.org IP Address:127.0.0.1 ]", "/subsystem=elytron/x509-subject-alt-name-evidence-decoder=exampleDnsDecoder:add(alt-name-type=__EXTENSION_TO_USE__)", "/subsystem=elytron/security-domain=__Security_Domain_Name__:write-attribute(name=\"evidence-decoder\",value=\"exampleDnsDecoder\")", "/subsystem=elytron/aggregate-evidence-decoder=aggregateDecoder:add(evidence-decoders=[__DECODER_1__,__DECODER_2__,...,__DECODER_N__])", "/subsystem=elytron/security-domain=__SECURITY_DOMAIN__:write-attribute(name=\"evidence-decoder\",value=\"aggregateDecoder\")", "/subsystem=elytron/x500-subject-evidence-decoder=exampleSubjectDecoder:add()", 
"/subsystem=elytron/custom-evidence-decoder=myCustomEvidenceDecoder:add(module=__MODULE_NAME__, class-name=__FULLY_QUALIFIED_CLASS_NAME__)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_configure_server_security/securing_the_server_and_its_interfaces
F.2. Configuring HA-LVM Failover with Tagging
F.2. Configuring HA-LVM Failover with Tagging To set up HA-LVM failover by using tags in the /etc/lvm/lvm.conf file, perform the following steps: In the global section of the /etc/lvm/lvm.conf file, ensure that the locking_type parameter is set to the value '1' and the use_lvmetad parameter is set to the value '0'. Note As of Red Hat Enterprise Linux 6.7, you can use the --enable-halvm option of the lvmconf command to set the locking type to 1 and disable lvmetad. For information on the lvmconf command, see the lvmconf man page. Create the logical volume and file system using standard LVM and file system commands, as in the following example. For information on creating LVM logical volumes, refer to Logical Volume Manager Administration. Edit the /etc/cluster/cluster.conf file to include the newly created logical volume as a resource in one of your services. Alternatively, you can use Conga or the ccs command to configure LVM and file system resources for the cluster. The following is a sample resource manager section from the /etc/cluster/cluster.conf file that configures a CLVM logical volume as a cluster resource: Note If there are multiple logical volumes in the volume group, then the logical volume name (lv_name) in the lvm resource should be left blank or unspecified. Also note that in an HA-LVM configuration, a volume group may be used by only a single service. Edit the volume_list field in the /etc/lvm/lvm.conf file. Include the name of your root volume group and your host name, preceded by @, as listed in the /etc/cluster/cluster.conf file. The host name to include here is the machine on which you are editing the lvm.conf file, not any remote host name. Note that this string MUST match the node name given in the cluster.conf file. Below is a sample entry from the /etc/lvm/lvm.conf file: This tag will be used to activate shared VGs or LVs. DO NOT include the names of any volume groups that are to be shared using HA-LVM. Update the initramfs image on all your cluster nodes: Reboot all nodes to ensure the correct initramfs image is in use.
[ "pvcreate /dev/sd[cde]1 vgcreate shared_vg /dev/sd[cde]1 lvcreate -L 10G -n ha_lv shared_vg mkfs.ext4 /dev/shared_vg/ha_lv", "<rm> <failoverdomains> <failoverdomain name=\"FD\" ordered=\"1\" restricted=\"0\"> <failoverdomainnode name=\"neo-01\" priority=\"1\"/> <failoverdomainnode name=\"neo-02\" priority=\"2\"/> </failoverdomain> </failoverdomains> <resources> <lvm name=\"lvm\" vg_name=\"shared_vg\" lv_name=\"ha_lv\"/> <fs name=\"FS\" device=\"/dev/shared_vg/ha_lv\" force_fsck=\"0\" force_unmount=\"1\" fsid=\"64050\" fstype=\"ext4\" mountpoint=\"/mnt\" options=\"\" self_fence=\"0\"/> </resources> <service autostart=\"1\" domain=\"FD\" name=\"serv\" recovery=\"relocate\"> <lvm ref=\"lvm\"/> <fs ref=\"FS\"/> </service> </rm>", "volume_list = [ \"VolGroup00\", \"@neo-01\" ]", "dracut -H -f /boot/initramfs-USD(uname -r).img USD(uname -r)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-halvm-tagging-ca
Chapter 6. Additional Resources
Chapter 6. Additional Resources This chapter provides references to other relevant sources of information about Red Hat Software Collections 3.1 and Red Hat Enterprise Linux. 6.1. Red Hat Product Documentation The following documents are directly or indirectly relevant to this book: Red Hat Software Collections 3.1 Packaging Guide - The Packaging Guide for Red Hat Software Collections explains the concept of Software Collections, documents the scl utility, and provides a detailed explanation of how to create a custom Software Collection or extend an existing one. Red Hat Developer Toolset 7.1 Release Notes - The Release Notes for Red Hat Developer Toolset document known problems, possible issues, changes, and other important information about this Software Collection. Red Hat Developer Toolset 7.1 User Guide - The User Guide for Red Hat Developer Toolset contains more information about installing and using this Software Collection. Using Red Hat Software Collections Container Images - This book provides information on how to use container images based on Red Hat Software Collections. The available container images include applications, daemons, databases, as well as the Red Hat Developer Toolset container images. The images can be run on Red Hat Enterprise Linux 7 Server and Red Hat Enterprise Linux Atomic Host. Get Started with Containers - This guide contains a comprehensive overview of information about building and using docker-formatted container images on Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux Atomic Host. Using and Configuring Red Hat Subscription Manager - The Using and Configuring Red Hat Subscription Manager book provides detailed information on how to register Red Hat Enterprise Linux systems, manage subscriptions, and view notifications for the registered systems. Red Hat Enterprise Linux 6 Deployment Guide - The Deployment Guide for Red Hat Enterprise Linux 6 provides relevant information regarding the deployment, configuration, and administration of this system. Red Hat Enterprise Linux 7 System Administrator's Guide - The System Administrator's Guide for Red Hat Enterprise Linux 7 provides information on deployment, configuration, and administration of this system. 6.2. Red Hat Developers Red Hat Developer Program - The Red Hat Developers community portal. Overview of Red Hat Software Collections on Red Hat Developers - The Red Hat Developers portal provides a number of tutorials to get you started with developing code using different development technologies. This includes the Node.js, Perl, PHP, Python, and Ruby Software Collections. Red Hat Enterprise Linux Developer Program - The Red Hat Enterprise Linux Developer Program delivers industry-leading developer tools, instructional resources, and an ecosystem of experts to help programmers maximize productivity in building Linux applications. Red Hat Developer Blog - The Red Hat Developer Blog contains up-to-date information, best practices, opinion, product and program announcements as well as pointers to sample code and other resources for those who are designing and developing applications based on Red Hat technologies.
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.1_release_notes/chap-additional_resources
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_z/making-open-source-more-inclusive
Appendix C. Common Ports
Appendix C. Common Ports The following tables list the most common communication ports used by services, daemons, and programs included in Red Hat Enterprise Linux. This listing can also be found in the /etc/services file. For the official list of Well Known, Registered, and Dynamic ports as designated by the Internet Assigned Numbers Authority (IANA), refer to the following URL: http://www.iana.org/assignments/port-numbers Note The Layer , where listed, denotes whether the service or protocol uses TCP or UDP for transport. If not listed, the service/protocol can use both TCP and UDP. Table C.1, "Well Known Ports" lists the Well Known Ports as defined by IANA and is used by Red Hat Enterprise Linux as default communication ports for various services, including FTP, SSH, and Samba. Table C.1. Well Known Ports Port # / Layer Name Comment 1 tcpmux TCP port service multiplexer 5 rje Remote Job Entry 7 echo Echo service 9 discard Null service for connection testing 11 systat System Status service for listing connected ports 13 daytime Sends date and time to requesting host 17 qotd Sends quote of the day to connected host 18 msp Message Send Protocol 19 chargen Character Generation service; sends endless stream of characters 20 ftp-data FTP data port 21 ftp File Transfer Protocol (FTP) port; sometimes used by File Service Protocol (FSP) 22 ssh Secure Shell (SSH) service 23 telnet The Telnet service 25 smtp Simple Mail Transfer Protocol (SMTP) 37 time Time Protocol 39 rlp Resource Location Protocol 42 nameserver Internet Name Service 43 nicname WHOIS directory service 49 tacacs Terminal Access Controller Access Control System for TCP/IP based authentication and access 50 re-mail-ck Remote Mail Checking Protocol 53 domain domain name services (such as BIND) 63 whois++ WHOIS++, extended WHOIS services 67 bootps Bootstrap Protocol (BOOTP) services; also used by Dynamic Host Configuration Protocol (DHCP) services 68 bootpc Bootstrap (BOOTP) client; also used by Dynamic Host Configuration Protocol (DHCP) clients 69 tftp Trivial File Transfer Protocol (TFTP) 70 gopher Gopher Internet document search and retrieval 71 netrjs-1 Remote Job Service 72 netrjs-2 Remote Job Service 73 netrjs-3 Remote Job Service 73 netrjs-4 Remote Job Service 79 finger Finger service for user contact information 80 http HyperText Transfer Protocol (HTTP) for World Wide Web (WWW) services 88 kerberos Kerberos network authentication system 95 supdup Telnet protocol extension 101 hostname Hostname services on SRI-NIC machines 102/tcp iso-tsap ISO Development Environment (ISODE) network applications 105 csnet-ns Mailbox nameserver; also used by CSO nameserver 107 rtelnet Remote Telnet 109 pop2 Post Office Protocol version 2 110 pop3 Post Office Protocol version 3 111 sunrpc Remote Procedure Call (RPC) Protocol for remote command execution, used by Network Filesystem (NFS) 113 auth Authentication and Ident protocols 115 sftp Simple File Transfer Protocol services 117 uucp-path Unix-to-Unix Copy Protocol (UUCP) Path services 119 nntp Network News Transfer Protocol (NNTP) for the USENET discussion system 123 ntp Network Time Protocol (NTP) 137 netbios-ns NETBIOS Name Service used in Red Hat Enterprise Linux by Samba 138 netbios-dgm NETBIOS Datagram Service used in Red Hat Enterprise Linux by Samba 139 netbios-ssn NETBIOS Session Service used in Red Hat Enterprise Linux by Samba 143 imap Internet Message Access Protocol (IMAP) 161 snmp Simple Network Management Protocol (SNMP) 162 snmptrap Traps for SNMP 163 cmip-man Common Management 
Information Protocol (CMIP) 164 cmip-agent Common Management Information Protocol (CMIP) 174 mailq MAILQ email transport queue 177 xdmcp X Display Manager Control Protocol (XDMCP) 178 nextstep NeXTStep window server 179 bgp Border Gateway Protocol 191 prospero Prospero distributed filesystem services 194 irc Internet Relay Chat (IRC) 199 smux SNMP UNIX Multiplexer 201 at-rtmp AppleTalk routing 202 at-nbp AppleTalk name binding 204 at-echo AppleTalk echo 206 at-zis AppleTalk zone information 209 qmtp Quick Mail Transfer Protocol (QMTP) 210 z39.50 NISO Z39.50 database 213 ipx Internetwork Packet Exchange (IPX), a datagram protocol commonly used in Novell Netware environments 220 imap3 Internet Message Access Protocol version 3 245 link LINK / 3-DNS iQuery service 347 fatserv FATMEN file and tape management server 363 rsvp_tunnel RSVP Tunnel 369 rpc2portmap Coda file system portmapper 370 codaauth2 Coda file system authentication services 372 ulistproc UNIX LISTSERV 389 ldap Lightweight Directory Access Protocol (LDAP) 427 svrloc Service Location Protocol (SLP) 434 mobileip-agent Mobile Internet Protocol (IP) agent 435 mobilip-mn Mobile Internet Protocol (IP) manager 443 https Secure Hypertext Transfer Protocol (HTTP) 444 snpp Simple Network Paging Protocol 445 microsoft-ds Server Message Block (SMB) over TCP/IP 464 kpasswd Kerberos password and key changing services 468 photuris Photuris session key management protocol 487 saft Simple Asynchronous File Transfer (SAFT) protocol 488 gss-http Generic Security Services (GSS) for HTTP 496 pim-rp-disc Rendezvous Point Discovery (RP-DISC) for Protocol Independent Multicast (PIM) services 500 isakmp Internet Security Association and Key Management Protocol (ISAKMP) 535 iiop Internet Inter-Orb Protocol (IIOP) 538 gdomap GNUstep Distributed Objects Mapper (GDOMAP) 546 dhcpv6-client Dynamic Host Configuration Protocol (DHCP) version 6 client 547 dhcpv6-server Dynamic Host Configuration Protocol (DHCP) version 6 Service 554 rtsp Real Time Stream Control Protocol (RTSP) 563 nntps Network News Transport Protocol over Secure Sockets Layer (NNTPS) 565 whoami whoami user ID listing 587 submission Mail Message Submission Agent (MSA) 610 npmp-local Network Peripheral Management Protocol (NPMP) local / Distributed Queueing System (DQS) 611 npmp-gui Network Peripheral Management Protocol (NPMP) GUI / Distributed Queueing System (DQS) 612 hmmp-ind HyperMedia Management Protocol (HMMP) Indication / DQS 631 ipp Internet Printing Protocol (IPP) 636 ldaps Lightweight Directory Access Protocol over Secure Sockets Layer (LDAPS) 674 acap Application Configuration Access Protocol (ACAP) 694 ha-cluster Heartbeat services for High-Availability Clusters 749 kerberos-adm Kerberos version 5 (v5) 'kadmin' database administration 750 kerberos-iv Kerberos version 4 (v4) services 765 webster Network Dictionary 767 phonebook Network Phonebook 873 rsync rsync file transfer services 992 telnets Telnet over Secure Sockets Layer (TelnetS) 993 imaps Internet Message Access Protocol over Secure Sockets Layer (IMAPS) 994 ircs Internet Relay Chat over Secure Sockets Layer (IRCS) 995 pop3s Post Office Protocol version 3 over Secure Sockets Layer (POP3S) Table C.2, "UNIX Specific Ports" lists UNIX-specific ports and cover services ranging from email to authentication and more. Names enclosed in brackets (for example, [ service ]) are either daemon names for the service or common alias(es). Table C.2. 
UNIX Specific Ports Port # / Layer Name Comment 512/tcp exec Authentication for remote process execution 512/udp biff [comsat] Asynchrous mail client (biff) and service (comsat) 513/tcp login Remote Login (rlogin) 513/udp who [whod] whod user logging daemon 514/tcp shell [cmd] Remote shell (rshell) and remote copy (rcp) with no logging 514/udp syslog UNIX system logging service 515 printer [spooler] Line printer (lpr) spooler 517/udp talk Talk remote calling service and client 518/udp ntalk Network talk (ntalk) remote calling service and client 519 utime [unixtime] UNIX time (utime) protocol 520/tcp efs Extended Filename Server (EFS) 520/udp router [route, routed] Routing Information Protocol (RIP) 521 ripng Routing Information Protocol for Internet Protocol version 6 (IPv6) 525 timed [timeserver] Time daemon (timed) 526/tcp tempo [newdate] Tempo 530/tcp courier [rpc] Courier Remote Procedure Call (RPC) protocol 531/tcp conference [chat] Internet Relay Chat 532 netnews Netnews newsgroup service 533/udp netwall Netwall for emergency broadcasts 540/tcp uucp [uucpd] UNIX-to-UNIX copy services 543/tcp klogin Kerberos version 5 (v5) remote login 544/tcp kshell Kerberos version 5 (v5) remote shell 548 afpovertcp Appletalk Filing Protocol (AFP) over Transmission Control Protocol (TCP) 556 remotefs [rfs_server, rfs] Brunhoff's Remote Filesystem (RFS) Table C.3, "Registered Ports" lists ports submitted by the network and software community to the IANA for formal registration into the port number list. Table C.3. Registered Ports Port # / Layer Name Comment 1080 socks SOCKS network application proxy services 1236 bvcontrol [rmtcfg] Remote configuration server for Gracilis Packeten network switches [a] 1300 h323hostcallsc H.323 telecommunication Host Call Secure 1433 ms-sql-s Microsoft SQL Server 1434 ms-sql-m Microsoft SQL Monitor 1494 ica Citrix ICA Client 1512 wins Microsoft Windows Internet Name Server 1524 ingreslock Ingres Database Management System (DBMS) lock services 1525 prospero-np Prospero non-privileged 1645 datametrics [old-radius] Datametrics / old radius entry 1646 sa-msg-port [oldradacct] sa-msg-port / old radacct entry 1649 kermit Kermit file transfer and management service 1701 l2tp [l2f] Layer 2 Tunneling Protocol (LT2P) / Layer 2 Forwarding (L2F) 1718 h323gatedisc H.323 telecommunication Gatekeeper Discovery 1719 h323gatestat H.323 telecommunication Gatekeeper Status 1720 h323hostcall H.323 telecommunication Host Call setup 1758 tftp-mcast Trivial FTP Multicast 1759/udp mtftp Multicast Trivial FTP (MTFTP) 1789 hello Hello router communication protocol 1812 radius Radius dial-up authentication and accounting services 1813 radius-acct Radius Accounting 1911 mtp Starlight Networks Multimedia Transport Protocol (MTP) 1985 hsrp Cisco Hot Standby Router Protocol 1986 licensedaemon Cisco License Management Daemon 1997 gdp-port Cisco Gateway Discovery Protocol (GDP) 2049 nfs [nfsd] Network File System (NFS) 2102 zephyr-srv Zephyr distributed messaging Server 2103 zephyr-clt Zephyr client 2104 zephyr-hm Zephyr host manager 2401 cvspserver Concurrent Versions System (CVS) client/server operations 2430/tcp venus Venus cache manager for Coda file system (codacon port) 2430/udp venus Venus cache manager for Coda file system (callback/wbc interface) 2431/tcp venus-se Venus Transmission Control Protocol (TCP) side effects 2431/udp venus-se Venus User Datagram Protocol (UDP) side effects 2432/udp codasrv Coda file system server port 2433/tcp codasrv-se Coda file system TCP side effects 2433/udp 
codasrv-se Coda file system UDP SFTP side effect 2600 hpstgmgr [zebrasrv] Zebra routing [b] 2601 discp-client [zebra] discp client; Zebra integrated shell 2602 discp-server [ripd] discp server; Routing Information Protocol daemon (ripd) 2603 servicemeter [ripngd] Service Meter; RIP daemon for IPv6 2604 nsc-ccs [ospfd] NSC CCS; Open Shortest Path First daemon (ospfd) 2605 nsc-posa NSC POSA; Border Gateway Protocol daemon (bgpd) 2606 netmon [ospf6d] Dell Netmon; OSPF for IPv6 daemon (ospf6d) 2809 corbaloc Common Object Request Broker Architecture (CORBA) naming service locator 3130 icpv2 Internet Cache Protocol version 2 (v2); used by Squid proxy caching server 3306 mysql MySQL database service 3346 trnsprntproxy Transparent proxy 4011 pxe Pre-execution Environment (PXE) service 4321 rwhois Remote Whois (rwhois) service 4444 krb524 Kerberos version 5 (v5) to version 4 (v4) ticket translator 5002 rfe Radio Free Ethernet (RFE) audio broadcasting system 5308 cfengine Configuration engine (Cfengine) 5999 cvsup [CVSup] CVSup file transfer and update tool 6000/tcp x11 [X] X Window System services 7000 afs3-fileserver Andrew File System (AFS) file server 7001 afs3-callback AFS port for callbacks to cache manager 7002 afs3-prserver AFS user and group database 7003 afs3-vlserver AFS volume location database 7004 afs3-kaserver AFS Kerberos authentication service 7005 afs3-volser AFS volume management server 7006 afs3-errors AFS error interpretation service 7007 afs3-bos AFS basic overseer process 7008 afs3-update AFS server-to-server updater 7009 afs3-rmtsys AFS remote cache manager service 9876 sd Session Director for IP multicast conferencing 10080 amanda Advanced Maryland Automatic Network Disk Archiver (Amanda) backup services 11371 pgpkeyserver Pretty Good Privacy (PGP) / GNU Privacy Guard (GPG) public keyserver 11720 h323callsigalt H.323 Call Signal Alternate 13720 bprd Veritas NetBackup Request Daemon (bprd) 13721 bpdbm Veritas NetBackup Database Manager (bpdbm) 13722 bpjava-msvc Veritas NetBackup Java / Microsoft Visual C++ (MSVC) protocol 13724 vnetd Veritas network utility 13782 bpcd Veritas NetBackup 13783 vopied Veritas VOPIE authentication daemon 22273 wnn6 [wnn4] Kana/Kanji conversion system [c] 26000 quake Quake (and related) multi-player game servers 26208 wnn6-ds Wnn6 Kana/Kanji server 33434 traceroute Traceroute network tracking tool [a] Comment from /etc/services : "Port 1236 is registered as `bvcontrol', but is also used by the Gracilis Packeten remote config server. The official name is listed as the primary name, with the unregistered name as an alias." [b] Comment from /etc/services : "Ports numbered 2600 through 2606 are used by the zebra package without being registered. The primary names are the registered names, and the unregistered names used by zebra are listed as aliases." [c] Comment from /etc/services : "This port is registered as wnn6, but also used under the unregistered name 'wnn4' by the FreeWnn package." Table C.4, "Datagram Deliver Protocol Ports" is a listing of ports related to the Datagram Delivery Protocol (DDP) used on AppleTalk networks. Table C.4. Datagram Deliver Protocol Ports Port # / Layer Name Comment 1/ddp rtmp Routing Table Management Protocol 2/ddp nbp Name Binding Protocol 4/ddp echo AppleTalk Echo Protocol 6/ddp zip Zone Information Protocol Table C.5, "Kerberos (Project Athena/MIT) Ports" is a listing of ports related to the Kerberos network authentication protocol. Where noted, v5 refers to the Kerberos version 5 protocol. 
Note that these ports are not registered with the IANA. Table C.5. Kerberos (Project Athena/MIT) Ports Port # / Layer Name Comment 751 kerberos_master Kerberos authentication 752 passwd_server Kerberos Password (kpasswd) server 754 krb5_prop Kerberos v5 slave propagation 760 krbupdate [kreg] Kerberos registration 1109 kpop Kerberos Post Office Protocol (KPOP) 2053 knetd Kerberos de-multiplexor 2105 eklogin Kerberos v5 encrypted remote login (rlogin) Table C.6, "Unregistered Ports" is a listing of unregistered ports that are used by services and protocols that may be installed on your Red Hat Enterprise Linux system, or that is necessary for communication between Red Hat Enterprise Linux and other operating systems. Table C.6. Unregistered Ports Port # / Layer Name Comment 15/tcp netstat Network Status (netstat) 98/tcp linuxconf Linuxconf Linux administration tool 106 poppassd Post Office Protocol password change daemon (POPPASSD) 465/tcp smtps Simple Mail Transfer Protocol over Secure Sockets Layer (SMTPS) 616/tcp gii Gated (routing daemon) Interactive Interface 808 omirr [omirrd] Online Mirror (Omirr) file mirroring services 871/tcp supfileserv Software Upgrade Protocol (SUP) server 901/tcp swat Samba Web Administration Tool (SWAT) 953 rndc Berkeley Internet Name Domain version 9 (BIND 9) remote configuration tool 1127/tcp supfiledbg Software Upgrade Protocol (SUP) debugging 1178/tcp skkserv Simple Kana to Kanji (SKK) Japanese input server 1313/tcp xtel French Minitel text information system 1529/tcp support [prmsd, gnatsd] GNATS bug tracking system 2003/tcp cfinger GNU finger 2150 ninstall Network Installation Service 2988 afbackup afbackup client-server backup system 3128/tcp squid Squid Web proxy cache 3455 prsvp RSVP port 5432 postgres PostgreSQL database 4557/tcp fax FAX transmission service (old service) 4559/tcp hylafax HylaFAX client-server protocol (new service) 5232 sgi-dgl SGI Distributed Graphics Library 5354 noclog NOCOL network operation center logging daemon (noclogd) 5355 hostmon NOCOL network operation center host monitoring 5680/tcp canna Canna Japanese character input interface 6010/tcp x11-ssh-offset Secure Shell (SSH) X11 forwarding offset 6667 ircd Internet Relay Chat daemon (ircd) 7100/tcp xfs X Font Server (XFS) 7666/tcp tircproxy Tircproxy IRC proxy service 8008 http-alt Hypertext Tranfer Protocol (HTTP) alternate 8080 webcache World Wide Web (WWW) caching service 8081 tproxy Transparent Proxy 9100/tcp jetdirect [laserjet, hplj] Hewlett-Packard (HP) JetDirect network printing service 9359 mandelspawn [mandelbrot] Parallel mandelbrot spawning program for the X Window System 10081 kamanda Amanda backup service over Kerberos 10082/tcp amandaidx Amanda index server 10083/tcp amidxtape Amanda tape server 20011 isdnlog Integrated Services Digital Network (ISDN) logging system 20012 vboxd ISDN voice box daemon (vboxd) 22305/tcp wnn4_Kr kWnn Korean input system 22289/tcp wnn4_Cn cWnn Chinese input system 22321/tcp wnn4_Tw tWnn Chinese input system (Taiwan) 24554 binkp Binkley TCP/IP Fidonet mailer daemon 27374 asp Address Search Protocol 60177 tfido Ifmail FidoNet compatible mailer service 60179 fido FidoNet electronic mail and news network
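Because the same mappings are kept in /etc/services, a quick way to cross-check an entry from these tables on a running system is sketched below; it assumes a standard /etc/services file and the glibc getent utility. The port and service name used here are only examples.

    # Show the service registered for a given port number in the local services file
    grep -w '631' /etc/services

    # Resolve a service name to its port and protocol through the name service switch
    getent services ipp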
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-ports
27.4. Defining Role-Based Access Controls
27.4. Defining Role-Based Access Controls Role-based access control grants a very different kind of authority to users compared to self-service and delegation access controls. Role-based access controls are fundamentally administrative, with the potential to add, delete, and significantly modify entries. There are three parts to role-based access controls: The permission. The permission defines a specific operation or set of operations (write, add, or delete) and the target entries within the IdM LDAP directory to which those operations apply. Permissions are building blocks; they can be assigned to multiple privileges as needed. The privileges available to a role. A privilege is essentially a group of permissions. Permissions are not applied directly to a role. Permissions are added to a privilege so that the privilege creates a coherent and complete picture of a set of access control rules. For example, a permission can be created to add, edit, and delete automount locations. Then that permission can be combined with another permission relating to managing FTP services, and they can be used to create a single privilege that relates to managing filesystems. The role. This is the list of IdM users who are able to perform the actions defined in the privileges. It is possible to create entirely new permissions, as well as to create new privileges based on existing permissions or new permissions. 27.4.1. Creating Roles 27.4.1.1. Creating Roles in the Web UI Open the IPA Server tab in the top menu, and select the Role Based Access Control subtab. Click the Add link at the top of the list of role-based ACIs. Enter the role name and a description. Click the Add and Edit button to save the new role and go to the configuration page. At the top of the Users tab, or in the User Groups tab when adding groups, click the Add link. Select the users on the left and use the >> button to move them to the assigned box. Open the Privileges tab in the role configuration page. Click the Add link at the top of the list of privileges to add a new privilege. Select the privileges on the left and use the >> button to move them to the assigned box. Click the Add button to save. 27.4.1.2. Creating Roles in the Command Line Add the new role: Add the required privileges to the role: Add the required groups to the role. In this case, we are adding only a single group, useradmins, which already exists. 27.4.2. Creating New Permissions 27.4.2.1. Creating New Permissions from the Web UI Open the IPA Server tab in the top menu, and select the Role Based Access Control subtab. Select the Permissions task link. Click the Add link at the top of the list of permissions. Enter the name of the new permission. Select the checkboxes next to the allowed operations for this permission. Select the method to use to identify the target entries from the Target drop-down menu. There are four different methods: Type looks for an entry type like user, host, or service and then provides a list of all possible attributes for that entry type. The attributes which will be accessible through this ACI are selected from the list. Filter uses an LDAP filter to identify which entries the permission applies to. Subtree targets every entry beneath the specified subtree entry. All attributes within the matching entries can be modified. Target group specifies a user group, and all the user entries within that group are available through the ACI. All attributes within the matching entries can be modified.
Fill in the required information to identify the target entries, depending on the selected type. For Filter, Subtree, and Target group targets, click the Add link to add attributes that are included in the permission. A single attribute is added at a time; to add multiple attributes, click the Add link again to add another field. If no attributes are set for the permission then, by default, all attributes are excluded. Click the Add button to save the permission. 27.4.2.2. Creating New Permissions from the Command Line A new permission is added using the permission-add command. All permissions require a list of attributes over which permission is granted ( --attrs ), a list of allowed actions ( --permissions ), and the target entries for the ACI. There are four methods to identify the target entries: --type looks for an entry type like user, host, or service and then provides a list of all possible attributes for that entry type. --filter uses an LDAP filter to identify which entries the permission applies to. --subtree targets every entry beneath the specified subtree entry. --targetgroup specifies a user group, and all the user entries within that group are available through the ACI. Example 27.1. Adding a Permission with a Filter A filter can be any valid LDAP filter. Note The permission-add command does not validate the given LDAP filter. Verify that the filter returns the expected results before configuring the permission. Example 27.2. Adding a Permission for a Subtree All a subtree filter requires is a DN within the directory. Since IdM uses a simplified, flat directory tree structure, this can be used to target some types of entries, like automount locations, which are containers or parent entries for other configuration. Example 27.3. Adding a Permission Based on Object Type There are seven object types that can be used to form a permission: user group host service hostgroup netgroup dnsrecord Each type has its own set of allowed attributes, in a comma-separated list. The attributes ( --attrs ) must exist and be allowed attributes for the given object type, or the permission operation fails with schema syntax errors. 27.4.3. Creating New Privileges 27.4.3.1. Creating New Privileges from the Web UI Open the IPA Server tab in the top menu, and select the Role Based Access Control subtab. Select the Privileges task link. Click the Add link at the top of the list of privileges. Enter the name and a description of the privilege. Click the Add and Edit button to go to the privilege configuration page to add permissions. Select the Permissions tab. Click the Add link at the top of the list of permissions to add permissions to the privilege. Click the checkbox by the names of the permissions to add, and click the right arrows button, >>, to move the permissions to the selection box. Click the Add button. 27.4.3.2. Creating New Privileges from the Command Line Privilege entries are created using the privilege-add command, and then permissions are added to the privilege group using the privilege-add-permission command. Create the privilege entry. Assign the desired permissions. For example:
[ "kinit admin ipa role-add --desc=\"User Administrator\" useradmin ------------------------ Added role \"useradmin\" ------------------------ Role name: useradmin Description: User Administrator", "ipa role-add-privilege --privileges=\"User Administrators\" useradmin Role name: useradmin Description: User Administrator Privileges: user administrators ---------------------------- Number of privileges added 1 ----------------------------", "ipa role-add-member --groups=useradmins useradmin Role name: useradmin Description: User Administrator Member groups: useradmins Privileges: user administrators ------------------------- Number of members added 1 -------------------------", "ipa permission-add \"manage Windows groups\" --filter=\"(!(objectclass=posixgroup))\" --permissions=write --attrs=description", "ipa permission-add \"manage automount locations\" --subtree=\"ldap://ldap.example.com:389/cn=automount,dc=example,dc=com\" --permissions=write --attrs=automountmapname,automountkey,automountInformation", "ipa permission-add \"manage service\" --permissions=all --type=service --attrs=krbprincipalkey,krbprincipalname,managedby", "ipa privilege-add \"managing filesystems\" --desc=\"for filesystems\"", "ipa privilege-add-permission \"managing filesystems\" --permissions=\"managing automount\",\"managing ftp services\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/defining-roles
Chapter 12. Etcd [operator.openshift.io/v1]
Chapter 12. Etcd [operator.openshift.io/v1] Description Etcd provides information to configure an operator to manage etcd. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 12.1.1. .spec Description Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 12.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. 
controlPlaneHardwareSpeed string ControlPlaneHardwareSpeed declares valid hardware speed tolerance levels generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 12.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 12.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 12.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 12.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 12.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 12.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 12.2. 
API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/etcds DELETE : delete collection of Etcd GET : list objects of kind Etcd POST : create an Etcd /apis/operator.openshift.io/v1/etcds/{name} DELETE : delete an Etcd GET : read the specified Etcd PATCH : partially update the specified Etcd PUT : replace the specified Etcd /apis/operator.openshift.io/v1/etcds/{name}/status GET : read status of the specified Etcd PATCH : partially update status of the specified Etcd PUT : replace status of the specified Etcd 12.2.1. /apis/operator.openshift.io/v1/etcds HTTP method DELETE Description delete collection of Etcd Table 12.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Etcd Table 12.2. HTTP responses HTTP code Reponse body 200 - OK EtcdList schema 401 - Unauthorized Empty HTTP method POST Description create an Etcd Table 12.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.4. Body parameters Parameter Type Description body Etcd schema Table 12.5. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 201 - Created Etcd schema 202 - Accepted Etcd schema 401 - Unauthorized Empty 12.2.2. /apis/operator.openshift.io/v1/etcds/{name} Table 12.6. Global path parameters Parameter Type Description name string name of the Etcd HTTP method DELETE Description delete an Etcd Table 12.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 12.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Etcd Table 12.9. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Etcd Table 12.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.11. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Etcd Table 12.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.13. Body parameters Parameter Type Description body Etcd schema Table 12.14. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 201 - Created Etcd schema 401 - Unauthorized Empty 12.2.3. /apis/operator.openshift.io/v1/etcds/{name}/status Table 12.15. Global path parameters Parameter Type Description name string name of the Etcd HTTP method GET Description read status of the specified Etcd Table 12.16. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Etcd Table 12.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.18. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Etcd Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body Etcd schema Table 12.21. HTTP responses HTTP code Reponse body 200 - OK Etcd schema 201 - Created Etcd schema 401 - Unauthorized Empty
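In practice these endpoints are usually exercised through the oc client rather than with raw HTTP calls. The commands below are a sketch, not part of the API reference; they assume a cluster-admin session and that the singleton Etcd resource is named cluster, which is the usual name on an OpenShift cluster but is stated here as an assumption.
# Inspect the Etcd operator resource, including its status conditions and nodeStatuses.
oc get etcd cluster -o yaml
# Raise the operand log level; valid values are Normal, Debug, Trace, and TraceAll.
oc patch etcd cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'
# Trigger a redeployment by recording a unique forceRedeploymentReason in the spec.
oc patch etcd cluster --type=merge -p '{"spec":{"forceRedeploymentReason":"manual-'"$(date +%s)"'"}}'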
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/etcd-operator-openshift-io-v1
Chapter 8. StorageVersionMigration [migration.k8s.io/v1alpha1]
Chapter 8. StorageVersionMigration [migration.k8s.io/v1alpha1] Description StorageVersionMigration represents a migration of stored data to the latest storage version. Type object 8.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the migration. status object Status of the migration. 8.1.1. .spec Description Specification of the migration. Type object Required resource Property Type Description continueToken string The token used in the list options to get the chunk of objects to migrate. When the .status.conditions indicates the migration is "Running", users can use this token to check the progress of the migration. resource object The resource that is being migrated. The migrator sends requests to the endpoint serving the resource. Immutable. 8.1.2. .spec.resource Description The resource that is being migrated. The migrator sends requests to the endpoint serving the resource. Immutable. Type object Property Type Description group string The name of the group. resource string The name of the resource. version string The name of the version. 8.1.3. .status Description Status of the migration. Type object Property Type Description conditions array The latest available observations of the migration's current state. conditions[] object Describes the state of a migration at a certain point. 8.1.4. .status.conditions Description The latest available observations of the migration's current state. Type array 8.1.5. .status.conditions[] Description Describes the state of a migration at a certain point. Type object Required status type Property Type Description lastUpdateTime string The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of the condition. 8.2. API endpoints The following API endpoints are available: /apis/migration.k8s.io/v1alpha1/storageversionmigrations DELETE : delete collection of StorageVersionMigration GET : list objects of kind StorageVersionMigration POST : create a StorageVersionMigration /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name} DELETE : delete a StorageVersionMigration GET : read the specified StorageVersionMigration PATCH : partially update the specified StorageVersionMigration PUT : replace the specified StorageVersionMigration /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name}/status GET : read status of the specified StorageVersionMigration PATCH : partially update status of the specified StorageVersionMigration PUT : replace status of the specified StorageVersionMigration 8.2.1. /apis/migration.k8s.io/v1alpha1/storageversionmigrations Table 8.1. 
Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of StorageVersionMigration Table 8.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind StorageVersionMigration Table 8.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 8.5. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigrationList schema 401 - Unauthorized Empty HTTP method POST Description create a StorageVersionMigration Table 8.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.7. 
Body parameters Parameter Type Description body StorageVersionMigration schema Table 8.8. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 202 - Accepted StorageVersionMigration schema 401 - Unauthorized Empty 8.2.2. /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name} Table 8.9. Global path parameters Parameter Type Description name string name of the StorageVersionMigration Table 8.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a StorageVersionMigration Table 8.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 8.12. Body parameters Parameter Type Description body DeleteOptions schema Table 8.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified StorageVersionMigration Table 8.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.15. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StorageVersionMigration Table 8.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.17. Body parameters Parameter Type Description body Patch schema Table 8.18. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StorageVersionMigration Table 8.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.20. Body parameters Parameter Type Description body StorageVersionMigration schema Table 8.21. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 401 - Unauthorized Empty 8.2.3. /apis/migration.k8s.io/v1alpha1/storageversionmigrations/{name}/status Table 8.22. 
Global path parameters Parameter Type Description name string name of the StorageVersionMigration Table 8.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified StorageVersionMigration Table 8.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 8.25. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified StorageVersionMigration Table 8.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.27. Body parameters Parameter Type Description body Patch schema Table 8.28. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified StorageVersionMigration Table 8.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 8.30. Body parameters Parameter Type Description body StorageVersionMigration schema Table 8.31. HTTP responses HTTP code Reponse body 200 - OK StorageVersionMigration schema 201 - Created StorageVersionMigration schema 401 - Unauthorized Empty
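Because the resource is cluster scoped, a migration is normally created from a small manifest and then followed through its status conditions. The manifest below is an illustrative sketch, not part of the API reference; the object name and the target resource (core-group secrets at version v1) are assumptions chosen only to show the shape of spec.resource.
cat <<'EOF' | oc create -f -
apiVersion: migration.k8s.io/v1alpha1
kind: StorageVersionMigration
metadata:
  name: secrets-migration
spec:
  resource:
    group: ""
    version: v1
    resource: secrets
EOF
# Follow the migration through .status.conditions as the migrator works through the stored objects.
oc get storageversionmigrations secrets-migration -o yaml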
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/storage_apis/storageversionmigration-migration-k8s-io-v1alpha1
function::gettimeofday_ms
function::gettimeofday_ms Name function::gettimeofday_ms - Number of milliseconds since UNIX epoch Synopsis Arguments None Description This function returns the number of milliseconds since the UNIX epoch.
[ "gettimeofday_ms:long()" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-gettimeofday-ms